So, You Want to Build a Decision-Support Tool? Assessing Successes, Barriers, and Lessons Learned for Tool Design and Development

Scientific Investigations Report 2023-5076

Acknowledgments

This research was funded by the U.S. Geological Survey (USGS) Community for Data Integration. Thank you to Mike Duniway, Stephen Fick, Leslie Hsu, Karen Jenni, J.C. Nelson, Emily Read, and Nate Wood for providing input to the research design and to Paul Exter, Joseph Nielsen, and Jennifer Rapp for providing feedback on earlier versions of this report. The authors are grateful to all the USGS employees who participated in data collection for this project and to Michael Blanpied, Kara Doran, Kurt Kowalski, Alice Pennaz, and John Wolf for contributing the example tools that appear in the text. Thank you to our collaborators who worked with us on previous decision-support development and evaluation projects for shaping the ideas that appear in this guide, especially the members of the USGS Coastal and Marine Hazards and Resources Program’s State of our Nation’s Coast project team.

Abstract

The purpose of this study is to increase understanding of how the U.S. Geological Survey (USGS) is developing decision-support tools (DSTs) by documenting successes and barriers across all levels of USGS scientific tool creation and outreach. These findings can help streamline future tool design and development processes. We provide a synthesis of lessons learned and best practices across a spectrum of USGS decision-support efforts to (a) provide guidance to future efforts and (b) identify knowledge gaps and opportunities for knowledge transfer and integration. We present this information as five guiding principles for those striving to create effective DSTs. These principles are: (1) use an adaptive, iterative design process, (2) collaborate across disciplines and organizations, (3) engage with the target users of the tool, (4) develop an empirical understanding of use and usability, and (5) plan for the tool’s full life span. By providing guidance on how effective DSTs are realized at every phase of development (from planning to maintenance), these principles provide a starting point to improve the process of designing DSTs and thus help further the USGS mission of delivering actionable science.

Introduction

The U.S. Geological Survey (USGS) is the Nation's largest water, earth, and biological science and civilian mapping agency. Unlike many other Federal Government agencies, the USGS has no land management mandate and thus, outside of the role of Delaware River Master, has no decision-making authority. Rather, the USGS mission is to deliver actionable information at scales and timeframes relevant to decision makers at all levels, from individual landowners to Federal agencies. The core functions that the Bureau performs to accomplish its mission and support stakeholders include long-term data collection and monitoring, conducting research and assessments, and developing tools and applications. The USGS provides decision support through a range of products; in this report, we focus on decision-support tools (DSTs).

USGS-developed tools and applications take a range of forms—from simple web portals that provide access to real-time data, to map-based interfaces that display information from many datasets, to applications that allow users to adjust parameters via “knobs and levers” to explore a range of possible outcomes associated with specific decisions. Web-based applications or “web-tools” are an increasingly popular way to provide dynamic visualizations, analyses, and access to USGS data products. Across the USGS, tools and applications are proliferating, ranging from quick “off-the-shelf” data mock-ups (like R Shiny apps and other dashboards) to professional deployments built after years of investment (Chang and others, 2016). Regardless of format, these tools and applications share the underlying objective of providing actionable information to those who need it in order to make important decisions to prepare for or respond to natural hazards and to facilitate the management of biological, hydrological, energy, or mineral resources. The people to whom USGS provides information through these tools and applications similarly range widely, from policy makers to natural resource managers, to scientists, to recreationists, and many others. Thus, designing, developing, and maintaining tools and applications to provide actionable science to decision makers and other users is a core USGS function that crosses all levels of the organization (including Centers, Regions, and Mission Areas).
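To make the “off-the-shelf” end of this spectrum concrete, the short sketch below shows what a quick R Shiny data mock-up of the kind mentioned above might look like. It is an illustrative example only, not an actual USGS product: the streamflow data frame, its column names, and the discharge values are hypothetical placeholders, and a real application would read published USGS data rather than simulated numbers.

# Minimal sketch of an "off-the-shelf" R Shiny mock-up; the data frame and
# column names here are hypothetical placeholders, not a USGS dataset.
library(shiny)
library(ggplot2)

# Hypothetical example data: a year of daily discharge at a single gage
streamflow <- data.frame(
  date = seq(as.Date("2023-01-01"), by = "day", length.out = 365),
  discharge_cfs = round(runif(365, min = 50, max = 500))
)

ui <- fluidPage(
  titlePanel("Example streamflow dashboard (illustrative only)"),
  sidebarLayout(
    sidebarPanel(
      # A simple "knob": let the user pick the date window to display
      dateRangeInput("window", "Date range",
                     start = min(streamflow$date), end = max(streamflow$date))
    ),
    mainPanel(plotOutput("hydrograph"))
  )
)

server <- function(input, output) {
  output$hydrograph <- renderPlot({
    # Subset the data to the user-selected window and plot it
    shown <- subset(streamflow,
                    date >= input$window[1] & date <= input$window[2])
    ggplot(shown, aes(x = date, y = discharge_cfs)) +
      geom_line() +
      labs(x = "Date", y = "Discharge (cubic feet per second)")
  })
}

shinyApp(ui, server)

Even a mock-up this small illustrates the basic pattern behind many dashboard-style tools: data, a user control, and a visualization that responds to that control.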

Decision support is a core piece of the USGS mission, and the number of tools and the number of scientists creating tools appear to be growing both within the Bureau and in the wider scientific community (Palutikof and others, 2019; Wong-Parodi and others, 2020; Keenan, 2021). This increasing focus on DSTs has several causes. Perhaps most important in recent years is the growing interest in making science—particularly Federal science—actionable for decision makers (Arnott and others, 2020; U.S. Geological Survey, 2021). Decision-support tools are seen as a logical and tangible way to make information actionable (Wong-Parodi and others, 2020). Interactive DSTs in particular hold great promise to provide tailored support to decision makers, making information available at the temporal or spatial scale needed for a particular decision. For example, DSTs can enable decision makers to quickly check the water quality of a body of water, facilitating the daily decisions needed to protect ecosystems. The USGS’s Grand Canyon Monitoring and Research Center’s “Discharge, Sediment, and Water Quality Monitoring” web application1 provides the specific information on water quality and sediment loads that Federal decision makers need to support the Glen Canyon Dam Adaptive Management Framework.

Although the literature on DST development offers guidance, principles, and recommendations on how to develop DSTs within scientific organizations (for example: Loucks, 1995; Newman and others, 2000; Fleisher and others, 2014; Barnhart and others, 2018), the unique barriers and opportunities encountered by USGS employees had yet to be documented. Our research investigated experiences in initiating, designing, and implementing decision-support projects within USGS through both a survey and interviews of USGS employees.2 The DSTs developed by USGS employees across the Bureau include many compelling examples of successful tools. A small number of these are featured as example tools throughout this report. The information provided about each example tool was codeveloped with one of the USGS employees who worked to create the tool, and a quotation from that individual is included. Our investigations compiled lessons learned from successes and identified barriers that teams encountered. By analyzing how USGS scientists and technology professionals described their experiences and comparing those experiences with best practices as described in the wider literature, we offer five principles of successful DST design and development to consider before diving into DST creation. The synthesis of these lessons learned across the spectrum of USGS tools and applications is intended to inform future DST efforts. Specifically, this report is intended to be a resource for project managers, Bureau decision makers, programmers, and researchers considering building a DST.

2

Data either are not available or have limited availability owing to restrictions (privacy concern). Contact Amanda Cravens at aecravens@usgs.gov for more information.

Background

Advances in technology that facilitate near-real-time data collection and collaborative information sharing, coupled with computational advances such as machine learning and cloud computing services, have made it easier to develop sophisticated tools and have fostered a growing demand for such tools (Wong-Parodi and others, 2020; Heavin and Adam, 2022). This demand will likely continue to grow as the impacts of climate change are increasingly felt, as natural hazards increase in frequency and intensity, and as interest grows in applications that incorporate projections and forecasts (National Academies of Sciences, Engineering, and Medicine, 2018).

The wider scientific literature examining the effectiveness of decision support testifies to how well-designed DSTs aid decision making (for example: Loucks, 1995; Newman and others, 2000; Shim and others, 2002; Uran and Janssen, 2003; Cravens and Ardoin, 2016; Grêt-Regamey and others, 2017; Barnhart and others, 2018). In one detailed examination of an interactive geospatial tool called MarineMap,3 used to site marine protected areas along the coast of California, Cravens (2016) found that the DST shaped how users solved problems, negotiated trade-offs, and made mutually acceptable decisions in a collaborative setting. In turn, the DST users were able to identify specific ways that the tool used in the California case performed these decision-support functions, including by creating a common language among participants, helping users understand the geography and scientific criteria in play during the process, aiding stakeholders in identifying shared or diverging interests, and facilitating joint problem solving.

3

MarineMap no longer exists in the form that was used during the California Marine Life Protection Act (MLPA) Initiative planning process. The experience of developing and implementing MarineMap influenced subsequent projects by the development team, primarily a tool called SeaSketch (https://www.seasketch.org/).

Wong-Parodi and others (2020) investigated another coastal case, the Louisiana Coastal Master Plan Process and Planning Tool.4 The authors analyzed how the tool’s characteristics supported “high quality” decision making, noting that, “stakeholders must be able to understand the costs, benefits, and uncertainties of decision alternatives well enough to afford them the ability to make decisions that are in accordance with their values, beliefs and contexts” (Wong-Parodi and others, 2020, p. 54). They conclude that DSTs are effective when they support the full process of making decisions, from defining goals to identifying and evaluating alternatives to monitoring outcomes.

Yet this same body of research on the effectiveness of decision support also offers caution: realizing the potential of DSTs is difficult. In too many cases, well-intentioned ideas, time, money, and enthusiasm do not result in effective tools (Moser, 2009; van der Molen and others, 2018; Stoltz and others, 2023). Documented barriers include the following:

  • Insufficient resources of the development team (Dale and English, 1999).

  • Institutional constraints on the development process (Pearman and Cravens, 2022).

  • Limited resources of users to vet the credibility of new tools; high transaction costs for users to switch from a known information source to a new tool; and difficulties keeping track of and differentiating between tools with similar purposes (Cravens, 2018).

Usability can also be a significant barrier to the success of DSTs. Specific usability challenges for scientific information products include mismatches between the type or scale of information provided and the needs of users (for example: Wong-Parodi and others, 2020; Cravens, 2016; Cravens, 2018; Dilling and Lemos, 2011), users’ perceptions of the usefulness and trustworthiness of the information or product (for example: Cash and others, 2006; Dunn and Laing, 2017; Jacobs and Buizer, 2016; White and others, 2010), and users’ capacities to interpret and incorporate the information into decision making (for example: Dilling and Lemos, 2011; van der Molen and others, 2018). Information products may not adequately account for how users understand decision contexts; for example, Cravens (2018) described a manager who thought of their region (the Southern Rockies) in terrestrial as opposed to watershed terms and thus saw no relevance in a drought early warning system focused on the “Upper Colorado River” despite the overlapping geographies. More broadly, as technology becomes more sophisticated, user expectations of what applications should look and behave like also change, influencing expectations of government or scientific tools (Cravens, 2016). These persistent barriers have led to repeated calls for greater attention to evaluating the effectiveness of DSTs and learning from past successes and failures (for example: Moser, 2009; Cravens, 2016; Wong-Parodi and others, 2020).

Designing, developing, and maintaining tools and applications to provide actionable science to decision makers involves collaboration among (at minimum) subject matter experts, technology professionals, and the intended user community. For USGS’s DSTs, projects are often initiated based on USGS research, and USGS data become the foundation of the DSTs. In terms of technology, many USGS researchers are familiar with specific programming languages as part of their research. However, software development is a broader professional field that requires knowledge and skills in addition to programming. For example, a full software development cycle might include the following stages: project planning, requirement definitions, design, prototyping, quality assurance, implementation, and maintenance (Ruparelia, 2010). Another aspect of software development is project management, which is often a dedicated role in larger projects or organizations. A typical project manager would have expertise in the areas of managing project scope, timeline, cost, quality, outreach, risks, and human resources (Project Management Institute, 2021).

A body of interdisciplinary research and professional expertise called human-computer interaction addresses the challenge of ensuring software programs like DSTs work effectively for their intended users and emphasizes the importance of considering the lived experience of the people who will use a tool (Rigby and Priest, 2023). Many would date the beginnings of human-computer interaction to 1983, when computer scientist Austin Henderson and anthropology Ph.D. student Lucy Suchman put a video camera in the copy room at the Xerox Palo Alto Research Center and analyzed how the scientists at the research laboratory interacted with the early Xerox copier (PARC, 2016). Suchman’s ethnography was pivotal in demonstrating that people rarely interact with technology in ways that designers expect (or hope) they will, and thus, it can be disadvantageous to make assumptions about what users will do without testing or observing “in the wild” (PARC, 2011). Human-computer interaction research and the associated profession are now commonly understood to form a sub-field of the overall “usability” discipline. By combining the findings contributed by a diverse set of disciplines, including engineering, cognitive science, and design, the “usability” discipline and its professionals aim to create human-computing experiences that are as efficient and effective as possible (Rubinstein and Hersh, 1984; Gould and Lewis, 1985; Nielsen, 1994). Within modern corporate software design teams, usability expert and design researcher positions have become common, illustrating the value that the corporate sector finds in understanding the experiences, motivations, and behavior of application users. Within the Federal Government, the value of this expertise is increasingly recognized, and initiatives (such as Usability.gov) that focus on building capacity are increasingly common. However, the number of people with usability and empirical human-computer interaction expertise within USGS remains small.

Many software design frameworks emphasize the importance of iteratively developing an application in response to feedback and usability testing results from the target users. In general, a human-centered design (HCD) approach can be used to promote mindsets and techniques that help a design team approach problem solving in a way that places central focus on human experiences. In particular, user-centered design (UCD) is a human-focused technology design methodology, “in which designers focus on the users and their needs in each phase of the design process” (Interaction Design Foundation, undated). The UCD process is generally depicted as a series of iterative stages by which a team develops an increasing understanding of users’ needs and a finer match of a tool’s capacities with those needs. Figure 1 shows one common depiction of these stages.

Figure 1. Diagram showing the user-centered design process. User-centered design is an iterative process that focuses on understanding users and their context in all stages of design and development; the flow chart shows its four key aspects: understanding the context of use, specifying user requirements, producing design solutions, and evaluating designs against requirements. Modified from Interaction Design Foundation (undated).

While pilot projects at the USGS have adopted more human and user-centered approaches in recent years (see Box 1), this iterative approach is not yet widespread nor familiar across the Bureau.

Research Objectives and Methods

This project grew from the observations of the author team, each of whom played a role in supporting stakeholder engagement, decision-support design and development, or tool usability and evaluation for one or more USGS Mission Areas. In our respective roles, we realized that decision-support products of various types are increasingly common outputs across USGS, but that scientists and technology professionals working in different parts of the Bureau did not necessarily have access to lessons learned from the design and development process in other parts of the USGS community. This interest in capturing lessons learned and insights about best practices led to the USGS Community for Data Integration funding the current research.

The purpose of this study is to understand USGS experiences to help improve the DST design and development process. This report addresses four research questions:

  • What can be learned from USGS employees’ experiences with DST development?

  • What principles would be beneficial for USGS tool developers to consider before building a DST?

  • What barriers to success do USGS tool developers experience?

  • What resources do tool designers believe would improve the design process?

The study was conducted in three steps described in the following sections.

Step 1: Literature Review

We reviewed the literature to identify papers describing best practices for DST design, development, and evaluation. Papers addressing the definition and functions of decision support were also considered. The goal of this literature review was to understand the state of the scientific literature regarding the process of building successful DSTs. The literature review was conducted in Google Scholar and Web of Science using the search terms “decision support,” “decision support tool,” “decision support product,” “decision support system,” “usability,” “user-centered design,” “human-centered design,” and “stakeholder engagement” both individually and in combination. We selected these terms because they are inclusive of the varied ways that DSTs are discussed within human-computer interaction and other scientific literatures that address them. Criteria for papers reviewed included whether they focused on one of the search terms or included a case study of a decision-support design process. Twenty-two papers were reviewed. The information collected in this literature review guided the design of both the survey and interview protocol used in this research and informed the principles that we present throughout this report.

Step 2: Web-Based Survey of USGS Employees

We designed a survey including a variety of open-ended and multiple-choice questions to understand how USGS scientists and developers create tools. The survey was distributed on November 30, 2020, in the weekly all-employee USGS newsletter “NeedToKnow” to maximize awareness of the survey and advertise the opportunity to participate. All USGS employees who had been involved in initiating, designing, or implementing decision-support projects were encouraged to complete the survey. The survey (appendix 1) was administered in Qualtrics5 and was open for four months,6 during which 54 responses were collected across USGS Mission Areas (table 1) and Regions (table 2). Survey questions were informed by the literature review as well as our past experiences supporting USGS employees to develop DSTs.

6

The survey administration fell over the winter holidays when many employees were out on leave, so the survey was left open over the entire time before and after the holidays.

Table 1.    U.S. Geological Survey Mission Areas of survey respondents.

[USGS, U.S. Geological Survey; %, percent]

USGS mission areas of survey respondents % Count
Core Science Systems 17 9
Ecosystems 33 18
Energy and Minerals 0 0
Natural Hazards 15 8
Water Resources 35 19
Total 100 54

Table 2.    U.S. Geological Survey Regions of survey respondents.

[USGS, U.S. Geological Survey; %, percentage]

USGS regions of survey respondents % Count
Headquarters 20 11
Southeast 9 5
Midcontinent 20 11
Rocky Mountain 7 4
Southwest 17 9
Northwest - Pacific Islands 17 9
Alaska 4 2
Northeast 6 3
Total 100 54

We designed the survey to collect general information about DST creation experiences throughout USGS as well as to collect information that would guide the development of the interview protocol. Topics covered in the survey questions included respondents’ definition of DSTs, their role in creating DSTs, details about past DST(s) created, and barriers to tool development (see appendix 1 for details). The survey also asked respondents to name up to five DSTs that they had been involved in creating. A total of 78 DSTs were named that spanned many different formats. These tools are listed in appendix 2. Survey respondents were not required to provide identifiable information within the survey but were given the option of providing their contact information if they were interested in participating in an interview to further discuss their experiences with DST development.

Step 3: Interviews of USGS Employees

We developed our interview protocol based on both the literature review and information gleaned from the survey responses. We conducted two pilot interviews in January 2021 that resulted in improvements to the original interview protocol, such as including additional questions like, “What resources would have helped you in the DST design process?” and a reordering of questions for better interview flow. Appendix 3 shows the final interview protocol.

We sent requests for interviews via email to the survey respondents who indicated their interest in being interviewed in the web-based survey. We conducted interviews May through December 2021. At the end of each interview, interviewees were asked whether they knew of other USGS employees with DST experience who the research team could interview (called “snowball sampling”). All 25 USGS employees who agreed to the interview requests were able to participate in an interview. The interviews were conducted virtually using Microsoft Teams and designed to take no more than one hour each. The interviewees spanned four USGS Mission Areas (table 3) and came from a range of Regions (table 4).

Table 3.    U.S. Geological Survey Mission Areas of interviewees.

[USGS, U.S. Geological Survey; %, percent]

USGS Mission Areas of interviewees % Count
Core Science Systems 16 4
Ecosystems 16 4
Energy and Minerals 0 0
Natural Hazards 28 7
Water Resources 40 10
Total 100 25

Table 4.    U.S. Geological Survey Regions of interviewees.

[USGS, U.S. Geological Survey; %, percentage]

USGS Regions of interviewees % Count
Midcontinent 32 8
Southeast 20 5
Southwest 16 4
Headquarters 16 4
Northeast 8 2
Rocky Mountain 4 1
Northwest—Pacific Islands 4 1
Alaska 0 0
Total 100 25

The 25 interviews were recorded on Microsoft Teams and professionally transcribed. The transcriptions were then uploaded to NVivo,7 a qualitative data analysis software. Qualitative data analysis (“coding”) was completed by one member of the author team. In this analysis method, the researcher codes individual sentences or sections of text with descriptive labels that allow for the rigorous identification of related content and themes across the data (Saldaña, 2016). We used a list of predefined codes based on the survey data and the interview protocol, a method known as deductive, concept-driven, or thematic coding (Guest, MacQueen, and Namey, 2012). Some example codes were “Resources Needed,” “Barriers,” and “Hindsight.” The full codebook appears in appendix 4.

7

NVivo, release 1.3, a product of Lumivero.

Decision Support and Decision-Support Products at USGS

How Do USGS Employees Define Decision Support?

The range of tools and applications created by USGS scientists and developers with the goal of delivering information to support decisions may broadly be referred to as “decision-support tools,” “decision-support systems,” or “decision-support products.” Within the larger scientific literature, DSTs were classically defined narrowly as software systems that assist in solving unstructured problems for which there is not a mathematically optimal solution (Geoffrion, 1983). In 2009, the National Academies surveyed the landscape of decision support for climate change science, offered a definition of decision support as “organized efforts to produce, disseminate, and facilitate the use of data and information in order to improve the quality and efficacy of climate-related decisions” (National Research Council [NRC], 2009, p. 2), and highlighted the improved efficacy of decision support when scientists and decision makers engage in mutual learning. Similarly, Wong-Parodi and others (2020, p. 52) define DSTs as “the array of computer-based tools developed to assist sound decision making, including the management of environmental risks and planning for impacts in different sectors and regions.” Within the USGS, the definition of “decision support” and the understanding of what constitutes a “tool” vary across the Bureau, with programs, projects, and scientists using these terms in subtly but importantly different ways. Recognition of these differences was one of the motivations for this study.

Key to understanding the experiences of USGS employees who develop DSTs was understanding their definition of DSTs. Therefore, one of the first questions asked in both the survey and the interviews was how individual scientists and developers define DSTs. “Decision science,” “decision-support tools,” “decision-support frameworks,” “decision-support systems,” and even “decision-support system tools” are all terms used within USGS to describe data or applications that help a stakeholder make a decision. In the survey and interview protocols, for the sake of simplicity, we used the term “decision-support tool” (DST).

In the decision-support literature, definitions of DSTs range from narrow, to broad, to expansive. Some, like Fleisher and others (2014), limit their definition only to interactive software applications that structure a decision-making process. Others broaden the definition to include all software applications (Wong-Parodi and others, 2020). Others further expand the definition to a wide range of activities that support decision making (NRC, 2009). We provided three definitions based on these distinctions in the literature as response choices in the survey, then asked respondents to choose among them or to select “other” and fill in their own definition. Eighteen percent of our respondents defined DSTs as interactive software applications that structure a decision-making process (for example, modeling tools that allow users to explore different scenarios), 14 percent defined DSTs as any software application that supports user decision making (for example, data portals and modeling tools), 62 percent defined DSTs as any activity (software or other) that provides data or other types of information products that support user decision making (like decision-maker outreach, infographics, data portals, and modeling tools), and 6 percent chose “other” and provided alternative definitions. When these answers were compared across Mission Areas, most respondents (62 percent) chose the broadest definition of DSTs, a pattern that held in every Mission Area except Core Science Systems (fig. 2).

Figure 2. Survey respondents’ definitions of “decision-support tool” by U.S. Geological Survey Mission Area.

In the interviews, we asked USGS employees to define decision support and some of their responses revealed an even broader definition. As one interviewee asserted:

I think decision support, lowercase, has been the main reason I'm here. If I didn't want to do decision support, I wouldn't be at the USGS because I would just get [National Science Foundation] funding and be an academic. I personally feel everything we [as a Bureau] do is decision support, or at least should be. (Interviewee 3)

This quote exemplifies how some USGS employees have adopted an even broader definition of DSTs than those that appear in the literature (which informed the multiple-choice options provided to them in the survey). This broad definition can include providing raw data and information to the public without an explicit focus on who will use it and how. This definition may come from the fact that USGS is not a decision-making Bureau, but rather a Bureau with a mission to provide quality science, information, and data that can be used by cooperators and other stakeholders to support decisions. It follows that some interviewees view this distinction as evidence that everything USGS does is decision support. In essence, this broad (and, to our knowledge, unique to USGS employees) definition suggests that the activity of decision support is analogous to the USGS mission of making scientific information accessible. This sometimes results in a mentality that USGS employees could simply share data with those who need it rather than thinking about ways that they could enable and empower decision makers to use the data more effectively. As one interviewee explained:

If someone still decides to build in a tsunami zone because of many other preexisting economic and social reasons, and political reasons, that's their call. It's not my job to say you can't build in a tsunami zone * * *. As long as they're letting us be at the table, and they say, “I'm glad you all were here for the conversation.” That's success. (Interviewee 3)

Alternatively, this belief that all USGS science is decision support can also lead to the creation of DSTs where a DST may not be necessary, and the decision maker simply needs streamlined data access. As an interviewee explained:

There has been more than one example of building a tool that was not used because there was a misunderstanding, or we didn’t go the whole nine yards in terms of understanding the use case before creating something. (Interviewee 10)

Understanding when to build a DST is not always easy, and the desire to bring greater clarity to this question was one of the driving forces behind this research and the principles presented in this report.

Barriers to Successful Decision-Support Tool Development

The survey allowed respondents to answer the same group of questions for up to five different tools that they were involved in creating. One of these questions was, “What barriers did you and (or) the team face in building the DST?” Respondents were given eight barrier options to choose from, including “barriers in another category” and “did not experience any barriers” (appendix 1), and could select multiple options for each tool. This question allowed us to gather information on the challenges USGS employees face when building a DST so that we could both report on these challenges and offer potential solutions.

Survey responses revealed a multitude of barriers faced by DST designers within USGS (table 5). Notably, only one respondent indicated that they did not experience any barriers when building DSTs. The majority of respondents (98 percent) faced obstacles related to Department of the Interior (DOI) or USGS policies, funding, software, stakeholder engagement, tool usage, or staffing. Sixteen percent of responses fell into “barriers in another category”; for these, respondents described specific barriers they or their team had faced, including limited advertising and outreach, lack of tool maintenance, changes in operational requirements, no access to affordable programmers, lengthy and challenging hiring processes, lack of stakeholder engagement experience, difficulties meeting too many varied and opposing stakeholder needs in a single tool, difficulty tracking usage, and long run times, especially for tools with large datasets.

Table 5.    Barriers to building decision-support tools.

[IT, information technology; DOI, Department of the Interior; USGS, U.S. Geological Survey; %, percent; n.a., not applicable]

Barriers to building decision-support tools % of respondents Total responses across all respondents
Barriers related to funding (for example, funding availability, contracts) 52 28
Barriers related to software (for example, approval of platform, cloud hosting) 48 26
Barriers related to stakeholder interactions (for example, understanding needs) 39 21
Barriers for tool usage (for example, low usage or not user-friendly) 33 18
Barriers related to staffing (for example, lack of IT capacity, shortage of usability experts) 30 16
Barriers related to DOI or USGS policies (for example, Paperwork Reduction Act, Fundamental Science Practices) 13 7
Did not experience any barriers when building this tool 11 6
Barriers in another category1 43 23
Total n.a. 145
1

Sixteen percent of responses were in the “barriers in another category” option; respondents described specific barriers they or their team had faced, including limited advertising and outreach, lack of tool maintenance, changes in operational requirements, no access to affordable programmers, lengthy and challenging hiring processes, lack of stakeholder engagement experience, difficulties meeting too many varied and opposing stakeholder needs in a single tool, difficulty tracking usage, and long run times, especially for tools with large datasets.

We asked our interviewees about the barriers they faced in building the DSTs, how these barriers were or were not overcome, and if there were any resources that would have helped them in the process (appendix 3). One survey respondent, who agreed to be interviewed, explained how multiple interacting barriers kept their tool from being successful:

While substantial financial and technical resources were invested in building [the tool], there was never adequate * * * internal commitment behind consistently promoting it, ensuring timely development, [or] having the USGS set an example by meaningfully populating it with our own data. As a consequence, we built a great tool that was never used or adopted by the community. (Interviewee 18)

This quote demonstrates the range of barriers across development, project management, and deployment that many research participants encountered and that the USGS could address.

Successful Decision-Support Tool Development at USGS

Before we asked the research participants whether they had participated in creating tools that were successful, we asked them to share their definition of success. Several interviewees pointed to use of the DST as an indicator of success. Others, however, stated that use must be accompanied by informing the decisions the tool was developed to support. These interviewees further pointed out that this kind of success could come in the form of use by a small group of decision makers rather than by a large number of total users. This is similar to the findings of Pearman and Cravens (2022), in which drought tool creators defined success to include producing tools that met defined needs and were used. Other interviewees felt that success could be measured by widespread use and adoption of a DST beyond the initial group of decision makers or cooperators that funded the tool’s development. A growing audience over time, too, was an indication of success for these interviewees.

As is clear in these differing definitions of success, DST success can be measured in several ways, which was also stated by a handful of interviewees who maintained that success is different for every project and varies depending on the perspectives of the developer and the user (and, presumably, the funder). Some interviewees also pointed to “ease of use” of the tool as a metric of success. One interviewee stated that customer satisfaction was a way of identifying success, saying, “The fact that people keep coming back to us and asking us to [make another DST]. That’s really our measure of success. People like what we make, and they come back and ask for more” (Interviewee 15).

Although barriers to DST development were consistently identified and described by both survey respondents and interviewees, many interviewees explained ways that these barriers were avoided or overcome, leading to successful DSTs with high numbers of users, media attention, and long-term support. The types and frequencies of barriers identified by all survey respondents (table 5) were very similar to those experienced in developing tools that are still in use today. However, all six of the tools for which no development barriers were reported are still in use today, either in their original or a modified form. This finding points to the importance of adequate funding, staffing, software, and stakeholder engagement in building a successful tool with longevity.

Throughout the interviews, interviewees provided examples of successful DSTs they have been a part of and how they knew that the tool had become a success. Most examples related to media coverage of a tool, examples of users incorporating the tool into their decision-making process, or an uptick in the number of users and use by unexpected groups over time. While media coverage and direct relationships with decision makers are easy ways to tell if a tool is successful, many interviewees described the roundabout ways they learn about a tool’s success and lamented the lack of quicker and more reliable avenues for obtaining this information. One interviewee described the Paperwork Reduction Act (PRA)8 as a barrier to soliciting feedback from users:

8

The Paperwork Reduction Act is a law governing how federal agencies collect information from the public. For more information see: https://pra.digital.gov/.

It would be great to send out a survey to 50 different people within [the geographic area] and say, “What do you think of our tool? Do you use it? Give us good examples of how it’s changed your world.” I can’t easily do that without a 6-month process. (Interviewee 4).

Instead of soliciting direct feedback from users, interviewees described receiving feedback from users via emails that let them know in which ways the DST was succeeding and which areas needed improvement; Pearman and Cravens (2022) found the same widespread reliance on informal methods of feedback in their study of the experiences of drought tool creators across State and Federal agencies. According to the interviewees, when a DST is successful it can make complicated science easier to use by ensuring robust and repeatable outcomes to inform decisions, thereby saving time and money, improving decision making, and making the science more understandable.

Five Key Principles of Designing Effective Decision Support

The information collected from the surveys, interviews, and the literature on DST design and development was analyzed to identify best practices that apply both at USGS and more broadly. We present this information about best practices and lessons learned from successful tools as five main principles that contribute to effective DST development. These principles are: (1) use an adaptive, iterative design process, (2) collaborate across disciplines and organizations, (3) engage with the target users of the tool, (4) develop an empirical understanding of use and usability, and (5) plan for the tool’s full life span. Considering these principles before and during tool design and development supports the USGS mission of delivering actionable science by providing guidance on how effective DSTs can be designed, as well as important considerations for USGS DST developers before they begin creating a DST.

Principle 1. Use an Adaptive, Iterative Design Process

The first principle is about giving equal consideration to the process by which tools are created rather than focusing exclusively on the intended products or information outputs. This principle includes the following best practices:

  • Allow space for adaptation and learning

  • Work iteratively

The importance of allowing space for adaptation and learning is a common tenet of human-centered design and user-centered design (defined in Box 1). Most, if not all, visualizations of the design process depict a cycle starting with understanding user needs and continuing through the other design stages (fig. 1). Equally important, these visualizations typically show the iterative nature of the process by using symbols, such as arrows, to signify that the stages feed back into one another. When describing successful tools, many research participants described the process of designing the tool as iterative, participatory, and inclusive of trial and error.

Box 1. Human-Centered and User-Centered Design

Human-centered design (HCD) and user-centered design (UCD) processes both generally have, “phases throughout a design and development life-cycle all while focusing on gaining a deep understanding of who will be using the product” (Usability.gov, undated). Typically, four key phases are visited iteratively throughout an HCD or UCD process: (1) understanding target users and their context, including their needs; (2) defining user requirements, especially in relation to other types of requirements, such as technical and data requirements; (3) identifying design options and creating possible solutions; and (4) evaluating design options and possible solutions for best support for the users. More information on these key phases can be found at https://www.interaction-design.org/literature/topics/user-centered-design.

Examples

At the U.S. Geological Survey (USGS), decision-support tool (DST) developers are making strides in increasing the usability of their products. As discussed in this report, USGS DST projects address a wide range of scientific issues and decisions. Each project ideally would consider specifically how to improve its usability based on its own unique context. However, the UCD process is one that all USGS DST projects can use to integrate usability concepts and techniques into their project lifecycles.

For example, the USGS State of Our Nation’s Coast (SNC) project (https://www.usgs.gov/natural-hazards/coastal-marine-hazards-and-resources/science/state-our-nations-coast) decided to focus its DST development process on the first two phases of the UCD process (understanding target users and their context; defining user requirements). The SNC project aimed to improve the visibility of USGS coastal hazards science and prioritize science that meets stakeholder needs. In turn, the success of the SNC project’s tool depends on how well its target users can use the project’s science to support their coastal hazard mitigation activities. Although the SNC project has already been in existence for three years, much can still be learned about the project’s target users and their needs. By focusing on the first two phases of the UCD process, the SNC project is ensuring that user research and the resulting data are the foundation of its DST development decisions as the project moves forward with the other UCD phases.

Another example is the USGS Volcano Hazards Program (VHP) (https://www.usgs.gov/programs/VHP). The VHP has the goal of minimizing social and economic disruption from volcano hazards, and the program’s key mechanism for achieving this goal is providing map-based risk and hazard-communication products. For these products to meet the target users’ needs effectively, the target users must be able to understand the map-based information and apply the acquired knowledge about the risks associated with the hazards. However, unlike the SNC project, the VHP already has existing products, so new designs will need not only to improve the experience for new users but also to provide continuity for existing users. As a result, the VHP has focused on the latter two phases of the UCD process (evaluating the existing design; identifying potential design improvements for the next iterations). By involving target users in usability studies of the current map-based risk and hazard-communication products, the VHP has established a baseline of the existing user experience. Subsequently, the VHP can determine the features and content to improve, as well as those to maintain, in order to provide the desired usability for its target users as the program iterates over the other UCD phases.

Key Takeaways

HCD and UCD provide a framework to guide a team’s process as it iteratively works with stakeholders to design a DST. Although a team may move iteratively through the phases and return to different phases of the process multiple times, the focus on user needs in both of these frameworks reminds teams to place user needs at the center of DST design.

Allow Space for Adaptation and Learning

Allowing space for adaptation and learning ensures that the design team can stay flexible and open to changes regardless of when they occur in a project’s lifetime (Conboy, Fitzgerald, and Golden, 2005). However, any development process is inherently confined by resource allocation, such as funding and staff availability. To accommodate unforeseen changes, a project manager would ideally strike a balance between creating time allocations for completing each project outcome and keeping planning to a minimum (Barnhart and others, 2018; Meso and Jain, 2006). Ideally, managers would not try to control the development process too rigidly in order to be more responsive to changes (Cusumano and Yoffie, 1999). This part of Principle 1 is about staying flexible and embracing uncertainty in a proactive and organized manner.

Interviewees described how decision-support projects at USGS often focus on creating a tool the same way many scientists think about publishing a scientific paper—as a finite goal to be completed as quickly as possible. However, many of the larger, more successful tools at USGS were created and improved over the span of multiple years. While some interviewees revealed that this process could be frustrating early on when there is messy exploration and ambiguity, respondents also explained how allowing adequate space for the design and development process would eventually result in better tools. One interviewee stated:

Each time we have an improved capability, then we have something more to discuss. As we go along, we get a better idea what it is that users want, and so, we’re able to put more information, either prototypes or ideas and so forth, into the conversation. (Interviewee 9)

Although essential to improving the tool, the iterative process described in this quote is time and resource intensive. Creating the space and time necessary for trial and error is essential for this process to occur. This can be achieved by having the team leaders monitor the team’s progress at regularly scheduled review meetings where any issues or changes in requirements can be assessed and addressed by the next meeting (Grenning, 2001). When the development team is an adaptive organizational unit that is ready to tolerate the ambiguities of the design and development process, the team can allow sufficient time to discuss the assumptions, needs, and limitations of approaches (Barnhart and others, 2018). Allowing space for the design and development process to unfold requires buy-in from leadership and funders (Lavery, 2018).

Work Iteratively

Successful DST development requires iteration (Newman and others, 2000); that is, successive rounds of refining designs based on user feedback. When focusing on iteration with user groups and prototype testing, some USGS tool development teams have enlisted expertise in this area by working with usability experts within USGS or contracting organizations like 18F,9 a technology and design consultancy within the U.S. Government.

Respondents reported that without iteration, DST projects could miss the mark. One USGS software developer described the benefits of the Agile method of software design (a method described in Meso and Jain [2006] that includes constant iteration and stakeholder engagement throughout the design process). He stated what happens when iteration is not prioritized:

One thing I've learned in nine years of doing software development, the other way of doing it which is generally called the waterfall method where you just say, “Okay, build this thing for us. Here's all the specs, and we'll see you in six months.” That does not work. It never leads to good outcomes. (Interviewee 19)

As the interviewee explains, development processes that lack iteration rarely lead to good outcomes. It is important to note that iteration can occur during the development process as well as after the tool is created. Successful tools created by interviewees often resulted from long-term relationships between the tool’s creators and the tool’s users. Some interviewees reported attending meetings or workshops with the user group (sometimes meetings that were already regularly held for other purposes) in order to keep the DST relevant. For example, one interviewee stated:

Periodically, [we had] presentations and engagements with the [users] over the course of a year or year and a half to update them on new pieces or getting their feedback on that usability, getting their feedback on suggested functionality. (Interviewee 10)

This quote demonstrates how, over time, the development team worked iteratively with their users to improve the tool after it was created. Iterative prototyping and evolving development processes are key contributors to successful DSTs (Newman and others, 2000).

Obtaining user feedback throughout a tool’s lifetime is an ongoing operational task that improves a tool’s uptake and usability. User feedback can be obtained through both qualitative methods, like emails from users or media coverage, or quantitative methods, such as web analytics. One interviewee explained how user feedback helps him to modify his tool:

[I] understand what the users might want by paying attention to what sort of questions people [are] asking—when people do run into problems and contact me I [try] to see if there’s something my program could do that would have made it so that they wouldn’t have had to contact me… These user questions should not be viewed as a drag on the developer’s time. They should be viewed as a resource to be mined for ideas. (Interviewee 21)

This quote explains why user feedback is key to any iterative tool development process and why the pathways for this feedback should be as accessible as possible. User feedback can sometimes also reveal uses for a tool that the tool's designers did not originally intend.
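As a concrete illustration of the quantitative side of this feedback, the sketch below summarizes monthly page views from a web-server access log in R. The log file name and column layout are hypothetical assumptions for illustration; in practice, a team might instead rely on an analytics service or on whatever usage metrics its hosting environment already provides.

# Minimal sketch of one quantitative feedback method (web analytics), assuming
# a hypothetical comma-separated access log with "timestamp" and "page"
# columns; this is not an actual USGS logging format.
library(dplyr)

log_entries <- read.csv("tool_access_log.csv", stringsAsFactors = FALSE)

monthly_views <- log_entries %>%
  mutate(month = format(as.Date(timestamp), "%Y-%m")) %>%  # label each hit with its calendar month
  count(month, name = "page_views") %>%                    # tally page views per month
  arrange(month)

print(monthly_views)

A simple monthly tally like this can reveal whether use is growing, flat, or declining between releases, which complements the qualitative signals (emails, questions, media coverage) described above.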

 

Example Tool 1. The Chesapeake Bay Watershed Data Dashboard

  • Design time.—3 years

  • Team lead(s).—John Wolf (USGS) and Emily Trentacoste (EPA).

  • Principle represented.—Use an adaptive, iterative design process

  • Decision makers or end users.—Environmental managers and planners in the Chesapeake Bay

  • Decision opportunity.—Watershed restoration plan development and implementation

  • Decision-support need.—Information on nontidal monitoring, tracking interventions, and best management practices

Tool Description

The Chesapeake Bay Watershed Data Dashboard is an online tool that provides accessibility and visualization of data and technical information that can help guide water quality and watershed planning efforts. A large amount of scientific and technical information is available to environmental managers and planners at both State and local levels to inform restoration efforts. Much of this information has been updated or newly generated in recent years.

The Chesapeake Bay Watershed Data Dashboard is a collaborative project of the Chesapeake Bay Program Partnership, with input from the U.S. Environmental Protection Agency (EPA), the U.S. Geological Survey (USGS), and other science providers.

Quote

“We had the mindset of ‘if you build it, they will come’ [laughs], and so, occasionally, that worked if you happened to guess what was in their minds in advance. At this point, we’re much more inclined to interact with the proponent, the sponsor, if you will, as well as the people they are trying to reach, and trying to do that in advance of actual tool construction. We have certainly evolved, I think, in a good way.” — John Wolf, team lead (USGS)

Key Takeaway

Focusing on the process rather than the product allowed tool development to be driven by the needs of the target users rather than the assumptions of tool developers.

For more information, go to https://www.usgs.gov/centers/lower-mississippi-gulf-water-science-center/science/chesapeake-bay-watershed-data-dashboard?qt-science_center_objects=0.

Principle 2. Collaborate Across Disciplines and Organizations

The second principle is to collaborate and build connections across disciplines and organizations. This principle includes the following best practices:

  • Work with diverse teams

  • Bring diversity into the team early

  • Get everyone on the same page and speaking the same language

When interviewees described successful DST design processes, those processes often were the product of teams with diverse skillsets in which the entire team convened from the beginning and was able to overcome differences in communication or starting assumptions to get on the same page.

Work with Diverse Teams

Human-centered and user-centered design approaches put a great emphasis on the composition of the development team and their talents, skills, perspectives, and knowledge (Meso and Jain, 2006). Multi-disciplinary teams have been shown to increase the success of DST development (Fleisher and others, 2014). This success may be due in part to the integration of plural perspectives which can create a more holistic process (Vennix, 1995) or due to the way that more diverse team composition enhances creativity (Ulibarri and others, 2019).

USGS respondents who believed their DST(s) were successful all spoke highly of their team members, especially those who brought experience and knowledge to the table that the respondents themselves did not have. One interviewee added that in addition to a mix of roles in a development team, it is good to have a mix of personalities and perspectives: “You need a mix of roles and personalities in a group to make one of these [DSTs] effective * * *. Make sure you don’t have everybody who thinks just like you and is from just your little group on your team” (Interviewee 9).

This interviewee went on to explain that the type of person who excels at the operational side of maintaining the tool and someone who is primarily interested in the initial creation of the tool are very different personality types, and that the development team needs both for the tool to be successful. Multidisciplinary design teams benefit from members that span a range of roles and expertise. Some of the roles and job titles that research participants identified as being valuable included fundraiser, spokesperson, social scientist, usability specialist, graphic designer, writer, coder, and project manager. It is important to note that some of these roles and job titles are scarce in the USGS and comprise skillsets that conventionally trained USGS natural scientists do not necessarily have. For example, out of more than 8,000 USGS employees, only 11 permanent employees hold the title social scientist, although other people with social science training may work under other job titles.10 This lack of employees with specific expertise generally results in scientists backfilling these roles and shows that more training in those skillsets or hiring more employees who are trained in these skillsets could be beneficial for developing DSTs. As one interviewee explained (when referring to the hiring of usability experts), “There's people who are trained to do these skills. Appreciate them and realize the additional expertise that having someone who is trained and experienced can bring to the table as opposed to having a scientist [take on that task]” (Interviewee 18).

10

The DOI Position Locator provides statistics on the number of Federal employees working within DOI Bureaus under specific job titles and occupational series. The numbers reported here are based on accessing the DOI Position Locator on May 3, 2023, at https://careers.doi.gov/position-locator.

The mention of USGS scientists backfilling tool design roles outside of their expertise was common throughout the interviews. Having team members fill roles for which they lack expertise is a tradeoff that can reduce costs, and sometimes design time, but it can also decrease the quality of aspects of the DST that may be crucial to its success, such as graphic design or usability. However, human-centered design is inherently people-focused, and needed expertise can sometimes be found unexpectedly within teams. For example, some interviewees mentioned working with coders who also enjoyed graphic design. It is beneficial to recognize, use, and nurture the varied talents of all team members while seeking to develop a well-rounded team that can fill the roles required for successful DST development.

Bring Diversity into the Team Early

Managing diverse teams requires an open flow of communication that can benefit from early involvement of all team members. Katzenbach and Smith (1992) found that teams performed more effectively when all team members had a clear understanding of the team's goal from the beginning. This finding was supported by the interviews on successful DSTs, in which the entire design team was brought together early in the design process. The data revealed that when teams did not convene every part of the design team at the outset, some roles (developers, usability experts, and stakeholder engagement scientists, in particular) were brought in too late to be fully effective.

Every USGS coder or designer, usability expert, and social scientist who was interviewed preferred to be brought into a DST project early in the process. For example, a USGS employee explained that when the programming team joins a project in its early stages, they can help define what the tool could look like and how it could function:

I think it's really important to have the programmer team involved in the beginning and involved with the people that actually have to use it on both sides. They'd come with me to these meetings with collaborators too * * * I think that's critical, because if they don't hear what the problems are firsthand, they're relying on one person to translate them, and that's a hot mess. (Interviewee 8)

The benefits of involving the programmer early in the process are twofold: (1) they learn about user needs firsthand, and (2) they can inform the design team of what is and is not possible based on their specific expertise. The USGS programmers who were interviewed confirmed that being involved early in the design process was helpful to them. The same holds true for every member of the team essential to the tool's design. When team members are brought onto a project late in the game, it can be difficult for them to solve the problems they were brought on to address. A quote from a project manager explains how a tool would have benefitted from earlier involvement of a UX-UI expert, a term that refers to someone with expertise in user experience (UX) design and user interface (UI) design:

[The tool] existed prior to us having a usability person, but we brought her in… So, the new [tool] looks completely different than the old [tool], and that's because of her presence. I brought in a [usability expert] because I knew [the tool] was kind of clunky. It worked, but it was clunky. Could have been fine, most things at the [Geological] Survey looked fine, but they're not great. I would love to have a hundred UX-UI experts instead of us geologists or seismologists thinking we can not only be experts in this science topic, but also are experts in tool development. (Interviewee 3)

Get Everyone on the Same Page and Speaking the Same Language

When building a DST, having a multidisciplinary team that can bring multiple perspectives to the development process is valuable (Fleisher and others, 2014). However, working with a multidisciplinary team can be challenging because software developers, natural scientists, social scientists, and usability experts all speak slightly different languages. When users, stakeholders, and outside partners are added to this list, it takes considerable time and energy for everyone to get on the same page. Having a mutual understanding of the team's goals is essential for a development team to be effective because its members must be responsive, competent, and collaborative (Cockburn and Highsmith, 2001; Boehm, 2002). As one of the interviewees explained: “You end up with a better team if everybody understands all the parts of the team” (Interviewee 8).

The interviewee went on to describe how it took multiple interagency meetings for everyone involved in the tool's design to realize that they were talking past each other. For several days, the agencies discussed the importance of delivering the data in “real time” without realizing that they each defined that term differently. One agency defined “real time” as seconds, the USGS defined it as four hours, and another partner agency defined it as four to six hours. When everyone turned to the primary collaborator and intended user of the tool and asked for their definition, the answer was seven days. Once the group realized that the worry of not being able to deliver data within seconds or a few hours was not a key requirement, they could move forward and design a tool that worked for the primary organization's needs. Although every agency involved in these meetings approached real-time decision making differently, the goals proved achievable once they all got on the same page.

 

Example Tool 2. The Great Lakes Coastal Wetland Restoration Assessment

  • Design time.—6–12 months per mapper

  • Team lead(s).—Kurt Kowalski and Chris Sanocki

  • Principle represented.—Collaborate across disciplines and organizations

  • Decision makers or end users.—Ecological managers and planners

  • Decision opportunity.—Coastal wetland habitat restoration planning

  • Decision-support need.—Information on which coastal wetlands have the greatest potential for habitat restoration

Tool Description

The Great Lakes Coastal Wetland Restoration Assessment (GLCWRA) initiative is composed of multiple individual mappers that are described by a larger story map. The GLCWRA initiative uses principles of geodesign to identify coastal wetland areas that have the greatest potential for habitat restoration. The resulting composite index raster can be used by ecological managers and planners to assist with the identification and selection of wetlands for restoration initiatives. The GLCWRA team partnered with the New College of Florida and U.S. Geological Survey Wisconsin Informatics and Mapping group for their geographic information system (GIS) experience and hosting abilities and collaborated with stakeholders at the U.S. Fish and Wildlife Service (Wildlife Refuges), State agencies, Ducks Unlimited, the Nature Conservancy, and more.

Quote

“I value the partnerships and the ideas of other experts, so to me, it’s better to engage collaborators [and] others in USGS who have that expertise and build off their expertise rather than trying to duplicate it within our center.” — Kurt Kowalski, team lead

Key Takeaway

When the decision-support tool development team lacks expertise in an area, collaborating with people who have that expertise benefits the tool development and implementation process.

For more information, go to https://glcwra.wim.usgs.gov/ or https://www.sciencebase.gov/catalog/item/56d862bfe4b015c306f6bdad.

Principle 3. Engage with the Target Users of the Tool

The third principle is about defining the target users for the tool you are building and engaging with them throughout the tool design and development process. This practice can be considered a special case of the more general process of stakeholder engagement (Box 2). In the human-computer interaction discipline, there is an important distinction between the terms “stakeholder” and “user.” A stakeholder is defined as any group or individual who can affect or is affected by the achievement of the organization's objectives (Freeman, 2010). This means that while some stakeholders may not use a DST themselves, they may still benefit from its creation (Kelly, 2019). All users are stakeholders, but not all stakeholders are users. In the context of DST development, users are a special subset of stakeholders, and user engagement is key to ensuring that tools serve the decision processes they are built to support (Uran and Janssen, 2003).

This principle includes three best practices:

  • Define the target users of the tool

  • Understand the decision context

  • Adapt to evolving user needs and expectations

Define the Target Users of the Tool

Defining specifically who the target users are for a tool is one of the challenges of the DST development process (Loucks, 1995), and limited involvement of users in the development phase can lead to unsuccessful DSTs (Uran and Janssen, 2003). When users become part of the development process, scientific expertise and local knowledge are combined, maximizing the opportunities and benefits arising from the development of the tool (Oliver and others, 2017; Burnett, 2020), including mutual learning (NRC, 2009).

In the interviews, tools that had low usage were often developed without specific users in mind. Many of the interviewees explained that, within the USGS, unsuccessful DSTs are often built without considering who the target users of the tool will be. One remarked: “That’s my pet peeve of the [U.S. Geological] Survey, is that we do tend to build tools that don’t have a [specific intended] audience… We track it for three years of use and it’s at 200 hits in its lifetime” (Interviewee 15).

The sentiment that some USGS DSTs are built without target users in mind was expressed in many of the interviews. Oftentimes interviewees explained a process in which a group of scientists discovered an intriguing problem that they thought they could solve by building a DST, without thinking about who would actually use the DST or how they would use it. As one interviewee stated:

I think the [DSTs] that have user engagement have a lot more buy-in. Sometimes some of the things I've built it's much more of a ‘If you build it, they will come,’ mentality. * * * To be perfectly honest with you, sometimes these things do not get a lot of attention. They really don't. There might be 10 to 20 users who use it all the time, and that could be it, and then you get a few spikes here and there from time to time. Some of these probably hardly get seen at all after the initial year that they're a new thing, and then they might just fall by the wayside. (Interviewee 19)

Creating a tool in a vacuum, without identifying the target users of the tool, has led to the creation of DSTs that are not used over time and do not necessarily support the decisions they were built to support.

Understand the Decision Context

Once the target users of the DST are defined, one of the primary goals of user engagement is to understand the context of the decision(s) that the DST is built to support. Understanding the decision context means learning how target users accomplish specific tasks and make decisions, what they are trying to achieve, and what problems they are trying to solve. Gathering this information enables the DST development team to meet user expectations and needs. As an interviewee explained:

I try to outline who are the people using [the tool] before we just build it. * * * We write up these use cases where we get into what [the user’s] pain points are. What are they trying to do? What decisions are they making without us? What decisions could they do better if they had us around? (Interviewee 3)

Understanding the decision context is important because too often the interactions between scientists and users are motivated by the interests and goals of scientists and scientific agencies rather than the needs of their users (Lavery, 2018). This creates a disconnect and can overlook the obstacles faced by users, including diverse data formats and low data literacy (Sandoval-Almazán and others, 2017; Restrepo-Osorio and others, 2022; Stoltz and others, 2023). The context in which a user makes a decision defines a set of constraints that need to be analyzed and understood early in the tool development process because they are of crucial importance, especially when opinions within the project team differ (Wallach and Scholz, 2012). Structured user engagement is essential for increasing understanding of the complex social, economic, cultural, and political settings in which a decision is made (Röckmann and others, 2012; Carmona and others, 2013) because actual users' behavior tends to differ from the imagined behavior assumed by DST developers, especially regarding the importance of proposed features (Wallach and Scholz, 2012).

 

Box 2. Engaging with Stakeholders

Stakeholder engagement is a term applied to a variety of interactions that organizations (like the U.S. Geological Survey [USGS]) carry out with groups or individuals that have a stake in the activities of that organization. These interactions can range from public outreach (in which an organization communicates about an activity) to developing partnerships with stakeholders to undertake an activity together (Bamzai-Dodson and others, 2021). Fiorino (1990) notes that organizations may engage stakeholders for a variety of reasons, including the following:

  • normative reasons, such as the inherent good associated with greater participation in decision making in democratic societies

  • substantive reasons, such as bringing about better decisions through the increased depth and breadth of information that can be brought to bear on an issue when more perspectives are included

  • instrumental reasons, such as increasing the legitimacy of a decision in the eyes of stakeholders when they feel ownership in the process

Structured stakeholder engagement is differentiated from unstructured stakeholder engagement, that is, the ongoing relationships that a USGS scientist or employee maintains with stakeholders through informal interactions and other forms of public outreach. Structured stakeholder engagement requires rigorous social science methods and adherence to ethical standards like confidentiality.

Methods used in structured stakeholder engagement include the following:

  • surveys, a method of collecting data in a consistent way, such as through a questionnaire (Young, 2015)

  • semi-structured or open-ended interviews, which combine defined questions with unstructured exploration. This gives interviewees the opportunity to raise new issues that may not have been accounted for by the interviewer (Wilson, 2014)

  • focus groups, which obtain data from a purposely selected group of individuals (Nyumba and others, 2018)

Stakeholder engagement methods can uncover the decision context when developing decision-support tools (DSTs). Although usability testing may also fall under the umbrella of stakeholder engagement, it occurs later in the tool development process, after the decision context has been understood. Usability testing is described in more detail in Principle 4.

While structured stakeholder engagement methods can be cost- and time-intensive (Oliver and others, 2017), they are essential for ensuring that DSTs serve the need(s) they were built to support (Oliver and others, 2017; Burnett, 2020). All of the USGS interviewees spoke about the benefits that can be gained from both structured and unstructured stakeholder engagement. These methods included maintaining long-lasting relationships with individuals in partner organizations (unstructured engagement) and arranging focus groups with targeted tool users (structured engagement). Several interviewees described tools that only achieved their goal after involving a collaborator with social science expertise:

It’s not even possible to evaluate how helpful it was to have [a social scientist] be part of the group. It would’ve been a completely different experience without her, and it’s been very successful as a product because of her involvement in shaping it. (Interviewee 9)

Interviewees also emphasized the difference between stakeholder engagement focused on learning about user needs and outreach that takes place after a tool is built. Although outreach after a tool is made public is important, that tool may not meet the users' needs if the developers do not work to identify and understand those needs early in the development process.

One interviewee explained why it is important to ask potential tool users about their needs before asking what kind of product they want:

If you take the time to do that, sometimes you'll go in a different direction, and that's great * * *. There's questions you can start to ask at the beginning that I think—and we're starting to do that now—could prevent some of the failings that we've seen [for] the future. (Interviewee 18)

This quote demonstrates how learning about the decision context of target users is essential to creating a usable tool. The kinds of questions that might be asked of target users include the following:

  • What is a recent task that you need to accomplish in your role?

  • What decisions are associated with achieving that task?

  • How does this task support your organization’s overall mission?

  • How do you relate to others (either internal or external to your organization) in completing this task?

  • What challenges do you face in completing this task?

Questions like these can guide the development team towards clearly defining the political, environmental, economic, social, and legal context of the decision, essential information that will allow the tool to support quality decision making (Wong-Parodi and others, 2020).

Adapt to Evolving User Needs and Expectations

Engaging with users is a continuing process that ideally results in sustained relationships with users that begin before DST development and persist after a DST is launched. These relationships are important to maintain as the users' needs and decision contexts evolve over time. User engagement needs to be strategic and sustained to be successful; Pearman and Cravens (2022) found strong links between user engagement throughout the development process and users' perceptions of tools as salient and credible.

Interviewees who described successful tools often described efforts to continue learning from and sharing information with users after the DST was created. These efforts included emailing users who participated in user engagement efforts to let them know the outcome of their involvement, sharing links to the finished tools, sending out newsletters with updates, providing trainings or workshops on how to use the tool(s), and creating new or using existing pathways for user feedback. One interviewee explained their process for maintaining relationships with users:

I've spent some time just reaching out to these partners in the off season when we're not that busy and asking them what they like about the tool, what limitations they see, and sort of their wish list * * * so it's ongoing—we provide the data, and then we're there to support and answer questions as they interpret the data. (Interviewee 12)

When asked to reflect on what they would have done differently in hindsight, some interviewees spoke about the importance of timing in managing relationships with users. If user engagement occurs years before a tool is made available, the users' needs may have evolved, other tools that meet those needs may have become available, or interest in the tool may have waned. Maintaining continuity of participation can also be an issue; after too much time, user interest in tool development may decrease (Newman and others, 2000). One interviewee explained how it can be challenging to get users back on board when they lose interest in a tool before it is complete:

Management didn't push the developers to get things done, so we would promise a new functionality and it wouldn't come online for like a year and a half, and then people would be like, “Well, you guys are never going to get this done.” They'd lose faith in it. We didn't promote it. We didn't promote contributing to it. We didn't offer services to help people put data into it * * *. It just failed on all levels. (Interviewee 18)

This quote explains how user engagement at the beginning of the development process is not enough to guarantee that a user base will be there once a tool is built. Additionally, some tools are built by the USGS because developers anticipate a need that target users have yet to express. This assumption can be problematic for tool development. Users rarely have the time or resources to use new and unfamiliar tools (Barnhart and others, 2018), and tool users report that there can be significant opportunity costs to switching from a familiar tool to a new and untested one (Cravens, 2018), which makes the bar for adopting new tools high. As one interviewee explained:

It takes a long time for the community to mature to be ready to handle the data, too. I think that's a key point. Sometimes we look at the science and we go, “I know you need this,” but you [don’t] wait long enough for your customer to mature, to be ready to be able to handle that decision. (Interviewee 8)

When developers do not bring their users along through the tool creation process, the users are not invested in the tool being created and do not have the chance to express preferences about how the tool is developed. Involving tool users throughout the development process can increase uptake and use.

 

Example Tool 3. USGS Operational Aftershock Forecasts

  • Design time.—2 years

  • Team lead(s).—Mike Blanpied, Jeanne Hardebeck, Andrew Michael, Sara McBride, Ned Field, Kevin Milner, Michael Barrall, and Eric Martinez

  • Principle represented.—Engage with the Target Users of the Tool

  • Decision makers or end users.—Federal Emergency Management Agency (FEMA) and the public

  • Decision opportunity.—Increasing the public’s understanding of aftershocks

  • Decision-support need.—Probabilistic models that provided accessible information on the frequency and size of earthquake aftershocks.

Tool Description

Using a probabilistic model, the U.S. Geological Survey (USGS) Operational Aftershock Forecast provides a general understanding of the frequency and size of aftershocks that could occur given the mainshock magnitude. Communicating these probabilistic models presented many challenges, such as conveying statistics to a diverse audience with varying levels of knowledge. To address these challenges, the aftershock product presents the simplest information first and then expands in complexity as the user continues to browse. In contrast to previous products that were inaccurately reported in the media, this tool resulted in accurate media reports, demonstrating that the forecast template was successful (Michael and others, 2020).

Quote

“We would present the kinds of information we could provide in these sorts of formats, in these sorts of timeframes and so forth, and then various user groups’ representatives [would] talk to us about what would be useful, how they would use that, what they would like to have, what kind of information and so forth. We’ve also had a fair number of more targeted meetings with, for example, different FEMA representatives. We’ve talked to urban search-and-rescue groups and so forth, so really talking straight to the users * * *. If you happen to have a good, a really outreach-focused person in the project group, it’s great, and then you can build and maintain those user relationships and engage users in product development and improvement and so forth over time.” — Mike Blanpied, team lead

Key Takeaway

A communication-focused social scientist conducted stakeholder engagement that increased the tool’s successful uptake and the public’s understanding of aftershocks.

For more information, go to https://earthquake.usgs.gov/data/oaf/overview.php.

Principle 4. Develop an Empirical Understanding of Use and Usability

Developing an empirical understanding of a tool's use and usability can help sustain the tool over its life cycle and ensure that it meets user needs. This principle includes the following best practices:

  • Define metrics of success

  • Analyze tool use trends

  • Understand user experiences

Define Metrics of Success

Identifying success metrics at the outset of tool development can help tool developers avoid trying to fulfill too many requirements with a single tool. The scope of an individual DST development effort should be kept manageable (Newman and others, 2000). Stakeholders, technical experts, and scientists need to develop a shared understanding of the decision being supported before creating the DST, and explicit metrics of success should be identified and refined throughout the process (Barnhart and others, 2018).

Evaluating the success of a DST was an aspect of tool deployment that challenged USGS interviewees. Here is how one interviewee explained the challenge:

Measuring the success is tricky. If we get good feedback, that’s good, or if we get user feedback, that’s good. It’s difficult to find concrete examples of a decision that was made that was aided by a particular tool. We’re always hungry for those examples, and when we find them, we treasure them. (Interviewee 9)

The challenge described by this interviewee is well documented in the wider literature on decision support and arises from three main sources. First, documenting the causal link between someone’s use of a DST and improvements in their decision-making processes is challenging (Cravens, 2018). Scholars of knowledge use distinguish between conceptual information use (adding to a decision maker’s general knowledge base) and instrumental information use (providing information that feeds into a specific decision) (NRC, 2012). Most DST developers associate instrumental use with success, but studies of how people make decisions indicate that a high percentage of the information that informs a given decision is conceptual rather than instrumental (Amara and others, 2004; van der Molen and others, 2018). Second, despite repeated calls for greater attention to and resources for evaluation, support for DST evaluation remains relatively rare (Moser, 2009; Cravens, 2016; Wong-Parodi and others, 2020). Finally, measuring success requires defining concrete metrics that correspond to developer and user objectives and then collecting empirical data to assess whether and how those objectives are being met (VanderMolen and others, 2019).

Metrics for DST evaluation may be broadly divided into two categories: (1) the type and amount of use and (2) how individuals interact with a tool. When interviewees were asked to describe what makes a tool successful, they often said that a tool is successful if people use it. But when asked whether they had usage expectations for the tools they had built in the past, interviewees indicated that they often did not. In general, we found that USGS employees were not defining upfront what they wanted their tools to accomplish or identifying metrics that would tell them whether those objectives had been achieved. One interviewee expressed why defining success metrics is important:

You can't just measure on one [success] metric alone. I think that's also something that should be defined in the beginning. If our metrics were set up that way * * * we could then say at the end whether or not it was successful. That should be set up the same time you set up how you're gonna pay for it. (Interviewee 8)

Evaluating a tool’s performance is difficult without explicit metrics of success.
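Although the interviews did not describe a specific format for recording such metrics, one low-effort approach is to write them down as structured data at the start of a project so the team can report against them after launch. The sketch below is a minimal illustration of that idea in Python; the metric names and target values are hypothetical placeholders, not findings from this study.

```python
# Minimal sketch: record intended success metrics for a DST at project start so
# the team can revisit them after launch. Names and targets are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuccessMetric:
    name: str
    target: float
    observed: Optional[float] = None  # filled in once post-launch data exist

    def is_met(self) -> Optional[bool]:
        # Returns None until an observation has been recorded.
        return None if self.observed is None else self.observed >= self.target

metrics = [
    SuccessMetric("monthly active users", target=200),
    SuccessMetric("documented decisions supported per year", target=4),
]

for m in metrics:
    print(f"{m.name}: target {m.target}, met: {m.is_met()}")
```

Even a simple record like this gives a team something concrete to revisit when deciding whether a tool has achieved its objectives.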

Analyze Tool Use Trends

One category of success metrics comprises measurements of tool use across users. This category includes aggregated measures of the number of users across time or space and automated counts of tool use collected through web analytics. Some interviewees also spoke about assessing success in terms of referrals, mentioning that some collaborators who worked with their design team thought the resulting DST was successful enough to continue making DSTs with them and to refer other groups to their design services.

Interviewees reported that they primarily monitor usage using Google Analytics, a web analytics service offered by Google that tracks and reports website traffic. Web analytics can be configured to track both simple counts (for example, the number of total webpage views) and actions that correspond to particular user behaviors that are important to the experience of using a given application (for example, submitting a form or clicking a button). Highly sophisticated automated tracking of user behavior is common in the corporate sector, but such uses of web analytics remain rare in scientific and government applications due to a combination of privacy regulations, data and software regulations, and likely concerns about public backlash (Cravens, 2014; Pearman and Cravens, 2022). Importantly, the ability of web analytics to measure particular behaviors is intimately related to the design and deployment of the application, which means that defining metrics before an application is built helps ensure the software is designed in such a way that measuring the behaviors of interest is indeed possible (Cravens, 2014).
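To make the distinction between simple counts and behavior-specific actions concrete, the following sketch summarizes a hypothetical analytics export. The file name and column names (page_path, event_name) are assumptions chosen for illustration; a real export from a service such as Google Analytics has a different structure and would need to be mapped to something like this.

```python
# Minimal sketch: separate plain page views from behavior-specific events
# (for example, form submissions) in a hypothetical web-analytics export.
# The file name and column names are illustrative assumptions only.
import csv
from collections import Counter

def summarize_usage(path: str = "analytics_export.csv") -> dict:
    views = Counter()   # plain page views, keyed by page path
    events = Counter()  # behavior-specific actions, keyed by event name
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("event_name"):
                events[row["event_name"]] += 1   # e.g., "submit_form"
            else:
                views[row["page_path"]] += 1
    return {
        "total_page_views": sum(views.values()),
        "views_by_page": dict(views),
        "events_of_interest": dict(events),
    }

if __name__ == "__main__":
    print(summarize_usage())
```

A summary like this makes it possible to compare how often a tool is merely opened with how often users complete the specific actions the tool was built to support.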

One interviewee described the nuances of what quantitative analytic metrics can and cannot provide and how analytics relate to tool architecture:

From a quantitative standpoint, if you've got a web application, if you're getting what I guess you perceive is a good amount of traffic * * * those are numbers you can look at and understand * * *. You can just arbitrarily throw things out, but [you might not always know whether] 1,500 hits a day is really good. It's all relative, right? You might have an application that could be really important, but it's got a smaller subset of audience, a niche area. You're not going to get the general public going to it, but you're seeing consistent usage * * * I guess you hope to see it continue to increase. You'll naturally look at analytics compared to other, for us, applications [in our Mission Area]. There's a lot of nuances there when you look at those numbers. For instance, with a mapping application, the way analytics are tracked on a web map versus a normal web page... are quite a bit different. Someone could spend all day on the [web map], and it's going to show up as one event. If they're clicking around all kinds of different pages, each of those are going to be a separate hit. We're trying to refine the level of detail we collect. (Interviewee 16)

Notably, another interviewee reported that they erroneously believed USGS applications were not allowed to use Google Analytics and described this as a significant (perceived) barrier:

We can't track people via the web * * *. We used to have Google Analytics [for our tool], but, now, all of our traffic goes through Reston or Denver I guess. [Reports] look like everybody's from Reston or Denver. Apparently, there are other mechanisms available, but they're not made available to [my team]. (Interviewee 7).

This comment illustrates the diversity of experiences with web analytics across regions and Mission Areas at USGS and, although only reflecting the experience of a single interviewee, may highlight a larger issue with employee understanding of resources available for monitoring tool use.

Understand User Experiences

Another broad category of DST evaluation metrics assesses the experience of users to understand the ways and extent to which a tool supports decision making. This category of metrics is usually assessed using a well-known evaluation technique from the usability discipline called usability testing. Many of the successful tools at USGS involved elements of usability testing, for example, observing target users using early prototypes to carry out actual work. Usability testing enables tool designers to know how well a tool works for the tool’s intended users and, as necessary, make modifications in response to findings (Loucks, 1995).

In this report, we have focused on the usability testing technique because it allows a DST team to understand how a design meets the users’ needs by directly involving the target users. For example, during a usability testing session, the users might be asked to think aloud while performing representative tasks with a DST. In turn, by staying as non-intrusive as possible, the DST team can gain insights into the users’ experiences by listening to the users and observing their interactions with the tools. Jakob Nielsen (1994) defined five key quality components that a design team might learn about through usability testing:

  • Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?

  • Efficiency: Once users have learned the design, how quickly can they perform tasks?

  • Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?

  • Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?

  • Satisfaction: How pleasant is it to use the design?

As one tool designer explained: “Early on, I would have a fairly strong vision about what the tool should look like, and then I would realize that the way [the users] see the tool and the way they want to use [the] tool is not necessarily the way I have developed the tool” (Interviewee 7).

This quote shows that one cannot assume how users will use a tool, and that this information can only be obtained using empirical data collection methods such as usability testing. Many interviewees expressed how usability testing and sharing early prototypes with users revealed gaps and functionality issues that would not have been discovered otherwise. This process draws on aspects of the first principle (use an adaptive, iterative design process) because it requires space for trial and error. When asked what they would have done differently in hindsight, several interviewees indicated that they would have done more usability testing earlier in the tool design process. Interviewees who had access to usability experts within USGS or were able to contract usability experts outside the Bureau reported that the added expertise in usability testing was key to their tool's success. Importantly, we note that usability testing is a formal discipline with trained professionals who caution against the dangers of “guerrilla” usability testing, that is, casual and nonrigorous application of their methods. However, rigorous usability testing study design does not require large sample sizes; testing with as few as 5 to 10 users can often allow a team to identify most issues (Oakley and Daudert, 2016).
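One way to see why such small samples can be effective, offered here as an illustration rather than a method drawn from the interviews, is the widely used rule of thumb that the share of usability problems uncovered grows as 1 - (1 - p)^n, where p is the assumed probability that a single participant encounters a given problem and n is the number of participants. The short sketch below evaluates that expression for a few assumed values of p; the values themselves are illustrative, not measurements.

```python
# Minimal sketch: expected share of usability problems found after testing with
# n participants, using the rule-of-thumb model 1 - (1 - p)**n. The values of p
# below are illustrative assumptions, not measurements from this study.
def share_of_problems_found(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    for p in (0.15, 0.30, 0.45):
        results = ", ".join(
            f"n={n}: {share_of_problems_found(p, n):.0%}" for n in (5, 10)
        )
        print(f"p={p:.2f} -> {results}")
```

Under these assumptions, 5 to 10 participants are often enough to surface the majority of commonly encountered problems, which is consistent with the small-sample guidance cited above.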

 

Example Tool 4. The Operational Total Water Level and Coastal Change Viewer

  • Design time.—10 months

  • Team lead(s).—Kara Doran, Richard Snell, Meg Palmsten, Li Erikson, and Alex Nereson

  • Principle represented.—Develop an empirical understanding of use and usability

  • Decision makers or end users.—Regional Weather Service offices

  • Decision opportunity.—Responding to coastal hazards

  • Decision-support need.—Forecasts of wave-induced water levels

Tool Description

The U.S. Geological Survey (USGS) National Assessment of Coastal Change Hazards team works with the National Weather Service and the National Centers for Environmental Prediction to combine wave predictions from the Nearshore Wave Prediction System with USGS-derived beach morphology to provide detailed forecasts of wave-induced water levels. The viewer includes predictions of the timing and magnitude of water levels at the shoreline and potential impacts to coastal dunes.

Quote

“On the Total Water Level Viewer, we have a button to submit a user story. Sometimes people submit a story of how they’re using it or give us a suggestion for how to improve or something. I like that, too. It feels successful to me when we get feedback. It feels successful to me when it’s easy to train someone how to add data and update things * * *. I have a monthly call with the weather forecasters nationwide that use the total water level forecast and just the wave model in general from [the National Oceanic and Atmosphere Administration]. Those partnerships have been really great.” — Kara Doran, team lead

Key Takeaway

Decision-support tools should be easy to use and include methods to obtain user feedback so that future tool improvements can be made.

For more information, go to https://www.usgs.gov/centers/spcmsc/science/operational-total-water-level-and-coastal-change-forecasts.

Principle 5. Plan for the Tool’s Full Life Span

Last but certainly not least, the fifth principle is the need to understand and plan for a tool's full life span. This principle includes the following best practices:

  • Determine the full life span of the tool

  • Invest in maintenance (for example, money and staff time)

According to the interviewees, sometimes finished tools are too complicated or not durable because of software and maintenance limitations. Despite fulfilling a need, these tools end up not being maintained because of a lack of resources. This can occur when a tool is designed without plans for long-term support and maintenance. Within the USGS, there are efforts to increase data stewardship, such as requirements for USGS projects to develop Data Management Plans and guidance for scientists to include support and maintenance considerations when developing DST projects in the Water Mission Area (Herman-Mercer and others, written commun., 2020). Developers of DSTs in the USGS should also comply with USGS technology policy by coordinating with the Chief Information Officer's (CIO's) office11 during planning.

Determine the Full Life Span of the Tool

The success of a DST can be constrained by resource limitations; thus, understanding what resources will be needed before embarking on tool development is essential (Dale and English, 1999). DSTs have a life cycle and require maintenance, funding, and staff time to keep them usable during their life span; accordingly, they should be thought of differently from a traditional research project, which may have a distinct beginning and end.

DSTs are developed throughout the USGS to support a variety of decisions, some of which may change or evolve over time, while other decisions may be more stable. Some tools are developed to meet a specific cooperator's needs with the intention of turning tool maintenance over to the cooperator at a certain point, while others may be maintained by USGS personnel over many years. These maintenance scenarios warrant consideration ahead of time to ensure that the appropriate level of support and maintenance is provided.

One interviewee explained why they sometimes will not begin a project if there is not enough maintenance support:

We also have a big consideration of ongoing support, and we’ve been supporting this tool for eight years, nine years? So, you have to consider how much your team can do for the group, for the people who are asking for these tools, and I consider that a lot when we build things. Your tools are like children. Once they come out, you’re stuck with it for the next, you know, 18 years, so you can’t just toss it over the fence and think you’re done. You have to provide support to whoever’s using it, make updates as needed. So, I try to space that out for our team as well, “Are we going in over our heads? Are we taking on projects that are too big?” And sometimes [when] the researcher [asks] for something, I say “That’s too big for our group. We can’t build something like that, we can’t maintain it.” (Interviewee 22)

This quote describes why a tool's life span is best defined before tool development takes place. Supporting a tool, whether scientifically, technically, or operationally, requires funding and ongoing resources. These resources are easier to acquire when the support requirements and request schedule are included in the planning process rather than added retroactively. Beyond data management, technology changes rapidly and software vulnerabilities emerge. Planning for information technology (IT) life cycle maintenance and support is critical to sustaining a DST.

Invest in Maintenance

Some interviewees had worked on tools that fulfilled a need but were nonetheless not maintained because the team lacked the capacity to provide that maintenance. The lack of maintenance can be due to a lack of funding to support IT staff and a dearth of programming expertise. This aspect of planning for the tool's life span focuses on the importance of acknowledging and planning for technical debt. Technical debt is a concept in software development that reflects the implied cost of reworking a tool caused by changes in technology (Tom and others, 2013). Security vulnerabilities in underlying software can also force updates. Although technical debt cannot always be avoided, it can be anticipated by staying proactive and diligent about future tool maintenance requirements. During one of the interviews, a USGS developer explained the concept of technical debt:

Just like when your phone gets updates or your computer gets updates because of security stuff, it’s the same way with the software packages that we build on. That’s actually become an issue recently where an old software package is basically saying, “This is a security threat,” or “It’s a security problem.” To replace it would mean breaking some stuff and then now all of a sudden you have to spend time to make that update, and the funding hasn’t been there in years. Then you have to make a hard choice. “Well, is this thing going to come down?” Then the scientist says, “Wait, why are you taking my application down?” It's, “Well, because they don't have a permanent life span with one-time funding,” is basically the answer. (Interviewee 19)
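One concrete way to anticipate the kind of dependency-driven debt this developer describes, offered as an illustration rather than a practice reported by interviewees, is to keep a simple inventory of the software packages a deployed tool depends on and review it on a set schedule so that security-driven updates are planned rather than discovered under pressure. The sketch below uses Python's standard library; the output file name is a hypothetical placeholder.

```python
# Minimal sketch: write an inventory of installed Python packages and versions
# for a deployed tool. Reviewing this list on a regular schedule (for example,
# quarterly) helps a team plan updates before they become urgent security work.
# The output file name is an illustrative placeholder.
import csv
from importlib.metadata import distributions

def write_dependency_inventory(path: str = "dependency_inventory.csv") -> None:
    rows = sorted(
        {(dist.metadata["Name"], dist.version)
         for dist in distributions()
         if dist.metadata["Name"]}
    )
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["package", "installed_version"])
        writer.writerows(rows)

if __name__ == "__main__":
    write_dependency_inventory()
```

Pairing an inventory like this with periodic checks of the relevant security advisories gives a team an early signal that technical debt is accruing.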

Many interviewees spoke about technical debt and changing technologies, and the need for time and staff to keep a tool running. One interviewee who lacks in-depth programming experience explained their team’s way of overcoming this technological barrier:

We have readily embraced off-the-shelf software that’s more configuring applications as oppos[ed] to coding from scratch. When you’re doing your own coding, it’s not just building something. It’s maintaining it as well, because there’s nobody else that’s going to come in and tell you how to fix your own product. It has been advantageous to leverage off-the-shelf software that’s more configuration than programming. (Interviewee 10)

While this quote demonstrates one way of skirting the barrier of technical debt, other interviewees spoke about contracting for programming expertise or obtaining training in software development. These solutions require support and resources, and interviewees had many suggestions for resources that would help them in the tool design process. Furthermore, it is important that DST team members work with their supervisors or their respective Regional or Mission Area administrative leadership and the USGS CIO to ensure their DST uses the correct IT capabilities and addresses Federal requirements. Coordinating early in the planning process with those in the USGS who set IT policy can avoid issues in the long run.

 

Example Tool 5. The Strategic Hazard Identification and Risk Assessment Project (SHIRA)

  • Development time.—3 years

  • Team lead(s).—Alice Pennaz and Nate Wood

  • Principle represented.—Plan for the tool’s full life span

  • Decision makers or end users.—The Department of the Interior’s Office of Emergency Management

  • Decision opportunity.—Hazard prioritization for a baseline operation plan

  • Decision-support need.—Data, tools, and training on how hazards could negatively affect assets across the Department of the Interior

Tool Description

The SHIRA project provides data, tools, and training exclusively for Department of the Interior (DOI) personnel to improve planning for realistic threats to DOI assets, resources, and people. The DOI is responsible for a diverse set of assets that include personnel; visitors to public lands; facilities; infrastructure; historic sites; and natural, cultural, and economic resources. Many natural and human-caused hazards could negatively impact any one of these assets. SHIRA tools are being developed to help DOI managers and senior leadership plan to prevent, protect against, mitigate, respond to, and recover from various hazards.

Quote

“From the very beginning of this project, we said, ‘If you’re going to invest all of this money and in all of these tools, you need to think about them in a long-term—who’s going to be maintaining them? Who’s going to be updating the data that are in them, et cetera?’ Because they will become obsolete within a year, really, if that doesn’t happen. The Office of Emergency Management heard us, and a request was made to the DOI working capital fund to maintain the tools and data over time.” — Alice Pennaz, team lead

Key Takeaway

Planning for a tool’s full life span can engender the resources needed to support successful tools over time.

For more information, go to https://www.doi.gov/emergency/shira.

What Resources Do Interviewees Say They Need?

Throughout the interviews, participants discussed resources that could have made the process of DST development easier or helped their projects be more successful. Many of these comments were drawn from the final question asked in each interview: “Are there any resources that you wish you had that would have helped during tool design and development?” The three categories of responses to this question mentioned most often were institutional support, funding, and staffing.

Institutional support includes comments about USGS leadership demonstrating a broader commitment to developing effective tools that help USGS stakeholders make sound decisions. It also includes comments about support from supervisors, Programs, Mission Areas, and others in USGS leadership that allows project teams to meet the objectives of specific DST development efforts. For example, one interviewee described how a new Associate Director became a champion for their project: “He saw what we had * * * and he was really excited about the potential. [He] was the catalyst for pushing this into a much larger project.” (Interviewee 23). As this quote demonstrates, support and advocacy from leadership are often necessary to obtain the investments needed to develop a successful DST or to scale a promising pilot into a larger-scale effort.

The second resource need discussed by interviewees was funding, including changes to funding models. One interviewee highlighted that DSTs represent “an investment,” pointing out that, “if it’s really worth doing and it’s going to have a big impact, then it’s going to have a big cost.” (Interviewee 9). The topic of ongoing financial support, particularly for tool maintenance, often came up in interviews. One interviewee summarized the challenges of ongoing maintenance for tools that can “cost $20,000 a year to run”:

It’s easy to build a tool. It’s really hard to keep it going * * * You’ve got Cloud costs. You’ve got maintenance costs…patches…upgrades. It’s not like publishing a report and then it just sits in Publications Warehouse, and you can check it out or read the PDF. A lot of people have still not wrapped their head around it. (Interviewee 15).

Interviewees also mentioned the challenge of meeting the needs of DST development within the constraints of year-to-year appropriated funding and not having access to working capital funds for tool operating costs like hosting.

Without adequate funding and institutional support, interviewees reported, it can be difficult to ensure appropriate staffing within a project team. Research participants reported that they variously lacked access to programmers, usability experts, social scientists, graphic designers, and project managers who could commit time to operational tasks. For example, one interviewee pointed out that USGS hiring processes are “not well structured to handle the idea that usability is a thing.” (Interviewee 3). Interviewees similarly described their need for more software expertise and explained that skills in Python, JavaScript, and integrating statistical capabilities within a geographic information system (GIS) are in particularly short supply. Many participants mentioned challenges with Federal hiring in general, especially the time between identifying the need for a new position and filling that position. Beyond hiring, interviewees commented on the need to develop skills within the current USGS workforce related to coding, usability, and stakeholder engagement. While some interviewees identified a gradual and welcome evolution currently taking place in the Bureau’s attitude toward these skillsets, others spoke of barriers to increasing their ubiquity, including limited awareness among other scientists and leadership of their importance and performance criteria that do not necessarily value efforts to improve or measure the usability of tools.

Although not explicitly mentioned as resource needs, the way that participants framed the barriers they face suggests a need for greater institutional guidance and potentially a common institutional framework for DST development across the USGS. For example, confusion about USGS policy regarding the use of Google Analytics to track tool traffic indicates that at least some developers are not receiving the information and tools they need to be successful and, in some cases, may be receiving inaccurate information. The PRA was explicitly identified as a barrier to understanding user needs by three interviewees. One interviewee communicated accurate information related to the PRA but seemed unaware of ways to collect information useful to their DST that do not require approval under the PRA. This interviewee also seemed to lack the support needed to complete the steps of the PRA approval process for the information they sought. A broader framework outlining best practices (some of which have been outlined throughout this report) and resources available to USGS scientists and developers, including funding and staffing models, could improve overall DST development in the USGS.

Conclusion

U.S. Geological Survey (USGS) scientists and technology professionals have created a variety of successful decision-support tools (DSTs) that fulfill a range of user needs to enable scientifically informed decision making across USGS Mission Areas. This report distilled experiences from past DST projects to present general principles that contributed to tool success and to provide best practices for USGS employees interested in building a DST. Of these best practices, seven warrant particular consideration by USGS employees before building a tool:

  • Clearly define the target users of the tool

  • Decide how input from users will be acquired

  • Understand the context of the decision(s) the tool will be built to support

  • Determine how tool success will be measured

  • Plan for the life span of the tool

  • Establish how the tool will be maintained and who will maintain it

  • Calculate how much the tool will cost over time to maintain

Our study also compiled information about barriers to tool development. Knowledge gaps revealed by interviewees included information about resources available to help with DST development, knowledge of which colleagues have experience building similar tools, and ways to connect with USGS employees who can answer questions about DST development.

We focused on DST creation across Mission Areas and Regions to distill common experiences. However, USGS DSTs vary widely in format and in the needs that they fulfill, so the needs and resources covered in this report represent guiding principles and a starting point rather than an exhaustive list. We hope this research can provide the foundation to begin identifying aspects of DST design and development that are unique to individual Mission Areas or Regions.

The USGS has provided trusted science to the public for decades, and DSTs are an important way that the Bureau gets targeted scientific information into the hands of those who need it for real-world decisions. There are many successful examples of DSTs built by USGS scientists that have resulted in long-lasting relationships with users and enabled high-quality decision making, a small subset of which are highlighted in this report. DSTs help ensure that USGS science is actionable, but their effectiveness depends on the specifics of how tools are designed and developed. The principles presented in this report and the lessons learned from the experiences of employees across the USGS improve the chances that every DST the USGS chooses to invest in will be effective and used.

References Cited

Amara, N., Ouimet, M., and Landry, R.É., 2004, New Evidence on Instrumental, Conceptual, and Symbolic Utilization of University Research in Government Agencies: Science Communication, v. 26, no. 1, p. 75–106, accessed January 5, 2021, at https://doi.org/10.1177/1075547004267491.

Arnott, J.C., Mach, K.J., and Wong-Parodi, G., 2020, Editorial overview—The science of actionable knowledge: Current Opinion in Environmental Sustainability, v. 42, p. A1–A5, accessed January 5, 2021, at https://doi.org/10.1016/j.cosust.2020.03.007.

Bamzai-Dodson, A., Cravens, A.E., Wade, A., and McPherson, R.A., 2021, Engaging with stakeholders to produce actionable science—A framework and guidance: Weather, Climate, and Society, accessed May 4, 2023, at https://doi.org/10.1175/WCAS-D-21-0046.1.

Barnhart, B.L., Golden, H.E., Kasprzyk, J.R., Pauer, J.J., Jones, C.E., Sawicz, K.A., Hoghooghi, N., Simon, M., McKane, R.B., Mayer, P.M., Piscopo, A.N., Ficklin, D.L., Halama, J.J., Pettus, P.B., and Rashleigh, B., 2018, Embedding co-production and addressing uncertainty in watershed modeling decision-support tools—Successes and challenges: Environmental Modelling & Software, v. 109, p. 368–379, accessed January 6, 2021, at https://doi.org/10.1016/j.envsoft.2018.08.025.

Boehm, B., 2002, Get ready for agile methods, with care: Computer, v. 35, no. 1, p. 64–69, accessed January 5, 2021, at https://doi.org/10.1109/2.976920.

Burnett, C.M., 2020, Incorporating the participatory process in the design of geospatial support tools—Lessons learned from SeaSketch: Environmental Modelling & Software, v. 127, p. 104678, accessed January 7, 2021, at https://doi.org/10.1016/j.envsoft.2020.104678.

Carmona, G., Varela-Ortega, C., and Bromley, J., 2013, Supporting decision making under uncertainty—Development of a participatory integrated model for water management in the middle Guadiana river basin: Environmental Modelling & Software, v. 50, p. 144–157, accessed January 5, 2021, at https://doi.org/10.1016/j.envsoft.2013.09.007.

Cash, D.W., Adger, W.N., Berkes, F., Garden, P., Lebel, L., Olsson, P., Pritchard, L., and Young, O., 2006, Scale and Cross-Scale Dynamics—Governance and Information in a Multilevel World: Ecology and Society, v. 11, no. 2, at https://doi.org/10.5751/ES-01759-110208.

Chang, W., Cheng, J., Allaire, J.J., Xie, Y., and McPherson, J., 2016, Web application framework for R.

Cockburn, A., and Highsmith, J., 2001, Agile software development, the people factor: Computer, v. 34, no. 11, p. 131–133, accessed January 5, 2021, at https://doi.org/10.1109/2.963450.

Conboy, K., Fitzgerald, B., and Golden, W., 2005, Agility in information systems development—A three-tiered framework, in Business Agility and Information Technology Diffusion, IFIP TC8 WG 8.6 International Working Conference [Atlanta, Ga.], May 8–11, 2005, p. 35–49, accessed January 20, 2021, at https://doi.org/10.1007/0-387-25590-7_3.

Cravens, A.E., 2014, Evaluating software in environmental conflict resolution—The role of MarineMap in coastal planning and decision making in California: Stanford, Ca., Stanford University Ph.D. dissertation, 214 p. [Also available at https://stacks.stanford.edu/file/druid:pz784gq9221/Cravens_FinalDissertation_Aug2014-augmented.pdf.]

Cravens, A.E., 2016, Negotiation and Decision Making with Collaborative Software—How MarineMap ‘Changed the Game’ in California’s Marine Life Protected Act Initiative: Environmental Management, v. 57, no. 2, p. 474–497, accessed January 5, 2021, at https://doi.org/10.1007/s00267-015-0615-9.

Cravens, A.E., 2018, How and why upper Colorado River basin land, water, and fire managers choose to use drought tools (or not): U.S Geological Survey Open-File Report 2018–1173, 60 p., accessed January 5, 2021, at https://doi.org/10.3133/ofr20181173.

Cravens, A.E., and Ardoin, N.M., 2016, Negotiating credibility and legitimacy in the shadow of an authoritative data source: Ecology and Society, v. 21, no. 4, p. art30, accessed January 5, 2021, at https://doi.org/10.5751/ES-08849-210430.

Cusumano, M.A., and Yoffie, D.B., 1999, Software development on Internet time: Computer, v. 32, no. 10, p. 60–69, accessed January 5, 2021, at https://doi.org/10.1109/2.796110.

Dale, V.H., and English, M.R., eds., 1999, Tools to aid environmental decision making: New York, Springer, accessed January 5, 2021, at https://doi.org/10.1007/978-1-4612-1418-2.

Dilling, L., and Lemos, M.C., 2011, Creating usable science—Opportunities and constraints for climate knowledge use and their implications for science policy: Global Environmental Change, v. 21, no. 2, p. 680–689, accessed January 5, 2021, at https://doi.org/10.1016/j.gloenvcha.2010.11.006.

Dunn, G., and Laing, M., 2017, Policy-makers perspectives on credibility, relevance and legitimacy (CRELE): Environmental Science & Policy, v. 76, p. 146–152, accessed January 5, 2021, at https://doi.org/10.1016/j.envsci.2017.07.005.

Fiorino, D.J., 1990, Citizen Participation and Environmental Risk—A Survey of Institutional Mechanisms: Science, Technology & Human Values, v. 15, no. 2, p. 226–243, accessed January 20, 2021, at https://doi.org/10.1177/016224399001500204.

Fleisher, L., Ruggieri, D.G., Miller, S.M., Manne, S., Albrecht, T., Buzaglo, J., Collins, M.A., Katz, M., Kinzy, T.G., Liu, T., Manning, C., Charap, E.S., Millard, J., Miller, D.M., Poole, D., Raivitch, S., Roach, N., Ross, E.A., and Meropol, N.J., 2014, Application of best practice approaches for designing decision support tools—The preparatory education about clinical trials (PRE-ACT) study: Patient Education and Counseling, v. 96, no. 1, p. 63–71, accessed January 5, 2021, at https://doi.org/10.1016/j.pec.2014.04.009.

Freeman, R.E., 2010, Strategic management—A stakeholder approach: Cambridge, Cambridge University Press, accessed January 6, 2021, at https://doi.org/10.1017/CBO9781139192675.

Geoffrion, A.M., 1983, Can MS/OR Evolve Fast Enough?: Interfaces, v. 13, no. 1, p. 10–25, accessed January 5, 2021, at https://doi.org/10.1287/inte.13.1.10.

Gould, J.D., and Lewis, C., 1985, Designing for usability—Key principles and what designers think: Communications of the ACM, v. 28, no. 3, p. 300–311, accessed January 5, 2021, at https://doi.org/10.1145/3166.3170.

Grenning, J., 2001, Launching extreme programming at a process-intensive company: IEEE Software, v. 18, no. 6, p. 27–33, accessed January 6, 2021, at https://doi.org/10.1109/52.965799.

Grêt-Regamey, A., Sirén, E., Brunner, S.H., and Weibel, B., 2017, Review of decision support tools to operationalize the ecosystem services concept: Ecosystem Services, v. 26, p. 306–315, accessed January 5, 2021, at https://doi.org/10.1016/j.ecoser.2016.10.012.

Guest, G., MacQueen, K.M., and Namey, E.E., 2012, Applied thematic analysis: Thousand Oaks, Calif., SAGE Publications, Inc., accessed January 20, 2021, at https://doi.org/10.4135/9781483384436.

Heavin, C., and Adam, F., 2022, From Decision Support to Analytics: Oxford Research Encyclopedia of Business and Management, accessed May 3, 2023, at https://doi.org/10.1093/acrefore/9780190224851.013.255.

Interaction Design Foundation, [undated], User centered design: Interaction Design Foundation website, accessed January 20, 2021, at https://www.interaction-design.org/literature/topics/user-centered-design.

Jacobs, K.L., and Buizer, J.L., 2016, Building community, credibility and knowledge—The third US National Climate Assessment: Climatic Change, v. 135, no. 1, p. 9–22, accessed January 5, 2021, at https://doi.org/10.1007/s10584-015-1445-8.

Katzenbach, J., and Smith, D.K., 1992, The wisdom of teams—Creating the high-performance organization: New York, Harvard Business Review Press, 304 p.

Keenan, P.B., 2021, Thirty years of decision support—A bibliometric view, in Papathanasiou, J., Zaraté, P., Freire de Sousa, J., eds., EURO Working Group on DSS, Integrated Series in Information Systems: Springer, Cham, accessed May 3, 2023, at https://doi.org/10.1007/978-3-030-70377-6_2.

Kelly, A., 2019, Customers, users, and stakeholders, in Kelly, A., ed., The art of agile product ownership—A guide for product managers, business analysts, and entrepreneurs: Berkeley, Calif., Apress, p. 45–48, accessed January 20, 2021, at https://doi.org/10.1007/978-1-4842-5168-3_6.

Lavery, J.V., 2018, Building an evidence base for stakeholder engagement: Science, v. 361, no. 6402, p. 554–556, accessed January 5, 2021, at https://doi.org/10.1126/science.aat8429.

Loucks, D.P., 1995, Developing and implementing decision support systems—a critique and a challenge: Journal of the American Water Resources Association, v. 31, no. 4, p. 571–582, accessed January 5, 2021, at https://doi.org/10.1111/j.1752-1688.1995.tb03384.x.

Meso, P., and Jain, R., 2006, Agile Software Development—Adaptive Systems Principles and Best Practices: Information Systems Management, v. 23, no. 3, p. 19–30, accessed January 5, 2021, at https://doi.org/10.1201/1078.10580530/46108.23.3.20060601/93704.3.

Michael, A.J., McBride, S.K., Hardebeck, J.L., Barall, M., Martinez, E., Page, M.T., van der Elst, N., Field, E.H., Milner, K.R., and Wein, A.M., 2020, Statistical Seismology and Communication of the USGS Operational Aftershock Forecasts for the 30 November 2018 Mw 7.1 Anchorage, Alaska, Earthquake: Seismological Research Letters, v. 91, no. 1, p. 153–173, accessed January 6, 2021, at https://doi.org/10.1785/0220190196.

Moser, S., 2009, Making a difference on the ground—The challenge of demonstrating the effectiveness of decision support: Climatic Change, v. 95, no. 1-2, p. 11–21, accessed January 5, 2021, at https://doi.org/10.1007/s10584-008-9539-1.

National Academies of Sciences, Engineering, and Medicine [NASEM], 2018, Future water priorities for the Nation—Directions for the U.S. Geological Survey Water Mission Area: Washington, D.C., The National Academies Press, accessed January 21, 2021, at https://doi.org/10.17226/25134.

National Research Council [NRC], 2009, Informing decisions in a changing climate: Washington, D.C., The National Academies Press, 188 p. [Also available at https://doi.org/10.17226/12626.]

National Research Council [NRC], 2012, Using science as evidence in public policy: Prewitt, K., Schwandt, T.A., Straf, M.L., eds., Washington, D.C., The National Academies Press, 110 p. [Also available at https://doi.org/10.17226/13460.]

Newman, S., Lynch, T., and Plummer, A.A., 2000, Success and failure of decision support systems—Learning as we go: Journal of Animal Science, v. 77, E-Suppl, p. 1, accessed January 20, 2021, at https://doi.org/10.2527/jas2000.77E-Suppl1e.

Nielsen, J., 1994, Usability inspection methods, in Plaisant, C., ed., CHI '94—Conference companion on human factors in computing systems—ACM Conference on Human Factors in Computer Systems, Boston, Mass., April 24–28, 1994: New York, N.Y., Association for Computing Machinery, p. 413–414, accessed January 5, 2021, at https://doi.org/10.1145/259963.260531.

Nyumba, T.O., Wilson, K., Derrick, C.J., and Mukherjee, N., 2018, The use of focus group discussion methodology—Insights from two decades of application in conservation: Methods in Ecology and Evolution, v. 9, no. 1, p. 20–32, at https://doi.org/10.1111/2041-210X.12860.

Oakley, N.S., and Daudert, B., 2016, Establishing Best Practices to Improve Usefulness and Usability of Web Interfaces Providing Atmospheric Data: Bulletin of the American Meteorological Society, v. 97, no. 2, p. 263–274, accessed May 4, 2023, at https://doi.org/10.1175/BAMS-D-14-00121.1.

Oliver, D.M., Bartie, P.J., Heathwaite, A.L., Pschetz, L., and Quilliam, R.S., 2017, Design of a decision support tool for visualising E. coli risk on agricultural land using a stakeholder-driven approach: Land Use Policy, v. 66, p. 227–234, accessed January 6, 2021, at https://doi.org/10.1016/j.landusepol.2017.05.005.

Palutikof, J.P., Street, R.B., and Gardiner, E.P., 2019, Decision support platforms for climate change adaptation—An overview and introduction: Climatic Change, v. 153, no. 4, p. 459–476, accessed May 3, 2023, at https://doi.org/10.1007/s10584-019-02445-2.

PARC, 2011, Busting the myth of the giant green button: UX Magazine, accessed January 20, 2021, at https://uxmag.com/articles/busting-the-myth-of-the-giant-green-button.

PARC, 2016, Ethnography and the PARC Copier, PARC, the Xerox Company video, 00:01:30, recorded in 1983, posted December 22, 2016, accessed November 2, 2021, at https://www.youtube.com/watch?v=DUwXN01ARYg&t=16s.

Pearman, O., and Cravens, A.E., 2022, Institutional barriers to actionable science—Perspectives from decision support tool creators: Environmental Science & Policy, v. 128, p. 317–325, accessed January 10, 2023, at https://doi.org/10.1016/j.envsci.2021.12.004.

Project Management Institute, 2021, A guide to the project management body of knowledge (PMBOK guide)—Seventh edition and the standard for project management: Newtown Square, Pa., Project Management Institute, 756 p.

Restrepo-Osorio, D.L., Stoltz, A.D., and Herman-Mercer, N.M., 2022, Stakeholder engagement to guide decision-relevant water data delivery: Journal of the American Water Resources Association, v. 58, no. 6, p. 1531–1544, accessed January 7, 2021, at https://doi.org/10.1111/1752-1688.13055.

Rigby, J.M., and Preist, C., 2023, Towards user-centered climate services—The role of human-computer interaction, in Schmidt, A., Väänänen, K., Goyal, T., Kristensson, P.O., Peters, A., Mueller, S., Williamson, J.R., and Wilson, M.L., eds., CHI ’23—Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, April 23–28, 2023, p. 1–14, accessed May 4, 2023, at https://doi.org/10.1145/3544548.3580663.

Röckmann, C., Ulrich, C., Dreyer, M., Bell, E., Borodzicz, E., Haapasaari, P., Hauge, K.H., Howell, D., Mäntyniemi, S., Miller, D., Tserpes, G., and Pastoors, M., 2012, The added value of participatory modelling in fisheries management—What has been learnt?: Marine Policy, v. 36, no. 5, p. 1072–1085, accessed January 5, 2021, at https://doi.org/10.1016/j.marpol.2012.02.027.

Rubinstein, R., and Hersh, H., 1984, The human factor—designing computer systems for people (excerpt), in Design principles and methodologies, chap. 11 of Baecker, R.M., and Buxton, W.A.S., eds., Human-computer interaction—A multidisciplinary approach: San Mateo, Calif., Morgan Kaufmann Publishers, Inc., 1987, p. 502–507.

Ruparelia, N.B., 2010, Software development lifecycle models: Software Engineering Notes, v. 35, no. 3, p. 8–13, accessed January 20, 2021, at https://doi.org/10.1145/1764810.1764814.

Saldaña, J., 2016, The coding manual for qualitative researchers (3rd ed.): London, Sage Publications.

Sandoval-Almazán, R., Luna-Reyes, L.F., Luna-Reyes, D.E., Gil-Garcia, J.R., Puron-Cid, G., and Picazo-Vela, S., 2017, Introduction, in Sandoval-Almazán, R., Luna-Reyes, L.F., Luna-Reyes, D.E., Gil-Garcia, J.R., Puron-Cid, G., and Picazo-Vela, S., eds., Building digital government strategies—Principles and practices: London, Springer International Publishing, p. 1–5, accessed January 5, 2021, at https://doi.org/10.1007/978-3-319-60348-3_1.

Shim, J.P., Warkentin, M., Courtney, J.F., Power, D.J., Sharda, R., and Carlsson, C., 2002, Past, present, and future of decision support technology: Decision Support Systems, v. 33, no. 2, p. 111–126, at https://doi.org/10.1016/S0167-9236(01)00139-7.

Stoltz, A.D., Cravens, A.E., Lentz, E., and Himmelstoss, E., 2023, User engagement to improve coastal data access and delivery: U.S. Geological Survey Scientific Investigations Report 2023–5081, 29 p., accessed July 2023, at https://doi.org/10.3133/sir20235081.

Tom, E., Aurum, A., and Vidgen, R., 2013, An exploration of technical debt: Journal of Systems and Software, v. 86, no. 6, p. 1498–1516, accessed January 5, 2021, at https://doi.org/10.1016/j.jss.2012.12.052.

Ulibarri, N., Cravens, A.E., Nabergoj, A.S., Kernbach, S., and Royalty, A., 2019, Creativity in research—Cultivate clarity, be innovative, and make progress in your research journey: Cambridge, Cambridge University Press. [Also available at https://doi.org/10.1017/9781108594639.]

Uran, O., and Janssen, R., 2003, Why are spatial decision support systems not used? Some experiences from the Netherlands: Computers, Environment and Urban Systems, v. 27, no. 5, p. 511–526, accessed January 6, 2021, at https://doi.org/10.1016/S0198-9715(02)00064-9.

Usability.gov, [undated], User-centered design basics: U.S. General Services Administration webpage, accessed January 5, 2021, at https://www.usability.gov/what-and-why/user-centered-design.html.

U.S. Geological Survey [USGS], 2021, U.S. Geological Survey 21st-Century Science Strategy 2020–2030: U.S. Geological Survey Circular 1476, 20 p., accessed October 15, 2022, at https://doi.org/10.3133/cir1476.

VanderMolen, K., Wall, T.U., and Daudert, B., 2019, A Call for the Evaluation of Web-Based Climate Data and Analysis Tools: Bulletin of the American Meteorological Society, v. 100, no. 2, p. 257–268, accessed May 3, 2023, at https://doi.org/10.1175/BAMS-D-18-0006.1.

van der Molen, F., Swart, J.A.A., and van der Windt, H.J., 2018, Trade-offs and synergies in joint knowledge creation for coastal management—Insights from ecology-oriented sand nourishment in the Netherlands: Journal of Environmental Policy and Planning, v. 20, no. 5, p. 564–577, accessed January 5, 2021, at https://doi.org/10.1080/1523908X.2018.1461082.

Vennix, J.A.M., 1995, Building consensus in strategic decision making—System dynamics as a group support system: Group Decision and Negotiation, v. 4, no. 4, p. 335–355, accessed January 5, 2021, at https://doi.org/10.1007/BF01409778.

Wallach, D., and Scholz, S.C., 2012, User-centered design—Why and how to put users first in software development, in Maedche, A., Botzenhardt, A., and Neer, L., eds., Software for people: Berlin, Heidelberg, Springer, p. 11–38, accessed January 5, 2021, at https://doi.org/10.1007/978-3-642-31371-4_2.

White, D.D., Wutich, A., Larson, K.L., Gober, P., Lant, T., and Senneville, C., 2010, Credibility, salience, and legitimacy of boundary objects—Water managers’ assessment of a simulation model in an immersive decision theater: Science & Public Policy, v. 37, no. 3, p. 219–232, accessed January 20, 2021, at https://doi.org/10.3152/030234210X497726.

Wilson, C., 2014, Semi-structured interviews, chap. 2 of Wilson, C., ed., Interview techniques for UX practitioners—A user-centered design method: Boston, Mass., Morgan Kaufmann, p. 23–41, accessed January 5, 2021, at https://doi.org/10.1016/B978-0-12-410393-1.00002-8.

Wong-Parodi, G., Mach, K.J., Jagannathan, K., and Sjostrom, K.D., 2020, Insights for developing effective decision support tools for environmental sustainability: Current Opinion in Environmental Sustainability, v. 42, p. 52–59, accessed January 7, 2021, at https://doi.org/10.1016/j.cosust.2020.01.005.

Young, T.J., 2015, Questionnaires and surveys, chap. 11 of Hua, Z., ed., Research methods in intercultural communication—A practical guide: Hoboken, N.J., John Wiley & Sons, Inc., p. 163–180, accessed January 7, 2021, at https://doi.org/10.1002/9781119166283.ch11.

Appendix 1. Decision Support Tool Survey

The original survey documents are available as a PDF file at https://doi.org/10.3133/sir20235076. These documents are provided for context and have not been modified or brought to U.S. Geological Survey standards during the review of this publication.

Abbreviations

  • DOI Department of the Interior

  • IT information technology

  • USGS U.S. Geological Survey

Appendix 2. USGS Decision Support Tools Identified by Survey Respondents

The list below presents tool names exactly as reported by survey respondents. Except for removing identical responses, we have not edited or validated these tool names.
  1. Aftershock “scenarios”
  2. agro-climatology analysis tool
  3. Annual Brome Adaptive Management (ABAM) Decision Support Tool
  4. ARkStorm
  5. BatTool
  6. California Volcano Exposure Storymap
  7. CarpDat
  8. Chesapeake Bay Environmental Justice and Equity Dashboard
  9. Chesapeake Bay Watershed Data Dashboard
  10. Community of Iliamna, Littell, J.S., Fresco, N., Toohey, R.C., and Chase, M., editors. 2020. Looking Forward, Looking Back: Building Resilience Today Community Report. Aleutian Pribilof Islands Association. Iliamna and Fairbanks, AK. 48 pp.
  11. Coordinated Assessments Partnership Data Exchange
  12. COVID Dashboard
  13. Drought Streamflow Probabilities in the Northeast
  14. Early Warning Explorer (EWX)
  15. Early Warning Explorer-Lite (EWX-Lite)
  16. EverForecast
  17. EverSnail
  18. EverView
  19. Fauquier County Groundwater Recharge
  20. FishTracks
  21. FishVis Mapper
  22. Flood Inundation Mapper
  23. Forecasts
  24. GCLAS
  25. Great Lakes Coastal Wetland Restoration Assessment
  26. Groundwater Guardian
  27. GSFLOW
  28. Habitat Metric Integration Project
  29. HayWired
  30. Hazard Exposure Reporting and Analytics (HERA)
  31. HSPF
  32. Ecosheds
  33. Ice Jams
  34. Interactive Catchment Explorer (ICE)
  35. IGEMS
  36. Illinois River Catch Database
  37. Kentucky Drought Monitor
  38. Management Unit Prioritization Tool
  39. ModelMuse
  40. MODFLOW
  41. MODFLOW GUI
  42. Monarch Conservation and Planning Tools
  43. MonitoringResources.org
  44. MOViE
  45. Multi-Hazard Planning Tool
  46. Multi-hazard scenarios
  47. National Water Census Portal
  48. National Water Dashboard
  49. NEMI
  50. North Carolina Stochastic Empirical Loading and Dilution Model Catalog
  51. Oahu Tsunami Evacuation app
  52. Operational aftershock forecasts
  53. PAMF model
  54. Pedestrian Evacuation Analyst
  55. Phragmites Decision Support Tool
  56. PRMS
  57. QWST
  58. Scenarios
  59. Science in the Great Lakes Mapper
  60. SedLOGIN
  61. SGMApy
  62. ShakeOut
  63. Shenandoah Wastewater Mapper
  64. SLEDS
  65. Sparrow DSS
  66. SutraGUI
  67. TapTool
  68. Texas Water Dashboard
  69. The Land Treatment Exploration Tool
  70. TRAILS (Trail Routing, Analysis, and Information Linkage System)
  71. TrendPowerTool
  72. tse.ecosheds.org
  73. Tsunami Summit
  74. Virginia Monitoring sites Mapper
  75. Water Data for the Nation
  76. Water Point Monitoring
  77. Water Quality Portal
  78. WQ_Review

Appendix 3. Interview Protocol

Introduction: The Community for Data Integration project that we are working on is called "So you want to build a decision support tool? Successes, barriers, and lessons learned for tool design and development."
We are interviewing U.S. Geological Survey (USGS) employees to understand how the USGS creates tools that shape how users solve problems and make decisions. We are working to understand what researchers should consider before diving into tool design and development.
We would like to interview you about your experience designing, initiating, or implementing decision-support projects. You do not need to answer any question that you do not feel comfortable answering. Please feel free to end the interview at any time by letting me know or simply closing your browser.
This interview will not take more than one hour to complete and will be recorded with your consent. Do you consent to recording this interview? If you do not consent to a recording, we can still continue with the interview, and I will take notes.
Your contact information will not be shared beyond the research team and will only be used to initiate follow-up communication with you if needed.
Do you consent to participate in this interview?
Have interviewee introduce themselves.
Block 1. Their understanding of DSTs
  • How do you define Decision Support Tools? Why?

  • What role do DSTs play in your larger scientific research agenda?

  • How many DSTs have you been involved in creating?

Block 2. DST Project Information—Ask generally OR about a specific tool if the interviewee has designed only one tool. Ask for examples.
  • What type of decision(s) was the tool(s) built to support and why?

  • How was this decision identified?

  • How does the tool support the decision?

Block 3. Stakeholder/User Engagement
  • Who was the intended audience for the tool(s)?

  • How was this audience chosen?

  • How did you engage with members of this audience? When?

  • Why did you choose these methods of engaging with the potential audience?

  • Have these relationships been maintained?

Block 4. Designing and Building
  • What was your role in creating the tool(s)? What did the rest of the team look like?

  • Who was your software developer and what role did they play in your team?

  • How did you find the developer? When did they join your team?

  • How did you/your team choose the software/algorithms/coding frameworks? What criteria were included/how conscious was this choice? If this person wasn’t you, who was responsible for this decision?

  • Was anyone from outside USGS involved? Why?

Block 5. Measuring Success
  • How do you define success for decision support?

  • Was/is the tool successful? How do you know?

  • What were your expectations for tool usage? Were they met?

  • Is it maintained? How? By whom?

Block 6. Barriers—Ask about barriers in general, across multiple tools
  • What barriers did you and your team face in building the tool?

  • How were these barriers overcome (or not)?

  • How would you have done this process differently in hindsight?

  • Were there any resources you wish you had that would help the process?

Block 7. Anything else?
Thank you for your time today. Is there anything else you would like to add or anyone else you think it would be useful to reach out to?

Appendix 4. Codebook

[DST, decision support tool]

Code name | Code definition | Number of mentions by interviewees
Analytics | Information related to Google Analytics or other methods of tool use evaluation | 20
Barriers | Barriers to creating a DST | 84
Decisions Supported | Discrete decisions supported by DSTs | 32
Hindsight | Interviewee responses to the question: Are there things you would have done differently in hindsight? | 65
Maintained | Interviewee responses to the question of whether the DST is/was maintained | 49
Process | Information related to the overall process of building a DST | 26
Iterative | Information from interviewees who describe using an iterative process | 10
Quotations | Quotations from interviewees across all codes to be added to the report | 46
Resources Needed | Interviewee responses to the question: Are there any other resources that you wish you had that would’ve helped in the process of building this tool? | 65
Software | Interviewee responses that spoke specifically to different types of software used in DST design and development | 36
Success Definition | Interviewee definitions of DST success | 26
Successful | DSTs that interviewees identified as successful | 49
Team | Information related to the DST design and development team, including project roles and when different roles were added to the team | 64
User Engagement | Quotes related to user engagement that took place before, during, or after DST development | 86
Users | Information about the intended end users of a described DST | 38

Abbreviations

  • CIO Chief Information Officer
  • DOI Department of the Interior
  • DST decision-support tool
  • EPA U.S. Environmental Protection Agency
  • FEMA Federal Emergency Management Agency
  • GIS geographic information system
  • GLCWRA Great Lakes Coastal Wetland Restoration Assessment
  • HCD human-centered design
  • IT information technology
  • PRA Paperwork Reduction Act
  • SNC State of Our Nation’s Coast
  • UCD user-centered design
  • UI user interface
  • USGS U.S. Geological Survey
  • UX user experience
  • VHP Volcano Hazards Program

For additional information, contact:

Director, Integrated Information Dissemination Division

U.S. Geological Survey,

12201 Sunrise Valley Drive,

Reston, Virginia 20192

Publishing support provided by the Baltimore and Denver Publishing Service Centers

Disclaimers

Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Although this information product, for the most part, is in the public domain, it also may contain copyrighted materials as noted in the text. Permission to reproduce copyrighted items must be secured from the copyright owner.

Suggested Citation

Stoltz, A.D., Cravens, A.E., Herman-Mercer, N.M., and Hou, C.Y., 2023, So, you want to build a decision-support tool? Assessing successes, barriers, and lessons learned for tool design and development: U.S. Geological Survey Scientific Investigations Report 2023–5076, 32 p., https://doi.org/10.3133/sir20235076.

ISSN: 2328-0328 (online)

Publication type: Report
Publication subtype: USGS Numbered Series
Title: So, you want to build a decision-support tool? Assessing successes, barriers, and lessons learned for tool design and development
Series title: Scientific Investigations Report
Series number: 2023-5076
DOI: 10.3133/sir20235076
Year published: 2023
Language: English
Publisher: U.S. Geological Survey
Publisher location: Reston, VA
Contributing office(s): Fort Collins Science Center, WMA - Integrated Information Dissemination Division
Description: Report: vi, 32 p.; 1 appendix
Online only: Yes
Additional online files: Yes