The USGS has a long history of analyzing and evaluating the stream-gaging program. The first known nationwide review of the stream-gaging program was conducted between 1953 and 1958. The purpose of the review was to design a hydrologic network of stations in accordance with principles described by Langbein (1954). During this review, stations were classified according to the primary uses of the data as either water management or hydrologic network (regional hydrology). Within the hydrologic network, the concept of primary and secondary stations was developed. The primary stations were used for long-term sampling of streamflow, and the secondary stations, which were operated for 5 to 10 years, were used to obtain geographic coverage of streamflow characteristics. Estimation of long-term statistics at the secondary stations was based on the correlation of monthly flows with the long-term primary stations. Recommendations were made for improving the stream-gaging program.
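The correlation-based record extension mentioned above can be illustrated with a brief sketch. The example below is hypothetical (the station data, the log-space linear regression, and all numbers are assumptions for illustration, not the specific procedure used in the 1953-58 review): concurrent monthly flows at a short-record secondary station are regressed against those at a long-record primary station, and the fitted relation is applied to the primary station's full record to estimate a long-term statistic at the secondary site.

```python
# Hypothetical sketch of record extension by correlation of monthly flows.
# Not the actual 1953-58 procedure; the data and the simple log-space
# regression are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly flows, in cubic feet per second.
primary_full = rng.lognormal(mean=5.0, sigma=0.6, size=600)  # 50-year primary record
overlap = slice(480, 600)                                    # 10 years of concurrent record
secondary_obs = 0.4 * primary_full[overlap] * rng.lognormal(0.0, 0.2, 120)

# Fit a linear relation between concurrent log-transformed monthly flows.
slope, intercept = np.polyfit(np.log(primary_full[overlap]), np.log(secondary_obs), 1)

# Apply the relation to the full primary record to estimate a long-term mean
# at the secondary station.
secondary_est = np.exp(intercept + slope * np.log(primary_full))
print("Estimated long-term mean monthly flow at the secondary station:",
      round(float(secondary_est.mean()), 1), "cubic feet per second")
```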
The second national study of the streamflow data-collection program was conducted in 1969-70. Stations were classified on the basis of the principal uses of the data: current use (water management), planning and design (regional hydrology), and definition of long-term trends and the stream environment. The goals of the program for planning and design data were to provide information equivalent to 25 years of record for principal streams (which drain 500 square miles or more) and 10 years for minor streams (which drain less than 500 square miles) (Carter and Benson, 1969). These data were to be provided either by a gaged record or by equations that relate streamflow characteristics to basin characteristics. In general, these goals were met only in the humid Eastern United States. The results are described in a series of statewide reports entitled "A Proposed Streamflow Program for [State Name]." A summary of the nationwide study, including recommendations for improving the program, is provided by Benson and Carter (1973).
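The regional equations referred to above relate a streamflow characteristic to measurable basin characteristics so that estimates can be made at ungaged sites. The sketch below is a hypothetical illustration of that idea (the basin data, the choice of explanatory variables, and the log-linear model form are assumptions, not the equations actually developed in the study):

```python
# Hypothetical sketch of a regional regression relating a streamflow
# characteristic to basin characteristics; data and model form are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_gaged = 40

# Synthetic basin characteristics for gaged basins.
drainage_area = rng.uniform(10, 2000, n_gaged)      # square miles
annual_precip = rng.uniform(20, 60, n_gaged)        # inches

# Synthetic "observed" mean annual flows (cubic feet per second) with scatter.
mean_annual_flow = (0.05 * drainage_area**0.95 * annual_precip**1.1
                    * rng.lognormal(0.0, 0.15, n_gaged))

# Fit a log-linear regression: log Q = b0 + b1*log(area) + b2*log(precip).
X = np.column_stack([np.ones(n_gaged), np.log(drainage_area), np.log(annual_precip)])
(b0, b1, b2), *_ = np.linalg.lstsq(X, np.log(mean_annual_flow), rcond=None)

# Apply the fitted equation to an ungaged basin of 350 square miles and
# 42 inches of annual precipitation.
estimate = np.exp(b0) * 350.0**b1 * 42.0**b2
print(f"Estimated mean annual flow at the ungaged site: {estimate:.0f} cubic feet per second")
```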
The most recent nationwide evaluation was conducted during the mid-1980's to define the cost effectiveness of the operation of the stream-gaging program (Thomas and Wahl, 1993). The objective of the nationwide study was to define and document the most cost-effective methods of furnishing streamflow information. The study involved the following phases: an analysis of the data uses and availability and documentation of the sources of funding for each station; an evaluation of the utility of using less costly alternative methods, such as hydrologic-flow-routing models and statistical methods, to provide the needed streamflow information; and an analysis of the cost-effective operation of the stream-gaging program that relates the accuracy of the streamflow records to various operating budgets. A prototype study for the nationwide analysis was described by Fontaine and others (1984). Statewide analyses were performed by hydrologists in the USGS State Offices. The reports that described the analyses for the individual States were summarized and referenced in Thomas and Wahl (1993).
The results of the poll on data uses are summarized in figures 14 and 15. The categories were described earlier in this report in the section on "Uses of Streamflow Data." The percentages in figure 14 total more than 100 percent because data uses for a given station may be included in more than one category.
Figure 14. Percentages of stations by category of use (from Thomas and Wahl, 1993).
Figure 15. Distribution of stations as a function of the number of data-use categories (from Thomas and Wahl, 1993).
Although stations usually are established for a specific reason, the data collected are useful for many purposes. On average, there are more than two categories of data use per station. The uses of data from only about 20 percent of the stations fall into a single data-use category. Of the stations with only a single data-use category, the greatest numbers are in regional hydrology (34.4 percent) and hydrologic systems (30.2 percent). More than 1,500 stations (25 percent) have four or more data-use categories.
The nationwide analysis documented that multiple uses were being made of data collected at stations in the stream-gaging program, that simulated flows from hydrologic-flow-routing models and statistical methods generally were not of sufficient accuracy for most uses, and that the program was being operated in an efficient and cost-effective manner (Thomas and Wahl, 1993). Network analyses and program evaluation will continue to play a prominent role in the management of the program. Future directions will likely involve the development of techniques for a more coordinated analysis of water-quality, ground-water, and streamflow networks.
Perhaps the biggest challenge that confronts the stream-gaging program, and indeed the hydrologic community, is that of maintaining long-term and consistent nationwide data sets. Agreement is widespread about the need for such data sets. Because of the manner in which the data program is funded, the networks of stations are dynamic. Interest in and the need for hydrologic information vary in time and space. This variation of interest, coupled with budget limitations, means that all needs simply cannot be met. In some instances, monitoring activities at a particular site are discontinued because the needs of the supporting agencies have been met. In other cases, even though the needs have not yet been met, budget allocations dictate reductions in hydrologic data-collection activities. Since the late 1960's, the number of stations that provide data appropriate for studies of climate variability has declined sharply (fig. 16). This is but one striking example of how budget constraints have reduced the availability of the streamflow data needed to address an important current issue.
The existing data networks should be viewed by hydrologic scientists as opportunities upon which they can build. To optimize these opportunities, it is first necessary to define the characteristics of the data sets that hydrologic scientists need. These characteristics include the variables to be measured and the locations, frequencies, durations, and accuracies of the measurements. They should be derived from knowledge about the hydrologic phenomena to be explored and from the hypotheses to be tested [National Research Council, 1991, p. 221].
Figure 16. Number of stations in a given year with acceptable data for studying climate fluctuations (from Slack and Landwehr, 1992).
Despite the increasingly recognized importance of data records of long duration, only a few dedicated research organizations have successfully maintained high-quality data collection efforts over periods of 50 to 200 years. Furthermore, these organizations have experienced difficulty in committing limited research monies year after year to an activity that is frequently termed "monitoring," often with pejorative overtones [National Research Council, 1991, p. 222].
Maintaining the stations and equipment needed to gage the Nation's streams also will pose a large challenge for the future. Of the 7,292 stations in operation in 1994, many have been in place for more than 20 years. Because of their streamside locations, the stations and cableways, as well as the associated recorders and equipment, require significant amounts of upkeep. In addition, the changes in technology related to water-level sensors and data recorders in recent years have been phenomenal. Replacing existing sensors and recorders with equipment based on the newer technology not only will be costly, but will require fundamental changes in modes of operation. Past improvements in sensors and recorders have increased the reliability of the streamflow records (Thomas and Wahl, 1993) and decreased the frequency of visits to the station for equipment repair. Further improvements in technology should result in savings in the labor required to operate the stream-gaging program and in improved accuracy of streamflow records. For example, satellite telemetry or cellular phone technology gives USGS staff knowledge of equipment failure and unusual flow conditions, so that costly field visits can be scheduled when they are most needed, thereby reducing the average number of visits to a station.
One of the most pressing and immediate challenges relates to the mechanisms for releasing interim data. Traditionally, the stream-gaging program has been oriented toward producing data to be placed in the archives for use in future analyses. Those persons or agencies with an immediate need for data generally participated in the collection of the data and, therefore, had ready access to the interim data. With the advent of data transmission by means of satellites, the needs for and uses of real-time data have increased significantly. Forecasters and managers now rely on interim data received in near real time to make operational decisions. The data upon which those decisions are based must be the best that can possibly be produced in a short time frame.
A related problem is that of access to the archived data. Historically, data have been archived in the WATSTORE data base, which evolved from the computer technology of 20 to 30 years ago and was designed initially for experienced users. Interest in and need for access to the archived data extend well beyond what is possible through the WATSTORE system and technology. As noted earlier, the USGS will soon make the data accessible by means of the Internet. When that is accomplished, many potential users will have ready access to the archived data.