
CHAPTER 2. METHODOLOGY

Peter Hamilton, Burton Jones, John Largier, Marlene Noble, Leslie Rosenfeld, and Jingping Xu



2.1. Surfzone Bacteria Measurements

Three types of fecal indicator bacteria (FIB) were sampled: (1) total coliform (includes the genera Escherichia, Citrobacter, Enterobacter, and Klebsiella); (2) fecal coliform (includes the thermotolerant genera Escherichia and Klebsiella) or alternately Escherichia coli; and (3) enterococci (includes Enterococcus faecalis, E. faecium, E. gallinarum, and E. avium). It was estimated that E. coli represents roughly 90% of fecal coliform values, so E. coli densities were adjusted up by a factor of 1.1. These organisms are commonly found in the feces of humans and other warm-blooded animals. Although some strains are ubiquitous and not related to fecal pollution, their presence in water is used as an indication of fecal pollution and the possible presence of enteric pathogens. Epidemiological studies have indicated that swimming-associated gastroenteritis is directly related to the quality of bathing water. FIBs have been demonstrated to be valuable in determining the extent of fecal contamination in recreational surface waters, and their density in recreational water samples has been shown to have a predictive relationship with swimming-associated gastroenteritis at marine and fresh-water bathing beaches.
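The 1.1 adjustment factor follows directly from the assumed 90% fraction, since 1/0.9 ≈ 1.1. A minimal sketch of the conversion (the function and constant names are illustrative, not from the study):

```python
# Hedged sketch: scaling E. coli densities up to approximate fecal coliform
# densities, assuming (per the text) E. coli represents ~90% of fecal coliform.
ECOLI_FRACTION = 0.9  # assumed share of fecal coliform represented by E. coli


def ecoli_to_fecal_coliform(ecoli_mpn: float) -> float:
    """Scale an E. coli density (MPN/100 mL) to an estimated fecal
    coliform density by the factor 1/0.9, i.e., about 1.1."""
    return ecoli_mpn / ECOLI_FRACTION
```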

2.1.1. Sampling and Laboratory Methodology

Routine surfzone samples were collected in ankle-depth water 5 days/week, including one weekend day. Samples were generally collected between 0500 and 1000 PST, with sampling proceeding from the northernmost station to the southernmost. Based on 5 years of daily sampling at 24 Los Angeles area sites, Leecaster and Weisberg (2001) estimated that 70% of AB411 single-sample exceedances last only one day, and less than 10% last more than 3 days. Sampling five times per week, as OCSD does, probably misses about 20% of the total and fecal coliform exceedances.

During the six HB PIII hydrographic surveys, hourly surfzone samples were collected for analysis of FIB. This sampling began at 1200 PST on Day 1 of the surveys and concluded at 1200 PST on Day 3.

All FIB samples were collected in sterile 100-mL sample jars, stored on ice at 4-10°C, and transported to OCSD’s laboratory within 6 hours. Sample analysis began as soon as possible after collection, and always within 2 hours of arrival at the laboratory. Sample collection, preservation, and handling procedures are described in detail in OCSD (2001).

During this study OCSD used three methods to quantify coliform bacteria. For daily permit sampling, a multiple tube fermentation (MTF) test was done for total and fecal coliform bacteria. For the total coliform group this includes all aerobic and facultative anaerobic, gram-negative, non-spore forming, rod-shaped bacteria that ferment lactose within 48 ± 3 hours at an incubation temperature of 35.0 ± 0.5°C. For the fecal coliform group, this includes all aerobic and facultative anaerobic, gram-negative, non-spore forming, rod-shaped bacteria that ferment lactose within 24 ± 2 hours at an elevated incubation temperature of 44.5 ± 0.2°C. Results of MTF tests are reported in terms of the Most Probable Number (MPN) of organisms per 100 mL. The MPN is based on specific probability formulas, and is an estimate of the mean density of coliform bacteria in the sample (Table 2-1).
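The probability basis of the MPN can be illustrated with a maximum-likelihood sketch: assuming organisms are Poisson-distributed in the sample, the density that best explains the observed pattern of positive tubes across dilutions is found numerically. This is a simplified illustration, not OCSD's procedure; standard MPN tables also incorporate corrections not shown here.

```python
import math


def mpn_per_100ml(positives, tubes, volumes_ml):
    """Maximum-likelihood MPN estimate for a multiple tube fermentation
    test: positives[i] of tubes[i] tubes, each inoculated with
    volumes_ml[i] mL of sample, tested positive.  Returns MPN/100 mL.
    Illustrative sketch only."""
    if all(p == 0 for p in positives):
        return 0.0
    if all(p == n for p, n in zip(positives, tubes)):
        return math.inf  # all tubes positive: above the quantifiable range

    # Solve d(log L)/d(lambda) = 0 for lambda (organisms per mL), where
    # log L = sum p*log(1 - exp(-lam*v)) - (n - p)*lam*v over dilutions.
    def score(lam):
        s = 0.0
        for p, n, v in zip(positives, tubes, volumes_ml):
            s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s

    lo, hi = 1e-9, 1e4  # bracket the root over a wide density range
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric bisection
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 100.0 * math.sqrt(lo * hi)
```

For the classic 5-tube series at 10, 1, and 0.1 mL with 4, 2, and 0 positive tubes, this estimator lands near the tabulated MPN of about 22/100 mL.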

For the HB PIII project, OCSD used a chromogenic substrate coliform test, commonly known as Colilert-18®. The chromogenic substrate test utilizes defined substrate technology for the simultaneous detection of enzymes of total coliform bacteria and E. coli: a color change indicates the presence of total coliform bacteria, and fluorescence under ultraviolet light indicates the presence of E. coli. Quantification is determined by the number of cells that test positive for either FIB and is reported as MPN/100 mL. Detailed laboratory and QA/QC methodology for both methods is contained in OCSD (2001). Some samples collected by the Orange County Health Department were analyzed using the membrane filter (MF) technique.

An MF test method for enterococci in water is the standard test methodology used for OCSD’s daily permit-compliance testing. The MF test method provides a direct count of bacteria in water based on the development of colonies on the surface of the membrane filter. For the HB PIII samples, OCSD used a chromogenic substrate test for enterococci, commonly known as Enterolert®. Detailed sample analysis procedures are described in OCSD (2001). The MF method was also used to determine fecal coliform concentrations in some of the samples.

Minimum and maximum detection limits vary depending on the method used and the sample dilution. Minimum detection limits for total and fecal coliform are generally 10 or 20 MPN/100 mL, and 2 or 10 MPN/100 mL for enterococci. The maximum detection limit for fecal coliform is always 16,000 MPN/100 mL in this data set. The maximum detection limit for total coliform is most commonly 16,000 MPN/100 mL, but occasionally takes other values (e.g., 24192, 2005). The most common maximum detection limit for enterococci is 400 MPN/100 mL, but sometimes values between 240 and 2005 are at the maximum detection limit.

Figure 2-1 shows the 95% confidence limits as a function of concentration for the MTF method and the Colilert® and Enterolert® methods. Noble et al. (2003) compared results from microbiological analyses of identical formulated samples carried out in 22 laboratories in southern California. They found no significant variation among the three methods described above, except that membrane filtration underestimates fecal coliform. They found that the variability was greatest for the MTF method, and that for all three measurement methods, intra-lab variability exceeded inter-lab variability. Table 2-2, adapted from Noble et al. (2003), shows the error bars around the AB411 single-sample standards (SS) determined from their study.

2.1.2. Interpretation of Microbiological Data

The interpretation of FIB analyses is complicated by the uncertainty in die-off rates of these bacteria and multiple sources of bacteria in the study area. In addition to the effects of dispersion and dilution, bacteria discharged into the environment do not behave in a conservative fashion. Fecal coliform bacteria are best adapted to living in a warm, dark, isotonic, pH-balanced environment with an abundance of food. Following discharge into environmental waters, bacteria levels begin to decline. Environmental factors such as salinity, heavy metals, sedimentation, coagulation and flocculation, solar radiation, nutrient deficiencies, predation, bacteriophages, algae, and bacterial toxins all impact the ultimate fate of the bacteria.

Numerous studies have examined the die-off rates in water of E. coli, the predominant member of the fecal coliform group. Of all the factors affecting survival, the two most important appear to be temperature and solar radiation, with cooler temperatures and low solar radiation the most conducive to survival. Light is less significant in extremely turbid waters, or once the coliform are deposited in the sediment. It is not clear, however, whether light inactivates the organisms directly or only makes them more susceptible to inactivation by the other factors. Because of the interaction of these factors and geographic differences among them, there is no single die-off rate that can be universally applied. In general, though, the literature supports a time for 90% of the E. coli to die off (T90) ranging from as short as four to six hours to as long as several days (Chamberlin and Mitchell, 1993).

The total and fecal coliform and enterococci concentrations measured in the surfzone between July 1, 1998, and December 31, 2001, provided by OCSD, were analyzed with respect to their temporal and spatial variability. Station locations are shown in Figures 2-2 and 2-3. The AB411 beach sanitation standards are listed in Table 1-1.

Several types of bacterial “events” were defined in order to reduce the 3 FIBs × 18 stations of data to a few variables, and to take advantage of spatial patterns appearing naturally in the data, which indicated that coliform contamination events were more localized and enterococci events more widespread (Figure 2-4). While the event definitions were formulated to ensure that all FIB concentrations exceeding AB411 standards at Huntington Beach were included, the three event types, taken together, capture all but a handful of days on which there was an AB411 exceedance anywhere in the HB PIII data set. This approach also allows the high-temporal-resolution sampling periods during the summer of 2001 to be treated in a manner consistent with the no-more-than-daily sampling during the remainder of 1998-2001. Furthermore, it avoids pitfalls associated with the minimum and maximum detection limits, which vary with the method and dilution used to determine bacteria concentration. Finally, by calculating sets of these events with both the AB411 SS as the triggering level and the AB411 monthly geometric mean standards (MM) applied to single samples as the triggering level, it is demonstrated in Chapter 3 that the timing of the bacterial events is not highly sensitive to the choice of triggering level.

When total or fecal coliform exceeded AB411 standards at one or more of the stations between, and including, stations 3N and 12N during a 24-hour period starting at 0000 PST and ending at 2359 PST, a type 1 event was designated for that day. This was done using both the SS and the MM applied to single samples.

· Total coliform > 10,000 MPN/100 mL (SS); 1,000 MPN/100 mL (MM)

· Total coliform > 1,000 MPN/100 mL and total coliform/fecal coliform < 10 (SS)

· Fecal coliform > 400 MPN/100 mL (SS); 200 MPN/100 mL (MM)

When, during a 24-hour period from 0000 PST to 2359 PST, enterococci exceeded AB411 standards [104 MPN/100 mL (SS); 35 MPN/100 mL (MM)] at 3 or more of the numbered stations from 39S to 39N (SAR, TM, D2 AES not included), including at least one of stations 3N to 12N, inclusive, a type 2 event was designated for that day. A third type of event was added to ensure that all days on which any of the SS were exceeded in the Huntington Beach area would be included in at least one type of event. A type 3 event was defined as occurring on any day (PST) during which enterococci exceeded AB411 at any station between, and including, stations 3N and 12N, and on which there was not a type 1 or type 2 event.
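The event definitions above can be sketched as a day classifier. The station set, field names, and data layout here are illustrative assumptions; the actual analysis distinguishes the full 39S-39N station list and applies both the SS and MM triggering levels.

```python
# Hedged sketch of the day-classification logic described in the text,
# using the single-sample (SS) triggering levels.  `day_data` maps a
# station label to that day's FIB maxima (MPN/100 mL); only a few core
# station labels are shown for illustration.
CORE = {"3N", "6N", "9N", "12N"}  # assumed stations between 3N and 12N
TOTAL_SS, FECAL_SS, ENT_SS = 10_000, 400, 104  # AB411 single-sample limits


def classify_day(day_data):
    """Return the event type (1, 2, 3, or None) for one PST day."""
    # Type 1: total or fecal coliform exceedance at a core station.
    for sta in CORE.intersection(day_data):
        fib = day_data[sta]
        total = fib.get("total", 0)
        fecal = fib.get("fecal", 0)
        if total > TOTAL_SS or fecal > FECAL_SS:
            return 1
        if total > 1000 and fecal > 0 and total / fecal < 10:
            return 1
    # Type 2: enterococci exceedance at >= 3 stations, at least one core.
    ent_exceed = {s for s, fib in day_data.items() if fib.get("ent", 0) > ENT_SS}
    if len(ent_exceed) >= 3 and ent_exceed & CORE:
        return 2
    # Type 3: enterococci exceedance at a core station not already covered.
    if ent_exceed & CORE:
        return 3
    return None
```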

Because hourly, round-the-clock sampling was done during the six hydrographic survey periods of the HB PIII study, those days could preferentially appear as having bacterial contamination events owing to the nighttime and/or higher-frequency sampling. A surfzone bacterial data set subsampled to daily values was therefore also created. If multiple samples were available for a station on a given PST day, the sample closest in time to the average sampling time for that station (calculated over May 1-October 31, 2001) was used. The event analysis described above was also carried out on this daily subsampled data set.


2.2. Hydrographic Measurements

Hydrographic studies were conducted for one 24-hour survey and five 48-hour surveys. Samples were collected at a 40-station sampling grid using a CTD (conductivity-temperature-depth) profiling system and also along 6 transects (labeled Towyo lines) using a towed undulating vehicle (TUV) (Figure 2-2). Onshore-offshore spacing between adjacent stations along the same transect was about 1 km, with about 2 km between adjacent stations in the alongshore direction (Figure 2-2). Surveys during the study period were conducted on May 21-22, June 19-21, July 5-7, July 19-21, August 19-21, and September 15-17, 2001. For each survey, sampling began and ended at 1200 PST.

2.2.1. Sampling Grid and Data Collection Practices

Measurements with a CTD profiling system were conducted using either a Sea-Bird Electronics SBE 9-03/SBE 11 Deck Unit (SBE 9/11) or an SBE 25 underwater unit equipped with Sea-Bird temperature and conductivity sensors and a Digiquartz pressure sensor, according to OCSD standard operating procedure (OCSD, 2001/ECMSOP 1500). A WetLabs C-Star transmissometer and a WetLabs Wet-Star chlorophyll fluorometer were powered by the CTD and logged through the CTD’s analog-to-digital converter. Sea-Bird SEASOFT software was used to facilitate data acquisition, data display, and sensor calibration. Parameters measured with the profiling system include water temperature, conductivity, salinity, dissolved oxygen, pH, chlorophyll fluorescence, and light-beam transmission. Sensors on each CTD were calibrated prior to field sampling. Once deployed, the CTD was lowered to 3 to 5 m and held for an equilibration period of 3 minutes at the first station and 90 seconds at subsequent stations. After equilibration, the CTD was brought back to the surface, then lowered to obtain the profile for the station. The CTDs were lowered to within 2 m of the bottom at a profiling rate of approximately 1.0 m/s, yielding a vertical resolution of about 12 cm, then gently lowered to set the rosette sampler on the bottom.

Discrete water samples for ammonium and FIBs were collected using a Sea-Bird Electronics Carousel Water Sampler (SBE 32/SBE 33) multi-bottle array at 1 m, at 5-m depth intervals throughout the water column, and at the bottom. These samples were taken during the upcast portion of the CTD cast (OCSD, 2001/SOP1501). Surface-water samples were collected at selected stations using a dip-pole.

Because of the intensive nature and duration of each sampling event, a considerable amount of coordination was required to get sampling equipment and crews on site, samples back to the laboratory, and laboratory staff in place to do the sample analysis. The offshore study area was divided into two sub-areas, downcoast CTD and upcoast CTD, each sized so that sampling could be completed in a 4-hour time span. Each sub-area was sampled by a rotating schedule of three boats, managed to ensure that all of the sampling requirements were met and that sampling crews and skippers were not fatigued. Tables 2-3a and 2-3b list the offshore sampling sequence and timing for each survey sub-area.

Four different CTDs were used in the offshore portion of this project. Data were captured at the highest rate possible with each instrument: 24 scans per second at the high end and 8 scans per second at the low end. Initially, these data were processed using the software provided by the CTD manufacturer to apply small time offsets between different sensors, either because of delays in water reaching sensors through pumping systems or because certain sensors have a response delay. Sea-Bird SEASOFT software was used to apply calibration/conversion functions to produce final engineering units from raw sensor signals. Data were processed according to OCSD SOP 1505.1. At this point, ASCII data files containing all recorded data at each unique sampling location were generated.

A further data-reduction process was completed in post-processing to eliminate data recorded before and after the downcast, to reduce the very large volume of data to a more manageable level, and to prepare a data set amenable to graphical and statistical analysis. The downcast portion initially includes all data at up to 24 scans per second, which at typical descent rates of 1 m/s could mean discrete readings as close as 5 cm apart vertically.

2.2.2. Post-processing Methods

Data post-processing was accomplished by loading the ASCII data into the Interactive Graphical Ocean Data Systems (IGODS) software. The process involves retaining a site’s downcast data for the water column and identifying possible outliers based upon the difference between the downcast standard deviation and a five-point (midpoint is the evaluated datum) running-average standard deviation. A datum was flagged if it exceeded the criterion limit for a measured parameter: temperature (0.5), salinity (0.3), dissolved oxygen (0.5), and transmissivity (0.5). The vertical profiles for each parameter were evaluated graphically and statistically to identify outlier points. These points were removed from the raw downcast data and documented in a text file that listed the points removed from each station. Upon removal, the cast was re-evaluated until all outliers were removed and the site’s data were accepted. The outlier criteria were only guidelines, so points exceeding the limits were not automatically discarded and points below the limits were not automatically retained. The next data-reduction step was to produce a value at each integer 1-m depth. For each parameter, the data were scanned to find the nearest data points in depth immediately above and below the integer depth, and these two values were averaged. After the 1-m depth averaging was completed, missing data were recovered by examining the upcast data and using the value at the appropriate depth. The final step was a review of the graphical representations of each parameter to determine whether further outlier removal was necessary. A minimal number of points were removed during this review.
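Two of the steps above can be sketched in a few lines: a five-point running screen for candidate outliers (here simplified to a running mean rather than the running-standard-deviation criterion used in IGODS) and the reduction of a cast to 1-m values by averaging the nearest samples above and below each integer depth. Threshold values follow the text; everything else is illustrative.

```python
# Hedged sketch, not the IGODS implementation.  Criterion limits per
# parameter, as given in the text (flags are guidelines only).
LIMITS = {"temperature": 0.5, "salinity": 0.3, "oxygen": 0.5, "transmissivity": 0.5}


def flag_outliers(values, limit):
    """Return indices whose departure from the centered five-point
    running mean exceeds `limit` (a simplification of the running-
    standard-deviation screen described in the text)."""
    flagged = []
    for i in range(2, len(values) - 2):
        window = values[i - 2:i + 3]
        mean = sum(window) / 5.0
        if abs(values[i] - mean) > limit:
            flagged.append(i)
    return flagged


def bin_to_integer_depths(depths, values):
    """Average the nearest sample above and below each integer depth.
    Assumes `depths` increases monotonically (a downcast)."""
    out = {}
    for z in range(int(depths[0]) + 1, int(depths[-1]) + 1):
        above = max((d, v) for d, v in zip(depths, values) if d <= z)
        below = min((d, v) for d, v in zip(depths, values) if d >= z)
        out[z] = (above[1] + below[1]) / 2.0
    return out
```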

2.2.3. Towed Undulating Vehicle (TUV) Measurements and Methods

This study also utilized a Guildline Minibat TUV equipped with a Sea-Bird 9/11 CTD and an in situ pump that pumped at a rate of 7 to 8 liters per minute, using a vane pump with a graphite impeller attached to a well-pump motor. The water was pumped through a nylon tube running through the center of the tow cable to the ship’s deck and into the shipboard laboratory. There, the flow from the pump system ran through a Sea-Bird 25 CTD for measurement of temperature and conductivity.

Batch FIB samples were obtained from the flow stream and time-stamped with bar-code labels. The samples were placed on ice, kept in the dark, and transported back to OCSD’s microbiology laboratory. A portion of the in situ pump flow stream was drawn continuously into a two-channel Technicon Autoanalyzer segmented flow system for analysis of nitrate and ammonium. Because nitrite was not measured separately, the nitrate measurement represented nitrate+nitrite. We assumed that nitrite was relatively low throughout the area and that the measurement represented predominantly nitrate, based on Mann and Lazier (1991) and Lalli and Parsons (1993).

Transit time of the water from the in situ pump on the tow fish to the laboratory was determined by comparing the salinity signals from the in situ CTD conductivity sensor and the laboratory CTD at the end of the flow tube. The transit time in the tube was the difference between the peaks in the two CTD salinity measurements, and was approximately one minute. Samples taken from the in situ flow stream were aligned with the in situ measurements using the delay times from the tow vehicle to the shipboard laboratory plus the additional lag times introduced by the Autoanalyzer. Once the delay times were applied, the aligned data were compared with correlated variables to validate the delays. For example, nitrate is known to increase with depth, so the highest concentrations are expected in the deepest, densest water. Similarly, ammonium concentrations were compared with the salinity signal, with the expectation that the highest ammonium concentrations would be associated with the lowest salinities, especially near the outfall. In some cases, small corrections to the delay times were made to compensate for observed differences.
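A common way to automate this kind of lag estimation is to find the sample offset that maximizes the correlation between the two salinity series. The sketch below is a generic lagged-correlation search, not the study's procedure; series names and the lag search range are assumptions.

```python
# Hedged sketch: estimate the tube transit time (in samples) as the lag
# at which the laboratory salinity series best correlates with the
# in situ salinity series.
def best_lag(in_situ, lab, max_lag):
    """Return the lag (lab trailing in_situ) with the highest Pearson
    correlation over the overlapping samples."""

    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5 if sxx and syy else 0.0

    return max(range(max_lag + 1),
               key=lambda k: corr(in_situ[:len(in_situ) - k], lab[k:]))
```

With a known sample interval, the winning lag converts directly to a transit time (e.g., a 3-sample lag at 20-second sampling implies a one-minute delay).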

Mapping tracks were designed with a primary axis in the cross-shelf direction centered on the OCSD outfall diffuser. The TUV was towed at approximately 4 kts and undulated between the surface and either 5 m above the bottom or about 70-100 m depth, whichever was shallower.

2.2.4. Preparation of Three-dimensional Figures

To prepare three-dimensional volumetric images of the data sets, data from the CTD and Towyo transects for each survey were combined and interpolated to a three-dimensional grid using GMS software. Interpolations were done using an inverse-distance-weighted algorithm that estimated the concentration in each cell of the grid from the actual measurements. This interpolation provided three-dimensional curvilinear portrayals of surfaces; only surfaces within the grid are shown. FIB concentrations were log10 transformed prior to interpolation, then back-transformed for display purposes. All other measures were interpolated based on the recorded values. For water-quality monitoring purposes, ammonium is usually reported in concentrations of milligrams per liter (mg/L); oceanographic concentrations are usually reported in micromolar (µM) units. One mg/L of ammonium (NH4) corresponds to 55.6 µM. Iso-surfaces (surfaces of constant concentration) were plotted for each measure at a concentration that portrayed the patterns of the measure on each sampling day. For all measures except salinity, the iso-surface encloses the volume with concentrations equal to or greater than the iso-surface concentration. Salinity iso-surfaces enclose volumes of equal or lesser salinity, to display the freshwater signal of the wastewater plume.
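The unit conversion quoted above follows from the molar mass of NH4 (about 18 g/mol): 1 mg/L divided by 18 g/mol gives 55.6 µmol/L. A one-line check:

```python
# Convert an ammonium mass concentration (mg/L) to a molar concentration
# (uM), taking the molar mass of NH4 as ~18 g/mol, as in the text.
NH4_G_PER_MOL = 18.0


def mg_per_l_to_uM(mg_per_l: float) -> float:
    # mg/L -> mmol/m^3 == umol/L == uM
    return mg_per_l / NH4_G_PER_MOL * 1000.0
```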


2.2.5. Preparation of Two-Dimensional Transect Figures and Three-dimensional Curtain Plots

Contoured sections of the data from each CTD and Towyo transect were created using standard contouring tools available within Matlab™. After the track distances were calculated for the data set, the data were gridded to a standard contouring grid using a horizontal spacing of 0.4 km (400 m) and a vertical spacing of 2 m. Matlab™ interpolates non-uniformly spaced data to a uniformly spaced grid using Delaunay triangulation. The physical and bio-optical variables were contoured in their linear units (i.e., no transformations were performed on the data before gridding and contouring). The bacteria data were transformed to log10(concentration) for each data point before the data were gridded. Where bacteria concentrations were below the detection limit (<10 MPN/100 mL), they were assigned a value of one. Therefore, in the contoured data figures, contour lines less than “10” MPN/100 mL are below the detection limit. These gridded data were then used for the preparation of the curtain plots.
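The bacteria pre-treatment described above is simple to express in code: non-detects are set to one before the log10 transform, so anything plotting below "10" marks a value under the detection limit. A minimal sketch (function name is illustrative):

```python
import math

# Hedged sketch of the pre-gridding transform: values below the detection
# limit (<10 MPN/100 mL) are replaced by 1 before taking log10.
DETECTION_LIMIT = 10.0


def log_transform(mpn_values):
    """Return log10-transformed concentrations, with non-detects set to
    log10(1) = 0 so they fall below the '10' contour."""
    out = []
    for v in mpn_values:
        v = 1.0 if v < DETECTION_LIMIT else v
        out.append(math.log10(v))
    return out
```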

2.2.6. Temperature/Salinity Relationships and Salinity Anomaly

Salinity anomaly, as used here, was developed by the USC Ocean Outfall Research Group. It is based on the idea that there are characteristic ambient water masses that are apparent in a temperature-salinity (T-S) diagram (Figure 2-5), and that the presence of effluent within the water column will depress salinity below ambient values. The first step is to determine the lines that define the lower bound of salinity for the ambient water; the black lines in Figure 2-5 delineate the lower salinity boundary for the ambient seawater, determined from the sections upcurrent of and away from the outfall. The second step is to use the equations developed in the first step to calculate Samb(T), the estimated ambient salinity at the temperature of a given T-S data pair. The third step is to calculate the salinity anomaly (Sanom), the difference between the measured salinity (Smeas) and the estimated ambient salinity: Sanom = Smeas - Samb(T). The region where Sanom is less than -0.01 is shown in the vertical profile of salinity for Tow 9 from cruise 5 in Figure 2-6, with red circles indicating the regions of negative salinity anomaly. In this case there are clearly defined regions of low salinity anomaly. The effluent plume is indicated by the red circles below 30 m. Although surface runoff was evident in some of the tows for this cruise, none was detected in this tow, and therefore there are no red circles in the near-surface region. When both effluent and surface-runoff plumes are present, their vertical separation clearly indicates the different sources.

2.3. Nearfield Plume Modeling

For each survey we evaluated the nearfield plume characteristics using the “RSB” outfall plume model developed by Roberts, Snyder, and Baumgartner (1989a,b) and Roberts (1995). The plume was modeled at hourly intervals for a 10-day period centered on the hydrographic sampling period. The basic outfall characteristics used in the model are described in Table 2-4. The discharge from the pipe was specified as the measured hourly flow rate. Other model inputs include the density profile and the current speed and direction. The density profile was calculated from the hourly temperature measurements from Mooring HB12, nearest the outfall; density was calculated from temperature using an empirical temperature-density relationship developed during the Plume Mapping SPS (MEC, 2001) for the period between August 1999 and September 2000. The current speed and direction were calculated from the hourly current profiles throughout the sampling period. The average speed and direction of the currents between 30- and 55-m depth were used because this is the region of flow that directly affects the discharge from the outfall.

2.4. Moored Array

2.4.1. Water-column Measurements

An array of moorings was deployed at 12 sites across the shelf, in water depths from 10 to 205 m, for 4 months in the summer of 2001 (Figure 2-2). Related field activity identification numbers and their metadata URLs are as follows:

S-1-00-SC http://walrus.wr.usgs.gov/infobank/s/s100sc/html/s-1-00-sc.meta.html

S-2-00-SC http://walrus.wr.usgs.gov/infobank/s/s200sc/html/s-2-00-sc.meta.html

S-2-01-SC http://walrus.wr.usgs.gov/infobank/s/s201sc/html/s-2-01-sc.meta.html

S-3-01-SC http://walrus.wr.usgs.gov/infobank/s/s301sc/html/s-3-01-sc.meta.html

S-4-01-SC http://walrus.wr.usgs.gov/infobank/s/s401sc/html/s-4-01-sc.meta.html

Each site hosted one or more moorings, and some sites included bottom tripods as well (Figures 2-7a and 2-7b). Most moorings were in the water from mid- or late June through mid-October. A set of moorings was deployed at 7 sites along the principal cross-shelf transect (HB01, HB02, HB03, HB05, HB06, HB07, and HB08; mooring HB04 was never occupied because its instrumentation was not received in time). The shallowest site in this array, HB01, was in 10 m of water between shore stations 3N and 6N. This site was also near the AES intake and outflow pipes. One site, HB07, was near the shelf break, 1.5 km upcoast of the center of the OCSD outfall. A single Acoustic Doppler Current Profiler (ADCP) was deployed at site HB08, which is on the slope in 205 m of water.

Three other sites (HB09, HB10, HB11) were aligned with the 15-m site at HB03, approximately 30° to the coastline (Figure 2-2). Because the shelf break turns toward the shoreline south of the outfall, this line was also approximately perpendicular to the local orientation of the shelf-break isobaths. The deepest site on the line, HB11, was in 55 m of water, near the edge of the shelf.

Two additional sites completed the shelf-wide array. HB12, near the shelf break, was very close to the OCSD outfall (Figure 2-2). This mooring was in the same location as one previously deployed in 1999/2000 for one year. HB13 was deployed upcoast of Newport Canyon in 15 m of water.

The mooring sites shared many common characteristics. Currents were measured at all but one location (HB02). At 7 of the 12 sites, currents were monitored over the entire water column by an upward-looking ADCP. Five of these ADCPs were deployed along the principal cross-shelf transect at HB03, HB05, HB06, HB07, and HB08 (Figures 2-7c, 2-7d, 2-7e, 2-7f, 2-7g). The other 2 ADCPs were deployed along the secondary cross-shelf transect at sites HB10 and HB11 (Figures 2-7i, 2-7j). Currents were monitored at the remaining sites, except HB02, by single-point current meters (Figures 2-7a, 2-7h, 2-7k, 2-7l). Most sites also measured temperature and salinity over the water column and at the bed. Ancillary measurements, such as water clarity, bottom pressure, resuspended sediment, near-bed wave-driven currents, and photographs of the sea bed, were made at selected sites. Wind velocity and surface-wave characteristics were telemetered to shore from site HB07.

Current, temperature, and salinity measurements were generally sampled every 2-5 minutes over the 4-month deployment period. In particular, most ADCPs on the shelf sampled at 3-minute intervals; hence, internal waves with periods longer than 6 minutes were resolved at 5 of the 7 ADCP sites. The sampling interval for most single-point current meters over the shelf was 5 minutes. The slope currents were measured every 15 minutes. Joint temperature and salinity measurements were generally recorded every 2 minutes; separate temperature measurements were generally recorded every 3-5 minutes. The sampling interval for each instrument is given in Tables 2-5a and 2-5b.

Most instruments recorded data for the full deployment period (Figures 2-8a, 2-8b, 2-8c). Near-bed current records were sporadic at HB01, but temperature, salinity, and pressure were recorded for the entire deployment.

A nearshore array of moorings was deployed in water depths shallower than 12 m after the principal array was in the water (Figure 2-9). Some instruments were deployed in July, others in August, and most remained in the water for a month or more (Figure 2-10). ADCPs were deployed inshore of HB01, near the AES Corporation outfall, and along a line upcoast of HB01 (Figure 2-9). Temperature chains and near-bed temperature sensors were deployed in this same general area. The sampling intervals for these instruments were generally less than 3 minutes. Except for one or two temperature sensors, most of these nearshore instruments had fairly good data return (Figure 2-10).

2.4.1.1. Naming Conventions

All records were assigned a unique ID and filename. These are listed in Tables 2-5a and 2-5b under “SAIC ID” and “Filename Prototype”, respectively. The IDs are of the form:

HBnm-#,

where HB (for Huntington Beach) is the program designator,

nm is the mooring number for the main array (e.g. 01, 02, ... 13), or

n = N for the nearshore array and m is the SIO mooring number, or

A, B, or C, if from the AES moorings, or

n = S for the SIO surf zone thermistors, or

n = M for the NPS meteorological buoy, and

# is the position of the instrument on the mooring, counting down from the surface. In this scheme, the compound moorings of the main array (surface mooring, sub-surface mooring, and bottom tripod) are treated as a single mooring for the purpose of assigning the position number and IDs.

This report uses these IDs to identify instruments and data records. Often the mooring ID will be shortened to mooring number (e.g., mooring 3) or the “nm” part of the designator (e.g., N2 or N3). The IDs and filenames and their associated metadata (e.g., latitude, longitude, water depth, instrument depth, start and end dates, time intervals) are stored in a relational database system that is interfaced with SAIC’s analysis system. The tables of the database allow efficient management and searching of the data records.




2.4.2. Wind and Wave Measurements

Winds and surface waves were measured from a surface buoy at HB07. Table 2-6 lists the instrumentation that was used. The meteorological data are 1-minute averages of 1-Hz measurements, although there were periodic small gaps, generally lasting less than 2 minutes. A uniform 1-minute time base was created and the data were linearly interpolated onto it. Of the three sea-surface temperature instruments, only the bulk 1-m measurements were included in the final data set, since the interpretation of the IR surface temperature is not trivial and the floating thermistor failed during the deployment. The sonic (Handar) and rotor-and-vane (RM Young) anemometers recorded very similar wind speeds, but the wind directions differed by about 15° when the winds were out of the WNW, with the sonic anemometer recording more northerly winds than the rotor-and-vane [i.e., if the sonic has winds from 290° (toward 110°), the rotor-and-vane has winds from 275° (toward 95°)]. A post-deployment calibration of the buoy compass, used by both anemometers, was applied to both data sets. Data from the sonic anemometer were used for all the analyses in this report.

In order to compute surface-wave spectra, the north-south, east-west, and vertical buoy displacement time series were computed by rotating the measured buoy linear accelerations obtained from the three-dimensional motion sensor into the earth reference frame and then double integrating. Every fifth available data point (sampled at 10 Hz) was used in the wave-spectra computation, resulting in a 2-Hz sampling rate. One-dimensional wave-height spectra were obtained by computing FFTs from consecutive 256-point blocks of the vertical displacement time series. These 256-point wave-height spectra were then averaged by frequency bins into approximately 1-hour spectra, from which time series of significant wave height and dominant wave direction and period were determined. Data were interpolated onto a uniform hourly time base, and 3-hour gaps that appeared every 70 hours were filled by linear interpolation.
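The block-averaged wave-height spectrum computation described above can be sketched as follows. This Python fragment is illustrative only (the report's processing was not done in Python); the function names and the one-sided spectral normalization are assumptions, not taken from the report.

```python
import numpy as np

def wave_height_spectrum(z, fs=2.0, nfft=256):
    """Averaged periodogram of vertical displacement z (m), sampled at fs Hz.

    Consecutive non-overlapping nfft-point blocks are FFT'd and averaged,
    mirroring the 256-point block averaging described in the text.
    Returns frequencies (Hz) and one-sided spectral density (m^2/Hz).
    """
    nblocks = len(z) // nfft
    psd = np.zeros(nfft // 2 + 1)
    for i in range(nblocks):
        block = z[i * nfft:(i + 1) * nfft]
        block = block - block.mean()           # remove the block mean
        spec = np.fft.rfft(block)
        # normalized so that sum(psd) * df equals the variance of z
        psd += (np.abs(spec) ** 2) / (fs * nfft)
    psd /= nblocks
    psd[1:-1] *= 2.0                            # fold in negative frequencies
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, psd

def significant_wave_height(freqs, psd):
    """Hs = 4*sqrt(m0), with m0 the zeroth spectral moment."""
    m0 = np.trapz(psd, freqs)
    return 4.0 * np.sqrt(m0)
```

For a pure 0.5-m-amplitude swell, for example, this yields Hs near 4*sqrt(0.5**2/2) ≈ 1.41 m.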

Directional wave spectra, with directional resolution of 1°, and frequency resolution of 1/256 Hz, were computed from the earth-referenced north-south, east-west, and vertical displacement time series using the Maximum Entropy Method (MEM) described by Lygre and Krogstad (1986).

2.4.3. Near-bed Instruments and Methods

The instrumentation described in this section was designed for monitoring bottom boundary-layer dynamics, sediment suspension and transport, and the morphology of the sea bed. It provided critical data for addressing the sediment-transport hypothesis (Chapter 1 and Chapter 8). Sediment traps collected particles for analyses of the physical and chemical properties of the material. The initial processing of the sediment-trap samples is still ongoing.

2.4.3.1. Acoustic Doppler Velocimeter (ADV)

Three ADVs (Sontek/YSI, San Diego) were deployed at sites HB05, HB07, and HB11 during the Phase III measurements. They were mounted on bottom tripods to measure three-dimensional (u, v, w) velocities at 65 cm above the bed. In addition, the ADV data loggers also recorded temperature, pressure, optical backscatter (OBS)/transmissometer data, and salinity/conductivity. Some of these sensors were external so they could be mounted at different locations on the tripods (Table 2-5a).

The ADVs were sampled in burst mode. A dual-burst-type sampling scheme was employed for all three ADVs:

               Sample Rate (Hz)   Sample Interval (s)   Samples per Burst
Burst Type 1          1                  1200                   20
Burst Type 2          2                  3600                 2048

Burst type 1 was designed for mean current measurement: it recorded 20 seconds of data every 20 minutes, from which 20-minute-averaged current time series were generated. Burst type 2 was for wave measurement. The 17.1 minutes of data it logged at 2 Hz every hour were used to compute directional wave spectra. The data loggers on the ADVs started at 2001-6-13 07:51:03 (GMT), so the wave bursts are centered on the hour. A typical ADV data file had the following burst sequence:

Burst No.   Burst Type   Start Time
    1            1       2001-6-13 07:51:03
    2            2       2001-6-13 07:51:28
    3            1       2001-6-13 08:11:03
    4            1       2001-6-13 08:31:03
    5            1       2001-6-13 08:51:03
    6            2       2001-6-13 08:51:28
    7            1       2001-6-13 09:11:03
    8            1       2001-6-13 09:31:03

Five 3-minute averaged current data points were also calculated from the wave burst. The time records of these 3-minute averages lined up with the ADCPs, which were also sampled at a 3-minute interval. In the above example, the 3-minute averages were respectively centered on (2001-6-13) 08:54:00, 08:57:00, 09:00:00, 09:03:00, 09:06:00.
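Carving the five 3-minute averages out of a 2-Hz wave burst, so that their centers line up with the ADCP ensemble times, can be sketched as below. This is an illustrative Python fragment, not the report's (MATLAB-based) code; the function name is hypothetical.

```python
import numpy as np
from datetime import datetime, timedelta

FS = 2.0  # Hz, wave-burst sampling rate

def three_minute_averages(u, burst_start, centers):
    """Average a 2-Hz wave burst u over 3-minute windows centered on `centers`.

    Mirrors the scheme in the text: five 3-minute averages are carved out of
    each 17.1-minute (2048-sample) wave burst so they align with the ADCP
    3-minute ensembles.
    """
    out = []
    half = timedelta(seconds=90)   # half of a 3-minute window
    for c in centers:
        i0 = int(round((c - half - burst_start).total_seconds() * FS))
        i1 = int(round((c + half - burst_start).total_seconds() * FS))
        if i0 < 0 or i1 > len(u):
            raise ValueError("window falls outside the burst")
        out.append(u[i0:i1].mean())
    return out
```

With a burst starting at 2001-6-13 08:51:28, centers at 08:54:00 through 09:06:00 all fall inside the 2048-sample burst, as in the example above.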

Burst-averaged data from burst type 1 and type 2 were created for the Phase III report. MATLAB™ routines developed inhouse were used to read the binary data from the ADV recorders, to convert to engineering units, to rotate velocities to the true North and East, to compute the statistics, and to write the data into NetCDF files. All time-series files were trimmed to contain in-water data only:

Site   Record           Start Time            End Time               Total Length
HB05   Hourly           2001-6-14, 02:00:00   2001-10-12, 21:00:00    2900
       20-minute avg.   2001-6-14, 01:31:15   2001-10-12, 21:31:15    8700
HB07   Hourly           2001-6-13, 23:00:00   2001-10-30, 19:00:00    3333
       20-minute avg.   2001-6-13, 22:31:15   2001-10-30, 19:11:15    9999
HB11   Hourly           2001-6-13, 14:00:00   2001-9-12, 20:00:00     2191
       20-minute avg.   2001-6-13, 13:51:15   2001-9-12, 20:51:15     6573

The tripod at HB05 was recovered on October 15, 2001; the tripods at HB07 and HB11 were recovered on October 31, 2001.

The ADV systems at sites HB05 and HB11 had problems during the deployment. For unknown reasons, the correlation levels (an indicator of data quality) from the three acoustic beams intermittently became very low, making the velocity data erratic. A MATLAB™ routine was written to clean these records and to compute the statistics from only the good data. Not a Number (NaN) was assigned to all the bad data points. Also, the ADV at HB11 was apparently interrupted on September 12, 2001, after which it started a new logging file. The data from this second binary file were determined to be unusable, so only the data from the first logging file are provided in the report.

2.4.3.2. Video Cameras

Video cameras were mounted on the tripods at HB03, HB05, HB07, and HB11 to record footage of the sea bed. Digital 8 SONY Handycams were used at HB03 and HB11; analog SONY Handycams were used at HB05 and HB07. The video cameras were customized with a controller board that determined the timing and length of the footage. All four controller boards had the same setting: they turned the camera and strobes on for 15 seconds of taping every 6 hours. The digital tapes had a capacity of 90 minutes, and the analog tapes 180 minutes. The glass bottoms of the cylinders that housed the cameras at the two shallow sites (HB03 and HB05) were covered by barnacles and other biological growth after a month or so. Images from the digital cameras appeared to be of better quality.

2.5. Ancillary Data

2.5.1. Sea Level and Astronomical Data

Hourly sea-level data for 1998-2001 (and 6-minute data for 2001) for Los Angeles (33°43.2'N., 118°16.3'W., station ID 9410660) were obtained through the National Ocean Service website (http://co-ops.nos.noaa.gov/). Sea levels are given as height in meters above mean lower low water (MLLW). Technically, the term spring tide denotes the largest tidal range in a fortnightly (14-day) period. In order to assign a time unambiguously to each spring tide, however, the highest high water in a window of 10-20 days (9.4 days in one case) after the previous spring tide, with a neap tide in between, was picked as the date/time and height of "spring tide". Note that, because the actual sea level is used rather than the phase of the moon, meteorological effects introduce some variance in the time between "spring tides", although on average it is 14.7 days, as would be expected.
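The windowed spring-tide picking rule described above can be sketched as follows. This Python fragment is a sketch under assumptions (function name, time in days, hourly record); it is not the code used for the report.

```python
import numpy as np

def pick_spring_tides(t, h, first_peak, lo=10.0, hi=20.0):
    """Pick successive 'spring tides' from an hourly sea-level record.

    Starting from a known spring-tide time `first_peak` (days), each
    subsequent spring tide is taken as the highest high water in the
    window `lo`-`hi` days after the previous one, as in the text.
    t is time in days, h is sea level in meters.
    Returns lists of spring-tide times and heights.
    """
    times = [first_peak]
    heights = [np.interp(first_peak, t, h)]
    while times[-1] + hi < t[-1]:
        mask = (t >= times[-1] + lo) & (t <= times[-1] + hi)
        i = np.argmax(np.where(mask, h, -np.inf))  # highest high water in window
        times.append(t[i])
        heights.append(h[i])
    return times, heights
```

On a synthetic record with a 14.77-day spring-neap envelope, the picked intervals cluster around that period, with the scatter the text attributes to meteorological effects.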

Times of new and full moons and sunset and sunrise were obtained from the U.S. Naval Observatory website (http://mach.usno.navy.mil).

2.5.2. Airport Winds

Wind data, measured with a cup-and-vane system, were also obtained from the Automated Surface Observing System (ASOS) at two nearby airports, Long Beach (LBH; 33° 48' 42" N, 118° 08' 47" W) and John Wayne (JWA; 33° 40' 48" N, 117° 51' 59" W). The elevation of the ASOS station at LBH is 9.4 m above sea level, while the one at JWA is 16.5 m above sea level. Hourly wind speed and direction, based on 2-minute averages immediately preceding the observation time, are recorded in increments of integer knots and 10°, respectively. The reported cutoff speed for ASOS is 2 kts, though the data here show a minimum speed of 3 kts (anything less is recorded as 0 wind speed and 000 wind direction). If the wind direction varies by 60° or more during the 2-minute observation period, ASOS reports variable winds. Directions reported as variable were replaced with a null value before low-pass filtering and subsequent analyses.
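For low-pass filtering and the vector analyses used elsewhere in this report, speed/direction pairs such as these are commonly converted to east and north components first. A minimal sketch (not from the report; the function name and the knots-to-m/s conversion are assumptions):

```python
import numpy as np

KT = 0.514444  # m/s per knot

def asos_to_uv(speed_kts, dir_from_deg):
    """Convert wind speed/direction to east (u) and north (v) components.

    ASOS reports the direction the wind blows FROM (meteorological
    convention); u and v here give the direction it blows TOWARD, in m/s.
    Variable-direction reports should already have been set to NaN.
    """
    spd = np.asarray(speed_kts, dtype=float) * KT
    # direction toward = from + 180; angle measured clockwise from north
    theta = np.deg2rad(np.asarray(dir_from_deg, dtype=float) + 180.0)
    u = spd * np.sin(theta)   # eastward component
    v = spd * np.cos(theta)   # northward component
    return u, v
```

A 10-kt wind from 270° (a westerly), for example, becomes a purely eastward u of about 5.1 m/s.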

Wind data for 2001 from OCSD's plants 1 and 2 were also obtained and examined, but were determined not to be of usable quality.



2.6. Data Quality and Processing Procedures and Analysis Methods

2.6.1. General Data Quality and Processing Procedures

The initial processing of data from the moored array (extraction of data from the instruments, conversion to engineering units, and application of any necessary calibrations) was performed by the institution that owned the instruments, under the supervision of the appropriate principal investigator. Initial data-quality inspections were performed by the principal investigators, and standard minimal cleanup of the data was performed in order to preserve the high temporal resolution of the data. This included flagging and removing suspect data points and interpolating short gaps of 1 or 2 points. Longer gaps were filled by special procedures discussed below. Current velocity records were corrected for magnetic variation at this stage.

After the initial data quality procedures, the data were transferred to SAIC and entered into the database management system. Data records resulting from multiple deployments at a single location and depth have been concatenated into continuous time series. Data gaps in these records, and other gaps resulting from a variety of causes, were less than 2 days in length, and were filled with data that are spectrally consistent with the rest of the record. In the case of ADCPs, an effort has been made to maintain the vertical coherence of the records. These records have been further cleaned up to remove spikes and obvious noise up to ~ 2 hours in length. The greatest amount of cleanup was needed for salinity and the near-surface bins of the ADCP velocity records.

Concatenation of bottom-pressure records was handled differently from the method described above, used for the other variables. Bottom-pressure data were averaged to 60 minutes, tidal analysis was performed, and the predicted tide was used to fill the gaps. Some minor leveling between deployments has been done for some records so that the deployment means are the same. Thus, the tidal phase is preserved across the gaps; however, the subtidal pressure is reduced to a linear trend across the gap.

The resulting concatenated, gap-filled records were then low-pass filtered to produce two sets of records–one retaining energy with periods longer than 3 hours, and one with energy at periods longer than 40 hours.

The 3-HLP filter uses a Lanczos kernel with its half-power point at 3 hours and greater than 95% suppression at periods less than 1 hour. It removes 8 hours from the ends of the original time series. It is essentially equivalent to performing 1-hour averages of the original data. The 40-HLP filter suppresses fluctuations with periods less than 30 hours and removes 4 days from the ends of the 3-HLP series. The 3- and 40-HLP time series are decimated to time intervals of 1 and 6 hours, respectively.
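A Lanczos low-pass filter of the kind described above can be sketched as follows. This Python fragment is illustrative (the report's filters were not necessarily built this way); the function name, the kernel half-width, and the exact half-power behavior are assumptions, and the half-power point is only approximately at the stated cutoff.

```python
import numpy as np

def lanczos_lowpass(x, dt_hours, cutoff_hours, half_width):
    """Lanczos low-pass filter, a sketch of 3-HLP/40-HLP-style filtering.

    dt_hours: sample interval; cutoff_hours: approximate half-power period;
    half_width: number of kernel points on each side of center.
    Returns the filtered series with `half_width` points removed from each
    end (no valid output there), as the report trims its filtered series.
    """
    fc = dt_hours / cutoff_hours                 # cutoff in cycles per sample
    k = np.arange(-half_width, half_width + 1)
    # ideal sinc low-pass kernel tapered by the Lanczos sigma window
    w = 2 * fc * np.sinc(2 * fc * k) * np.sinc(k / half_width)
    w /= w.sum()                                  # unit gain at zero frequency
    return np.convolve(x, w, mode="valid")
```

With hourly input, half_width=96 trims 4 days from each end, matching the 40-HLP trimming described in the text.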

The coordinate systems of the velocity and wind records were rotated so that the V-component (y-axis) is directed along the general trend of the isobaths. The directions were chosen by consensus of the PIs after considering the positions of the moorings, local isobath directions, and the principal axes of the velocity records. The rotations for each mooring that measured currents are given in Table 2-7. The conventions adopted are that positive, rotated U and V components are directed towards the shore and up-coast (towards the northwest), respectively.

The ADCPs used for the AES Corporation sites were deployed in magnetic frames. This was realized at the time, and the bottom mounts were deployed so that the ADCP heads were aligned with magnetic north as determined by a diver-held compass. After retrieval, the correction to magnetic north of the transducer-head compass reading was applied to the velocity data by AES Corporation's contractors. Unfortunately, tide and wave action on the surface buoy, used to mark the deployment position, caused the bottom mounts to move and rotate so that the frame was no longer in its original orientation. The three frames were affected to differing degrees during their four deployments. Once the frames had moved substantially off magnetic north, the current directions were no longer accurate. Since the heading was recorded, the parts of the records with headings close to zero degrees were extracted and merged between deployments. Almost all of AES #3 (HBNC-7) and two substantial sections of AES #2 (HBNB-7) were usable. After processing, the 40-HLP records were compared with the nearby SIO ADCP #2 (HBN2-2). It was apparent that the AES #3 record was rotated anticlockwise relative to HBN2-2. Comparison of principal axes for the common period of overlap for the upper part of the water column indicates the differences were about 23°. This is close to twice the magnetic variation, indicating a possible error in applying this correction. Therefore, the AES #3 data have been rotated 23° anticlockwise, since it is nominally on the same isobath and only a short distance away from HBN2-2. The AES #2 data records did not need a correction. The AES temperature data from their thermistor strings were not considered reliable and have not been used for this study.

In shallow water, the 1- to 2-m range of the tide is a substantial proportion of the total water depth. Therefore, the ADCP velocity data from the near-shore moorings (including valid AES ADCPs) have been processed using a proportional, surface-following vertical coordinate system. The ADCP measures velocities at fixed distances from the head, and thus at different stages of the tide a varying number of bins will have valid data. For time-series analysis, however, continuous time series are required with no gaps. Using fixed-depth levels restricts analysis to bins that are always valid (effectively 0.5- to 1-m below MLLW). Therefore, it is more useful, for shallow-water instruments, to use a fixed number of depth levels, with the uppermost level tracking the sea surface and the other levels proportionally equally spaced between the surface level and the (first) bin nearest the head. This is similar to the sigma vertical coordinate system used by many numerical hydrodynamic models. The number of depth levels chosen for this sigma coordinate scheme is the number of fixed-level measurement bins between the head and MLLW. The definition of the free surface is from nearby pressure gauges that are corrected to the MLLW datum by comparing with the Los Angeles Harbor tide gauge data. The 3- and 40-HLP files use the appropriately filtered pressure data.
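The surface-following interpolation described above can be sketched for a single profile as follows. This Python fragment is an illustration under assumptions (function name, depth sign convention, linear interpolation between bins); it is not the report's processing code.

```python
import numpy as np

def to_sigma_levels(u, bin_depths, eta, n_levels):
    """Interpolate one ADCP velocity profile onto surface-following levels.

    u: velocities at fixed `bin_depths` (m below MLLW, increasing downward);
    eta: instantaneous sea surface relative to MLLW (m, positive upward);
    n_levels: number of sigma levels between the surface and the deepest bin,
    with the top level tracking the sea surface as described in the text.
    """
    top = -eta                      # surface depth in the below-MLLW convention
    bottom = bin_depths[-1]         # bin nearest the instrument head
    levels = np.linspace(top, bottom, n_levels)
    # np.interp holds the shallowest valid bin value up to the surface
    return levels, np.interp(levels, bin_depths, u)
```

The choice of linear interpolation and of extending the shallowest bin to the surface are design assumptions here; any scheme that preserves a fixed number of levels per profile yields the gap-free time series the text requires.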

The data returns, from all the moorings in the study, are given in Figures 2-8a, 2-8b, 2-8c, and Figure 2-10. The instrument IDs can be cross-referenced with Tables 2-5a,b for further details on the mooring platforms, sampling rates, etc.

2.6.2. Analysis Methods

2.6.2.1. Time Series

Standard techniques of time-series analysis are extensively used in this report. This section briefly summarizes these techniques and gives references to more detailed treatments in the literature. It is assumed that the reader is familiar with basic statistics and the use of spectra for scalar and vector time series (Press et al., 1992; Priestley, 1981).

2.6.2.2. Tidal Analysis

Tidal analysis consists of a least-squares fit of the observed series to amplitudes and phases at specific frequencies that are derived from the motions of the sun and moon. The methods are given in Godin (1972), and the implementation for sea level and current records uses the programs written by Foreman (1977; 1978). The major constituents of interest are grouped into two frequency bands, the semidiurnal (M2, N2, and S2) and the diurnal (K1, P1, and O1). The amplitudes and phases of tidal currents are often represented as hodographs, where the velocity vector tip describes an ellipse as the vector rotates clockwise or anticlockwise with the period of the constituent. A good discussion of tidal analysis applied to oceanographic time series can be found in Foreman et al. (1995).
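The least-squares fit at fixed astronomical frequencies can be sketched as below. This bare-bones Python illustration omits the nodal corrections and constituent inference handled by the Foreman programs; the function name, the constituent subset, and the period values (in hours) are assumptions for the sketch.

```python
import numpy as np

# Approximate constituent periods in hours
PERIODS = {"M2": 12.4206, "S2": 12.0000, "K1": 23.9345, "O1": 25.8193}

def tidal_fit(t_hours, h):
    """Least-squares fit of the named constituents to a sea-level record.

    Each constituent contributes a*cos(w*t) + b*sin(w*t); amplitude and
    phase follow as hypot(a, b) and atan2(b, a).  Returns
    {name: (amplitude, phase_deg)} plus the fitted mean level.
    """
    cols = [np.ones_like(t_hours)]
    for T in PERIODS.values():
        w = 2 * np.pi / T
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    out = {}
    for i, name in enumerate(PERIODS):
        a, b = coef[1 + 2 * i], coef[2 + 2 * i]
        out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))
    return out, coef[0]
```

A record of a month or more is needed to separate M2 from S2, since their beat (the spring-neap cycle) takes about 14.8 days.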

The barotropic tidal currents were calculated at sites where currents were measured with an ADCP (HB11, HB10, HB08, HB07, HB06, HB05, HB03, and HBN2), thus providing good vertical resolution. The currents in each 2-m bin were high-pass filtered to remove periods longer than 66 hours. The high-pass filtered currents at each site were then averaged over the entire water column. The amplitudes of the barotropic tides were then calculated from the depth-averaged records at each site using the Foreman tidal analysis programs. The characteristics of the internal tidal currents were calculated from the high-pass-filtered ADCP records that had the depth-averaged barotropic tidal currents removed.
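The barotropic/internal separation in the preceding paragraph reduces to a depth average and its removal. A minimal sketch (function name assumed, high-pass filtering already applied):

```python
import numpy as np

def split_barotropic(u):
    """Split high-passed ADCP currents u(depth, time) into barotropic and
    internal parts: the depth average is the barotropic tidal current, and
    removing it from every bin leaves the internal (baroclinic) currents."""
    ubt = np.nanmean(u, axis=0)     # depth average at each time step
    return ubt, u - ubt             # broadcasting removes it in every bin
```

By construction, the internal part has zero depth mean at every time step.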

2.6.2.3. Complex Demodulation

Standard spectral and tidal analyses assume that the time series are stationary. However, when the observations are substantially influenced by stochastic forcing mechanisms such as the wind, the amplitude of the response can vary with time. Thus, it is often of interest to determine the time dependence of the amplitude at a well-defined frequency associated with the forcing. On the San Pedro shelf, sea breezes with a 24-hour period are a persistent feature of the atmospheric circulation that vary in strength from day to day. Complex demodulation is a technique to estimate the time-dependent amplitude and phase of a signal at a given frequency. It is most useful when the signal shows high energy in a narrow frequency band (e.g., diurnal, or 24-hour period). The method is given in Chapter 11 of Priestley (1981). The usual method is to remove a running mean of length 2T, where T is the period of interest, from the 3-HLP series. The resulting complex demodulated series is then either averaged over the period 2T or low-passed with a filter with greater than 95% suppression for periods shorter than 2T. For daily period motions, a four-day low-pass filter is often used. The filtering smooths the often noisy amplitudes and phases of the complex demodulation at the expense of resolution in time. A least-squares algorithm (Emery and Thomson, 2001) was applied to compute the complex demodulation displayed in Figure 7-6.
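One common formulation of complex demodulation (multiply by a complex exponential at the target frequency, then smooth) can be sketched as follows. This Python fragment is illustrative; it uses a simple 2T running mean for the smoothing step rather than the least-squares algorithm of Emery and Thomson (2001) cited above, and the function name is hypothetical.

```python
import numpy as np

def complex_demodulate(x, dt_hours, period_hours):
    """Time-dependent amplitude and phase of x at one target frequency.

    The series is multiplied by exp(-i*w*t), shifting the target frequency
    to zero, then smoothed with a 2T running mean to suppress the
    twice-frequency term, leaving the slowly varying complex amplitude.
    """
    t = np.arange(len(x)) * dt_hours
    w = 2 * np.pi / period_hours
    demod = x * np.exp(-1j * w * t)
    n = int(round(2 * period_hours / dt_hours))    # 2T running mean
    kernel = np.ones(n) / n
    z = 2 * np.convolve(demod, kernel, mode="same")  # factor 2 restores amplitude
    return np.abs(z), np.angle(z, deg=True)
```

As the text notes, the smoothing trades time resolution for stability: near the ends of the record, and wherever the amplitude changes over times comparable to 2T, the estimates are biased.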

2.6.2.4. Empirical Orthogonal Functions

Empirical orthogonal functions (EOF) are a method to extract coherent signals from a spatial array. EOF analysis is often called principal component analysis, and Preisendorfer (1988) has given a comprehensive treatise on the methods used in oceanography. The method decouples spatial variability, en(x), from temporal variability, An(t), and extracts statistical modes, ordered by the amount of the total variance of all the data that they can explain. The modes are orthogonal, and thus uncorrelated in time and space. Any time series in the spatial array is related to the modes by

U(x,t) = Σn An(t) en(x),        (2.1)
where n is summed over all the time series in the analysis. The input time series, U, are normally demeaned for the EOF analysis. If the time series, U, have different measurement units, then the usual practice is to normalize to unit variance. If U are all of the same type, then normalizing the input series is not necessary or even desirable. There are various rules for determining how many modes are significant, or differ from random noise (Preisendorfer, 1988; North et al., 1982). If a mode accounts for a large fraction of the total variance, and a reasonable proportion of the input series are correlated with the mode, then the mode is usually physically significant.
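A time-domain EOF decomposition of the form (2.1) can be computed via the singular value decomposition, as sketched below. This Python fragment is illustrative; the function name is hypothetical, and it adopts the unit-variance-amplitude convention discussed in the CEOF paragraph that follows.

```python
import numpy as np

def eof(U):
    """EOF decomposition of data U (time x space) via the SVD, so that
    (U - mean)[t, x] = sum over n of A[n][t] * e[n][x], as in eq. (2.1).

    Amplitude time series A are scaled to (approximately) unit variance,
    so the spatial patterns e carry the units of U.  Modes are returned
    ordered by the fraction of total variance they explain.
    """
    Ud = U - U.mean(axis=0)                    # demean each series
    P, s, Qt = np.linalg.svd(Ud, full_matrices=False)
    nt = Ud.shape[0]
    A = (P * np.sqrt(nt)).T                    # (mode, time) amplitudes
    e = (s / np.sqrt(nt))[:, None] * Qt        # (mode, space) patterns
    var_explained = s**2 / np.sum(s**2)
    return A, e, var_explained
```

Because the SVD is exact, summing the modes reconstructs the demeaned data, and the leading var_explained entries show how much of the array's total variance each mode accounts for.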

For time-domain analyses, described by (2.1), the input series are scalars. For velocity time series, the quantities in (2.1) are complex. This is called complex EOF analysis (CEOF), and it has the advantage that it does not artificially separate the along- and cross-shelf velocity modes. Since the eigenvectors and amplitudes in CEOF analysis are complex, they have both magnitude and direction. The orientation of the temporal amplitudes and spatial patterns are relative to an arbitrary reference. The usual practice, following Merrifield and Winant (1989), is to rotate the spatial pattern into the frame of the semi-major principal axis of the corresponding mode time series. In (2.1), either A or e can be normalized so that their variance is unity and the other quantity has the units of the input time series. The convention here is to normalize the amplitude time series.

The EOF analysis of (2.1) can also be used in frequency space, where An and U become functions of frequency (i.e., spectra). This is used where one wishes to analyze the spatial patterns of a process that has a response in a restricted frequency band. On the San Pedro Shelf, frequency-domain EOF analysis is used for diurnal- and semidiurnal-band processes. The advantage of this type of analysis is that phase relations of the measurements across the array are calculated for each mode, since the spatial eigenvector, en, is a complex function of frequency. Therefore, propagating waves are better described than with time-domain analysis. Time lags are not accommodated by the time-domain versions of (2.1).



2.7. References

Chamberlin, C.E. and R. Mitchell, 1993. A decay model for enteric bacteria in natural waters. Water Pollution Microbiology, John Wiley & Sons.

Emery, W.J. and R.E. Thomson, 2001. Data analysis methods in physical oceanography. Elsevier Science B.V., Amsterdam, The Netherlands.

Foreman, M.G.G., 1977. Manual for tidal heights analysis and prediction. Pacific Marine Science Report 77-10, Institute of Ocean Sciences, Patricia Bay, Sidney, B.C., 97 p.

--- 1978. Manual for tidal currents analysis and prediction. Pacific Marine Science Report 78-6, Institute of Ocean Sciences, Patricia Bay, Sidney, B.C., 70 p.

Foreman, M.G.G., W.R Crawford, and R.F. Marsden, 1995. De-tiding: Theory and practice, in D.R. Lynch and A.M. Davies (eds.), Quantitative skill assessment for coastal numerical models. Coastal and Estuarine Studies, American Geophysical Union, Washington, D.C., v. 47, p. 203-240.

Godin, G., 1972. The analysis of tides. University of Toronto Press, Toronto, 264 p.

Lalli, C. M. and T.R. Parsons, 1993. Biological oceanography: an introduction. Pergamon Press, Oxford University Press, 301 p.

Leecaster, M.K. and S.B. Weisberg, 2001. Effect of sampling frequency on shoreline microbiology assessments. Marine Pollution Bulletin, v. 42, p. 1150-1154.

Lygre, A. and H.E. Krogstad, 1986. Maximum entropy estimation of the directional distribution in ocean wave spectra. Journal of Physical Oceanography, v. 16, p. 2052-2060.

Mann, K.H. and J.R.N. Lazier, 1991. Dynamics of marine ecosystems: biological-physical interactions in the ocean. Blackwell Science, Cambridge, Massachusetts, 466 p.

MEC (MEC Analytical Systems, Inc.), 2001. Strategic Process Study, Plume tracking. June 1999 to September 2000, Final Report, v. I – Executive Summary.

Merrifield, M.A., and C.D. Winant, 1989. Shelf circulation in the Gulf of California: A description of the variability. Journal of Geophysical Research, v. 94, p. 18133-18160.

Noble, R.T., S.B. Weisberg, M.K. Leecaster, C.D. McGee, K. Ritter, K.O. Walker, and P.M. Vainik, 2003. Comparison of beach bacterial water quality indicator measurement methods. Environmental Monitoring and Assessment. v. 81, p. 301-312.

North, G.R., T.L. Bell, R.F. Cahalan, and F.J. Moeng, 1982. Sampling errors in the estimation of empirical orthogonal functions. Monthly Weather Review, v. 110, p. 699-706.

Orange County Sanitation District (OCSD), 2001. Laboratory operating procedure method 9060.

Press, W.H., S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery, 1992. Numerical recipes. Cambridge University Press, 2nd ed., 963 p.

Preisendorfer, R.W., 1988. Principal component analysis in meteorology and oceanography, in C.D. Mobley (ed.), Developments in atmospheric science. Elsevier, New York, v. 17, 425 p.

Priestley, M.B., 1981. Spectral analysis and time series. Academic Press, London, 890 p.

Roberts, P.J.W., 1995. Near-field modeling of the Mamala Bay outfalls. Water Science and Technology, v. 32, p. 159-166.

Roberts, P.J.W., W.H. Snyder, and D.J. Baumgartner, 1989a. Ocean outfalls: I. Submerged wastefield formation. Journal of Hydraulic Engineering, ASCE, v. 115, p. 1-25.

---, 1989b. Ocean outfalls: II. Spatial evolution of submerged wastefield. Journal of Hydraulic Engineering, ASCE, v. 115, p. 26-48.


U.S. Department of the Interior, U.S. Geological Survey, Western Region Coastal and Marine Geology
URL of this page: http://pubs.usgs.gov/of/2004/1019/chap2.html
Maintained by: Mike Diggles
Created: September 15, 2004
Last modified: October 13, 2004 (md)