U.S. Geological Survey Open-File Report 2004-1435

Early to Middle Jurassic Salt in Baltimore Canyon Trough


Title Page
Abstract
Introduction
Geologic Setting
Data Collection
Data Processing
Geophysical Analysis
Discussion
Conclusions
Bibliography
List of Figures
List of Tables

Data Processing

The processing sequence is shown in Table 1. The Line 6 data contained transient but coherent noise that an early stack section showed to originate from shallow side-scatterers; this noise was suppressed with a two-dimensional frequency vs. wavenumber (F-K) dip filter (Swift and others, 1988). A wavelet deconvolution filter was applied to all of the data, collapsing the complex original source signature to a shorter, more symmetric wavelet that retained the frequency bandwidth of the original. The transformed wavelet permitted meaningful comparisons of amplitude and peak-to-trough variations and preserved reflector polarity.
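The F-K dip filtering described above can be sketched in a few lines of NumPy: transform the section to the frequency-wavenumber domain, zero the components whose apparent dip exceeds a cutoff, and transform back. This is only a minimal illustration, not the filter of Swift and others (1988); the grid spacings and the fan limit max_dip are hypothetical parameters.

```python
import numpy as np

def fk_dip_filter(section, dt, dx, max_dip):
    """Suppress steeply dipping coherent noise (e.g. side-scatter) by
    zeroing F-K components outside the signal fan |k| <= max_dip * |f|.
    `section` is a (n_time, n_traces) array; dt in s, dx in m,
    max_dip in (1/m)/Hz (apparent slowness)."""
    nt, nx = section.shape
    spec = np.fft.fft2(section)
    f = np.fft.fftfreq(nt, d=dt)[:, None]   # temporal frequency, Hz
    k = np.fft.fftfreq(nx, d=dx)[None, :]   # spatial wavenumber, 1/m
    mask = np.abs(k) <= max_dip * np.abs(f)  # keep the low-dip fan only
    return np.real(np.fft.ifft2(spec * mask))
```

A flat (zero-dip) event lies along k = 0 and passes unchanged, while energy with large apparent dip, such as that from shallow side-scatterers, falls outside the fan and is removed.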

No source-signature data were available for these lines, so the wavelet was estimated from a CMP (common mid-point) gather by a variable-norm deconvolution method (Gray, 1978). On the basis of the extracted wavelet, an inverse filter was designed with a zero-phase band-pass filter as the desired output wavelet. This inverse filter was applied to the data after a routine sort of the demultiplexed traces by CMP. The method required the assumption that the source signature did not change significantly along the section; the appropriateness of this uniform-source-wavelet assumption was judged from the consistency of the water-bottom reflection. The peak-amplitude onset of the water-bottom reflection throughout the deep part of the line indicated that the wavelet processing adequately collapsed the source signature and preserved polarity information.

Spherical-divergence and amplitude-decay corrections were made, followed by either a spiking (Line 2) or a predictive (Lines 6 and 10) deconvolution, and then by a standard sequence of normal-moveout corrections, mute, stack, a second predictive deconvolution, band-pass filtering, and AGC (automatic gain control). The sections shown in Figure 2 (a, b, c) had an amplitude-modulation function applied in order to compress the lines for display.

In addition to the conventional processing shown in Table 1, we used a true-amplitude processing technique to investigate the anomalous zone quantitatively (Figure 3). In true-amplitude processing, following the method of Lee and Hutchinson (1990), vertical and lateral amplitude variations were preserved by compensating for propagation effects, source and receiver variations, and near-surface inhomogeneities. The processing used an automatic editing procedure akin to that of Mayrand and Milkereit (1988), which optimizes the signal-to-noise ratio by comparing different subsets of stacked traces.
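The inverse-filter design described above amounts, in essence, to a least-squares (Wiener) shaping filter: a filter that, convolved with the extracted source wavelet, best approximates a desired zero-phase wavelet. A minimal sketch of that idea follows; it assumes the wavelet has already been estimated, and the prewhitening level and filter length are illustrative choices, not values from the report.

```python
import numpy as np

def shaping_filter(wavelet, desired, flen, prewhiten=0.01):
    """Least-squares shaping filter: convolving `wavelet` with the
    returned filter of length `flen` approximates `desired`."""
    # Autocorrelation of the wavelet (first flen lags)
    ac = np.correlate(wavelet, wavelet, mode="full")
    mid = len(wavelet) - 1
    r = ac[mid:mid + flen].astype(float)
    r[0] *= (1.0 + prewhiten)  # prewhitening stabilises the normal equations
    # Toeplitz normal-equation matrix built from the autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(flen)] for i in range(flen)])
    # Cross-correlation of the desired output with the wavelet
    xc = np.correlate(desired, wavelet, mode="full")
    g = xc[mid:mid + flen]
    return np.linalg.solve(R, g)
```

In practice the desired output would be a zero-phase band-pass wavelet spanning the bandwidth of the data, and the resulting filter would be convolved with every trace after the CMP sort.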
In the automatic trace-edit routine, traces whose signal strength fell below a specified fraction of a reference trace, or whose noise strength exceeded a derived portion of the reference trace, were rejected (Lee and Hutchinson, 1990).
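The trace-edit criterion can be illustrated as follows. The thresholds min_signal and max_noise, and the use of a fixed pre-first-break window to measure noise, are hypothetical stand-ins for the derived levels of Lee and Hutchinson (1990).

```python
import numpy as np

def auto_edit(traces, ref_trace, min_signal=0.3, max_noise=3.0, noise_window=50):
    """Flag traces to keep: reject any trace whose RMS signal is below
    `min_signal` times the reference RMS, or whose RMS in the leading
    `noise_window` samples exceeds `max_noise` times the reference RMS."""
    ref_rms = np.sqrt(np.mean(ref_trace ** 2))
    keep = []
    for tr in traces:
        sig = np.sqrt(np.mean(tr ** 2))
        noise = np.sqrt(np.mean(tr[:noise_window] ** 2))  # pre-signal window
        keep.append(sig >= min_signal * ref_rms and noise <= max_noise * ref_rms)
    return np.array(keep)
```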

Figure 2A, Line 2.
Figure 2B, Line 6.
Figure 2C, Line 10.
Figure 3A.
Figure 3B.
Figure 3C.
