%\documentstyle{article}
%%%%%%%%%%%%
%\input tcilatex
%\QQQ{Language}{
%American English
%}
%\begin{document}
\sectd{3. DATA PROCESSING}{DATA PROCESSING}
\vspace{-0.3in}
\subsect{Refraction Processing}
The data processing for 3-D refraction traveltime tomography is minimal, apart from the organization and picking of the first arrivals. Once picking is completed, the data must undergo a quality control procedure to search for data busts, bad traces, or cycle skips in the picks. A refraction traveltime tomography algorithm can then be applied to the traveltime picks.\
\subsubsect{3-D Traveltime Algorithm}
Traveltimes associated with direct and turning rays were inverted by a nonlinear inversion method, where the traveltimes were computed by a finite-difference solution to the eikonal equation (Qin et al., 1992). The model velocities are updated each iteration by a simultaneous iterative reconstruction technique (SIRT) (Gilbert, 1972). In this scheme the data misfit function is defined as:
\begin{equation}
\label{n1}\epsilon=\frac{1}{2} \sum_i(t_i^{obs} - t_i^{cal})^2,
\end{equation}
where the summation is over the raypath index $i$, $t_i^{obs}$ is the associated first-arrival traveltime pick, and $t_i^{cal}$ is the calculated traveltime. The gradient of the misfit function is given by:
\begin{equation}
\label{grad}\gamma_j= \frac{\partial \epsilon}{\partial s_j} = \sum_i \delta t_i \frac{\partial t_i}{\partial s_j},
\end{equation}
where $\delta t_i$ is the traveltime residual and $s_j$ is the slowness in the $j$th cell. Equation~\ref{grad} can be written as:
\begin{equation}
\label{grad2}\gamma_j= \sum_i \delta t_i \, l_{ij} \approx l \sum_i \delta t_i,
\end{equation}
where $l_{ij}$ is the segment length of the $i$th ray within the $j$th cell, and $l$ is the width of a square cell. By assuming that all the ray segments within a cell are of equal length (Qin and Cai, 1993), the model updating direction is given by the normalized formula:
\begin{equation}
\label{n2}\gamma_j=-\frac{1}{N_j}\sum_i \delta t_i,
\end{equation}
where $\gamma_j$ is the negative gradient of the misfit function in the $j$th cell, $N_j$ is the number of rays that visit the $j$th cell, $\delta t_i$ is the traveltime residual, and the summation is over the indices of the raypaths that visit the $j$th cell (Cai and Qin, 1993).\
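The normalized update of equation~\ref{n2} can be sketched in a few lines of Python. The ray bookkeeping, the step size, and the sign convention of the update (positive residuals increase slowness) are illustrative assumptions for this sketch, not the actual code of Qin and Cai:

```python
# Sketch of a SIRT-style normalized update: each cell's update
# direction is the average traveltime residual over the rays that
# visit it. Cell indexing, step size, and sign convention are
# illustrative assumptions.

def sirt_update(slowness, rays, residuals, step=1e-4):
    """slowness: list of cell slownesses (s/ft).
    rays: rays[i] is the list of cell indices visited by ray i.
    residuals: residuals[i] = t_obs[i] - t_cal[i] (seconds)."""
    n_cells = len(slowness)
    sum_dt = [0.0] * n_cells   # running sum of residuals per cell
    n_rays = [0] * n_cells     # N_j: number of rays visiting cell j
    for ray, dt in zip(rays, residuals):
        for j in ray:
            sum_dt[j] += dt
            n_rays[j] += 1
    updated = list(slowness)
    for j in range(n_cells):
        if n_rays[j] > 0:
            # positive residual (t_obs > t_cal) means the model is too
            # fast, so increase the slowness of the visited cells
            updated[j] += step * sum_dt[j] / n_rays[j]
    return updated, n_rays
```

Cells with $N_j = 0$ are left untouched here; the next paragraph describes how such cells are handled in the actual inversion.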
If no raypaths pass through a cell, i.e., $N_j = 0$, it is assigned the same gradient value as the cell just above it. This is called gradient downward extrapolation (Qin and Cai, 1993) because it extends the gradient field downward from the deepest point that rays reach for a given survey geometry. The technique reflects the fact that we have no information below the maximum depth of ray penetration, so we assume a downward continuation of the known velocities. This must be kept in mind when interpreting an image because its edges will display vertical striping associated with shallow ray penetration.\
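Gradient downward extrapolation amounts to a simple fill rule, sketched below for a 2-D grid; the array layout (depth index increasing downward) is an assumption made for illustration:

```python
# Sketch of gradient downward extrapolation: in each column of the
# gradient grid, a cell with no ray hits inherits the gradient of
# the cell just above it.

def extrapolate_down(grad, hits):
    """grad[z][x]: gradient values; hits[z][x]: ray counts N_j.
    z = 0 is the shallowest row; z increases with depth."""
    nz, nx = len(grad), len(grad[0])
    out = [row[:] for row in grad]
    for x in range(nx):
        for z in range(1, nz):
            if hits[z][x] == 0:
                out[z][x] = out[z - 1][x]  # copy value from above
    return out
```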
When an inversion problem is underdetermined, it is desirable to smooth the gradient field (Qin et al., 1994) after each set of iterations. In refraction traveltime tomography the number of traveltimes usually exceeds the number of unknowns, suggesting an overdetermined system, but irregular raypath coverage can still leave parts of the model underdetermined (Qin, 1994). A rectangular smoothing operator was therefore applied after each iteration using a dynamic smoothing technique (Nemeth et al., 1997), in which the size of the smoothing operator is changed after a prescribed number of iterations.\
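A rectangular (boxcar) smoother of this kind can be sketched in 1-D as follows; the edge handling (truncated windows) is an assumed choice, and in the actual inversion the operator is applied in all three dimensions:

```python
# Sketch of a 1-D rectangular (boxcar) smoothing operator. Dynamic
# smoothing applies this repeatedly with a shrinking half_width.

def box_smooth(values, half_width):
    """Average each sample over a window of 2*half_width + 1
    samples, truncating the window at the edges."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

Shrinking `half_width` after a number of iterations mimics the reduction of the filter volume recorded in Table 1.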
I used a multigrid 3-D tomography code developed by Qin and Cai (1994). The key idea in the multigrid technique (Nemeth et al., 1997) is to start with a coarse grid and reconstruct a long wavelength approximation to the velocity model. After a few iterations the decrease in the RMS traveltime residual stalls, and a finer grid spacing is then used for several more iterations to reveal finer details of the velocity model.\
This procedure is repeated for several more refinements until the physical resolution limits are achieved. In practice, the reconstructed velocity model is smoothed with a 20 ft x 20 ft x 10 ft smoothing filter whenever the grid is refined. Nemeth et al. (1997) suggest that the size of the smoothing filter should be iteratively diminished to achieve the best results. For the Oquirrh fault 3-D data, the smoothing filter initially had a volume of ($\delta$X, $\delta$Y, $\delta$Z) = 20 ft x 20 ft x 10 ft and was iteratively reduced to a volume of 2 ft x 2 ft x 2 ft. The filter volume was then set to 10 ft x 2 ft x 2 ft, reduced to 5 ft x 2 ft x 2 ft, and finally to 2 ft x 2 ft x 2 ft for one iteration each; repeating this procedure with the $\delta$Y and then the $\delta$Z values varied separately enhanced the velocity contrasts in each dimension and helped to define boundaries in the image. With a filter volume of 2 ft x 2 ft x 2 ft, the maximum number of effective unknowns was 21,493, and 83,570 traveltime equations were used for the inversion. A more complete record of the smoothing operator sizes and the numbers of effective unknowns is given in Table 1.\
For the Oquirrh fault data the grid point interval was initially set to $dx$=2 ft and was changed to $dx$=1 ft after 8 iterations. A plot of RMS traveltime residual vs iteration number is shown in Figure \ref{fig:resid}, where the large decrease in residual results from reducing the grid interval from $dx$=2 ft to $dx$=1 ft. The iterations were stopped when the decrease in traveltime residual stalled. The final RMS traveltime residual is 6.9 ms, which is above the estimated picking error of 3 ms; additional iterations beyond this point did not significantly reduce the residual. This multigrid technique is recommended because it sometimes converges quickly and so decreases the computational cost of the inversion (Qin and Cai, 1994).\
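The coarse-to-fine strategy can be sketched as a driver loop that iterates until the relative decrease in the RMS residual stalls and then refines the grid. The stall criterion and the `run_iteration` placeholder are hypothetical, standing in for one iteration of the actual tomography code:

```python
# Schematic multigrid driver: iterate on each grid spacing until
# the RMS residual stalls, then move to the next (finer) spacing.
# `run_iteration(dx)` is a placeholder for one tomography
# iteration that returns the current RMS traveltime residual.

def multigrid_drive(run_iteration, spacings, stall_tol=0.01, max_it=50):
    history = []  # (grid spacing, RMS residual) per iteration
    for dx in spacings:
        prev = float("inf")
        for _ in range(max_it):
            rms = run_iteration(dx)
            history.append((dx, rms))
            # stop when the relative improvement falls below stall_tol
            if prev - rms < stall_tol * prev:
                break
            prev = rms
    return history
```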
\pagebreak
\begin{center}
Table 1. Smoothing filter size vs iteration number for the 3-D inversion.
\vspace{0.1in}
\begin{tabular}{|l|c|c|c|} \hline
Iteration Number & Volume of & Grid Point & Effective Unknowns \\
& Smoothing Filter & Interval (feet) & \\
& ($\delta$X, $\delta$Y, $\delta$Z)& & \\ \hline
1 & 20'x20'x10' & 2 & 430 \\ \hline
6 & 10'x10'x5' & 2 & 860 \\ \hline
8 & 5'x5'x2' & 2 & 1791 \\ \hline
10 & 2'x2'x2' & 1 & 21,493 \\ \hline
12 & 10'x2'x2' & 1 & 12,282 \\ \hline
13 & 5'x2'x2' & 1 & 19,105 \\ \hline
14 & 2'x2'x2' & 1 & 21,493 \\ \hline
15 & 2'x10'x2' & 1 & 12,282 \\ \hline
16 & 2'x5'x2' & 1 & 19,105 \\ \hline
17 & 2'x2'x2' & 1 & 21,493 \\ \hline
18 & 2'x2'x10' & 1 & 12,282 \\ \hline
19 & 2'x2'x5' & 1 & 19,105 \\ \hline
20 & 2'x2'x2' & 1 & 21,493 \\ \hline
\end{tabular}
\end{center}
\figsp{resid}{Plot of RMS residual vs iteration number. The large drop in RMS residual after iteration number 8 is caused by refining the grid from $dx=2'$ to $dx=1'$.}{/home/dmmorey/thesis/thesis/figs/resid.ps}{\hsize}{5.0in}
\subsubsect{Quality Control of Traveltime Picks}
Quality control (QC) of the data is an important step prior to computing 3-D tomograms because correct data organization and accurate traveltime picking are crucial for a reliable inversion. An important QC procedure is the reciprocity test: if the picked traveltime $t(i,j)$ for the $i$th shot and $j$th receiver is approximately equal to the reciprocal traveltime $t(j,i)$, then the reciprocity test is passed. If not, the traveltime picks for the $i$-$j$ and $j$-$i$ traces should be reexamined.\
Reciprocity tests applied to the traveltime data are effective in detecting ``data busts.'' Figure \ref{fig:recip2d} shows an example of a 2-D reciprocity test. Reciprocity tests for the 3-D data were also conducted: the picked traveltime for a given source-receiver pair was compared to its reciprocal twin, and if the traveltime difference exceeded a threshold the traveltime was excluded from the inversion. Since every receiver point was also a shot point, every traveltime had a reciprocal twin, so all traveltimes were subject to the reciprocity test. The threshold used for this reciprocity QC was 6 ms, which is two times the estimated picking error of $1/4$ period. A total of 1880 out of 85,450 picked traveltimes were rejected by the reciprocity test, which amounts to 2.2\% of the traveltimes tested. The traveltime discrepancies that exceeded the threshold are plotted in Figure \ref{fig:recip3d}.\
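The 3-D reciprocity QC described above reduces to a simple filtering rule; the dictionary-of-picks data layout used here is a hypothetical choice for illustration:

```python
# Sketch of the reciprocity QC: reject any pick whose reciprocal
# twin disagrees by more than the threshold (6 ms in the text).

def reciprocity_filter(picks, threshold=0.006):
    """picks: dict mapping (shot, receiver) -> traveltime (s).
    Returns the picks whose reciprocal twin agrees within the
    threshold, plus the number rejected. Picks without a twin
    are kept."""
    kept, rejected = {}, 0
    for (i, j), t in picks.items():
        twin = picks.get((j, i))
        if twin is not None and abs(t - twin) > threshold:
            rejected += 1
        else:
            kept[(i, j)] = t
    return kept, rejected
```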
%
\figsh{recip2d}{Reciprocity test. Dashed line represents first arrival times $t(i,j)$ for the $i$th stationary shot point and the $j$th receiver location. Solid line represents the first arrival times for $t(j,i)$. Most of the above traveltimes satisfy the reciprocity test, i.e., $t(i,j)=t(j,i)$ to within a tolerance of $\pm$ 6 ms.}{/home/dmmorey/thesis/thesis/figs/recip2d.ps}{\hsize}{4.0in}
%
\figsh{recip3d}{3-D reciprocity test. Plot of discrepancy times $\mid t(i,j) - t(j,i) \mid$ that exceeded 6 ms vs the receiver offset.}
{/home/dmmorey/thesis/thesis/figs/3drecip.ps}{\hsize}{4.0in}
An additional concern for this data set is the possibility of traveltime discrepancies associated with cable shifts. A test was conducted in which 13 receivers overlapped following a cable shift along line 4. Figure \ref{fig:shift} shows a plot of traveltimes for a single shot location before and after shifting the cables. The differences between the solid and dashed lines are within the assumed 6 ms picking tolerance. The traveltimes were examined in this way for each cable shift.\
\figsh{shift}{Comparison of first-arrival traveltimes for two shots in the same position following a cable shift. The maximum separation of the two lines is less than 2.5 ms, which is below the estimated picking error.}{/home/dmmorey/thesis/thesis/figs/shift.ps}{\hsize}{4.0in}
\subsubsect{Synthetic Tomography Results}
A test using synthetic traveltimes was conducted to inspect the reliability of the tomographic image. This test was similar to those conducted by Dueker et al. (1993). The input model was a checkerboard of variable velocities (see Figure \ref{fig:23dcheck}) and had the same dimensions as the area investigated with the 3-D Oquirrh fault data.
%
\figsp{23dcheck}{Checkerboard test results: comparison of the upper 40 ft of (top) a slice of the 3-D tomogram, (middle) the model, and (bottom) the 2-D tomogram. The 3-D tomogram appears to have fewer artifacts than the 2-D tomogram at the 25-30 ft levels.}{/home/dmmorey/thesis/thesis/figs/23dcheck.ps}{\hsize}{6.0in}
%
The model was constructed by defining the background velocity to be similar to that found in the Oquirrh fault tomogram (see section 4). The velocity at the surface was defined to be 1000 ft/s and the vertical velocity gradient was assigned as 60 ft/s/ft (see Figure \ref{fig:23dcheck}). The background velocity was then varied by superimposing a checkerboard of velocities, where the velocity in each 10 ft x 10 ft x 10 ft cube differed from its neighboring velocities by 10\%. There was no variation of velocity in the Y direction. The source and receiver geometry for the synthetic data was identical to that of the 3-D Oquirrh fault data set, which consisted of 48 inline stations with a 3-ft station spacing and 7 crosslines with a 5-ft line spacing. Each station was occupied by a source and a receiver. These synthetic tests helped define reliable regions within the tomographic images (Dueker et al., 1993).\
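A checkerboard model of this kind can be generated by a short function; the cell-parity rule used to alternate the sign of the perturbation is an assumed implementation detail:

```python
# Sketch of the checkerboard velocity model: a 1000 ft/s surface
# velocity with a 60 ft/s/ft vertical gradient, perturbed +/-10%
# in 10-ft cubes, with no variation in Y (as in the text).

def checkerboard_velocity(x, y, z, cell=10.0, v0=1000.0,
                          grad=60.0, pert=0.10):
    """Velocity (ft/s) at position (x, y, z) in ft; z is depth."""
    v = v0 + grad * z                     # background gradient model
    # alternate the perturbation sign on a checkerboard in X and Z
    sign = 1 if (int(x // cell) + int(z // cell)) % 2 == 0 else -1
    return v * (1 + sign * pert)
```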
Figure \ref{fig:23dcheck} shows the results of the tomography tests on synthetic traveltime data and suggests that 10\% velocity variations in checkerboards of width 10 ft can be resolved to a depth of about 25-30 ft. These synthetic results also show areas where the subsurface image is distorted along the dominant direction of the raypaths: at depths greater than 20 ft and near the edges of the model, the checkerboard begins to be stretched in that direction. Figure \ref{fig:3dcheckray} shows an image of the raypaths for the 3-D synthetic results and suggests that, at the deep levels, the acute stretching effects are due to a paucity of rays.\
%
\figsh{3dcheckray}{Raypath density image associated with the 3-D checkerboard test. This shows that rays penetrated to a depth of about 40 feet for this shooting geometry and velocity gradient.}{/home/dmmorey/thesis/thesis/figs/3dcheckray.ps}{\hsize}{4.0in}
%
Figure \ref{fig:ray} shows an X-Z image of the raypaths associated with the 3-D tomogram and Figure \ref{fig:ray2d} shows the raypath coverage for the 2-D result. Notice the scale of the plots and the increased ray coverage for the 3-D result.\
%
\figsh{ray}{X-Z slice of the raypath coverage for the 3-D tomographic result, sliced at $y=15$ ft. Compare with Figure 16 and notice the increased ray coverage for the 3-D result.}{/home/dmmorey/thesis/thesis/figs/ray3d.ps}{\hsize}{5.0in}
%
\figsh{ray2d}{Raypath coverage for the 2-D tomographic result. Compare with Figure 15.}{/home/dmmorey/thesis/thesis/figs/ray2d.ps}{\hsize}{5.0in}
%
Two-dimensional tests were also conducted and the resulting 2-D tomograms were compared to the 3-D tomogram. Figure \ref{fig:23dcheck} shows the upper 40 feet of an X-Z slice of the 3-D tomogram, the model, and the 2-D tomogram. The 2-D tomogram shows similar characteristics but the checkerboard pattern is slightly less resolved at depths of about 30 feet. Artifacts that are similar in shape to the raypath density variations are also observed in the 2-D result and in some areas are more severe than in the 3-D case. This is because the 3-D raypath coverage is more azimuthally isotropic than the 2-D raypath coverage.
\subsect{2-D CDP Reflection Processing}
The goal of CDP reflection processing is to transform the seismic reflection data into an approximate reflectivity image of the subsurface geology. Processing of high-resolution shallow seismic data generally does not follow the usual steps found in most processing flows used in oil and gas exploration. This is because the depth of investigation for the oil and gas industry is much greater, so near-surface scattering, statics, and surface waves are less dominant.\
\pagebreak
\subsubsect{Theoretical Resolution Limits}
For a wavelength of 10 ft, the reflection image at a depth of 10 ft, in principle, should have a vertical resolution limit of about 3 feet and a horizontal resolution limit of 3 ft (Yilmaz, 1987). In comparison, the tomographic results should have a horizontal resolution of about 10 ft and a vertical resolution of about 2 ft (Schuster, 1996). Both of these estimates are based on the physical limitations of band-limited interference patterns.\
\pagebreak
\subsubsect{Data Sorting}
The first step in CDP data processing was to convert the data from the Bison seismograph format to the Society of Exploration Geophysicists Y (SEG-Y) format so that processing could be done with the Seismic Unix (SU) processing package. This was accomplished with the BIS2SEG.f program written by Y. Sun. The next step was to define the survey geometry, which includes the shot and receiver locations, shot and receiver offsets, CMP locations, and other known parameters that affect the data processing. This was accomplished with the Fortran program SETHEADER.f, which assigned the geometry parameters to the headers of each seismic trace. The CDP reflection processing flow for the Oquirrh 2-D data is shown in Figure \ref{fig:flow}.\
\figsp{flow}{Chart of the CDP processing flow for the Oquirrh fault 2-D data.}{/home/dmmorey/thesis/thesis/figs/flow.ps}{\hsize}{6.0in}
\subsubsect{Automatic Gain Control}
An instantaneous automatic gain control (AGC) was applied to the data to correct for the decrease in amplitude due to geometric spreading and attenuation (Yilmaz, 1987). This type of gain modifies the amplitudes in a time-varying manner and may not preserve true amplitudes in the final stacked section, but it does make deeper reflections more visible in the shot and CMP gathers. Applying AGC is justified for this investigation because we are interested in mapping the lateral coherency of the seismic reflectors and not the amplitude of the reflected energy. Figure \ref{fig:shot30} shows a raw shot gather with AGC applied.\
\figsp{shot30}{Common shot gather for a shot at station 30 with AGC applied and a trace spacing of 3 ft. The data are dominated by ground roll at the far offset traces.}{/home/dmmorey/thesis/thesis/figs/shot30.ps}{\hsize}{7.0in}
\subsubsect{Shot and Receiver Statics}
Shot statics were noticeable in the shot gathers. Here, I define the shot static as the timing error associated with the early or late triggering of the seismograph at each shot. The source is a 16-pound sledgehammer impacting an eight-inch square plate, and the trigger is a piezoelectric crystal securely taped to the handle of the hammer. The piezoelectric crystal sends a signal to the seismograph when the hammer impacts the plate, but delays in this signal produce discrepancies (i.e., shot statics) in the timing of the first arrivals at zero offset. The shot static was calculated by sorting the data into the common offset gather (COG) domain and picking the arrival time of the air wave at an offset of 9 ft (see Figure \ref{fig:cog9}).
%
\figsp{cog9}{Common offset gather for an offset of 9 ft. The arrow indicates the air wave, which should be flattened after a shot statics correction.}{/home/dmmorey/thesis/thesis/figs/cog9.ps}{\hsize}{7.0in}
%
The arrival times of the air waves were corrected to the time of 9 ft$\div$1100 ft/s = 0.0082 s. This correction is justified because the air wave travels at a constant velocity and is not dependent on the geology of the region; it should therefore arrive at a constant time for a constant offset on level ground. The shot static correction is then applied to all traces in the shot gather.\
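The air-wave static correction reduces to a one-line calculation; the 1100 ft/s air speed follows the text, and the sign convention (the static is subtracted from every trace of the shot) is an assumption of this sketch:

```python
# Sketch of the air-wave shot static: the static is the picked
# air-wave time minus its expected arrival time offset/AIR_SPEED.

AIR_SPEED = 1100.0  # ft/s, approximate speed of sound from the text

def shot_static(picked_airwave_time, offset_ft):
    """Return the shot static (s). Subtracting this static from
    every trace of the shot gather flattens the air wave at
    offset_ft / AIR_SPEED in the common offset gather."""
    expected = offset_ft / AIR_SPEED
    return picked_airwave_time - expected
```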
The above procedure can also be used to calculate a receiver static which is defined as the arrival time error due to a mislocation of the receiver. If the receiver is located closer to the shot than expected, then the air wave will arrive at an earlier time than expected. The receiver static is calculated by sorting the data into the common receiver domain and then repeating the above steps, making the static correction to all traces in the common receiver gather.\
\subsubsect{Common Midpoint Sorting and Muting}
The data must be sorted into the common midpoint domain before they can be converted into a stacked section. This is important because a zero-offset section can be obtained by summing normal moveout (NMO) corrected traces in a common midpoint (CMP) gather, so that energy from a reflecting horizon is enhanced while noise is canceled (Mayne, 1962). Muting of the CMP gathers was performed to remove coherent noise such as ground roll and the air wave. Muting was accomplished by zeroing the trace amplitudes above a polygonal curve defined by offset-time pairs. Figure \ref{fig:cmp179} shows a raw CMP gather and Figure \ref{fig:cmpmut} shows the same gather after muting. Muting such as this is also important in removing postcritical reflections from the stacked section.\
\figsp{cmp179}{Raw CMP 179 with a trace interval of 1.5 ft.}{/home/dmmorey/thesis/thesis/figs/cmp179.ps}{\hsize}{7.0in}
\figsp{cmpmut}{CMP 179 after muting to be compared with Figure 20.}{/home/dmmorey/thesis/thesis/figs/cmp179mut.ps}{\hsize}{7.0in}
\subsubsect{Velocity Analysis}
Velocity analysis was conducted in the CMP domain using two techniques. Semblance analysis (Yilmaz, 1987) was conducted on every 50th CMP to obtain a velocity model for the data set. The semblance function is defined as:
\begin{equation}
\label{n3}S_t=(\sum_{j=0}^{n-1} q_{t,j})^2/[n\sum_{j=0}^{n-1} (q_{t,j})^2].
\end{equation}
Here, $n$ is the number of non-zero traces to be summed, and $q_{t,j}$ is the amplitude of the $j$th seismic trace at time $t$. Figure \ref{fig:sem} shows an example of two velocity semblance panels.
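Equation~\ref{n3} translates directly into code; this sketch evaluates the semblance for the amplitudes along a single moveout trajectory at one time sample:

```python
# Sketch of the semblance S_t: the squared stacked amplitude
# normalized by n times the stacked squared amplitudes. S_t is 1
# for identical traces and small for incoherent ones.

def semblance(q):
    """q: amplitudes of the n non-zero traces at one time sample
    along a trial moveout trajectory."""
    n = len(q)
    num = sum(q) ** 2
    den = n * sum(a * a for a in q)
    return num / den if den else 0.0
```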
\figsp{sem}{Semblance panels for CMP 250 (left) and 650 (right), where the CMP interval is 0.75 ft. Darker areas correspond to larger semblance values and, typically, better stacking velocities. The solid line indicates the estimated velocity profile for that CMP.}{/home/dmmorey/thesis/thesis/figs/sem.ps}{\hsize}{6.0in}
A second velocity analysis was conducted by applying a normal moveout (NMO) correction to a portion of the data and stacking with a constant NMO velocity (Yilmaz, 1987). The NMO correction time $t_{nmo}(x)$ is defined by:
\begin{equation}
\label{n4}t_{nmo}(x)=\sqrt{t^2(0)+x^2/V^2_{nmo}}-t(0),
\end{equation}
where $t(0)$ is the zero-offset traveltime, $x$ is the offset, and $V_{nmo}$ is the moveout velocity. The initial NMO velocity model was computed from the semblance analysis, and the final velocities did not vary by more than 10\% from this initial estimate. Figure \ref{fig:velan} shows a portion of the stacked section for several different NMO velocities. The NMO velocities used in the final stacked section (see Figure \ref{fig:velocity}) ranged from 800 ft/s to 4000 ft/s, although velocities below a depth of 100 ft were not well constrained.\
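Equation~\ref{n4} can be evaluated directly; this sketch returns the moveout correction time for a single sample of a single trace:

```python
# Sketch of the NMO correction time of equation (6): the hyperbolic
# moveout relative to the zero-offset traveltime.

import math

def nmo_shift(t0, offset, v_nmo):
    """t0: zero-offset traveltime (s); offset: ft;
    v_nmo: moveout velocity (ft/s). Returns t_nmo(x) in seconds."""
    return math.sqrt(t0 ** 2 + (offset / v_nmo) ** 2) - t0
```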
\figsp{velan}{Velocity analysis of the stacked seismic section with CMPs between 100-150 (top) and 400-450 (bottom). The reflectors appear to be most coherent at stacking velocities of 1000 ft/s (top) and 1150 ft/s (bottom).}{/home/dmmorey/thesis/thesis/figs/velan.ps}{\hsize}{6.0in}
\figsh{velocity}{Contour plot of the velocities used to produce the final stacked section. Contours are in ft/s.}{/home/dmmorey/thesis/thesis/figs/velocity.ps}{\hsize}{4.0in}
\subsubsect{Residual Statics}
The source and receiver statics discussed earlier were insufficient to correct for the statics due to near-surface velocity variations. Because the surface consistency assumption (that all rays are vertically incident at the near surface) breaks down, the usual residual statics methods cannot be applied. As an alternative, we applied the residual statics correction known as maximum energy stacking (Qin, 1993), which removed many of the statics problems that remained in the data after the above procedures.\
The maximum energy stacking process described by Qin (1993) is an iterative one. Each trace is first divided into a small number of time windows for cross-correlation, time shifts are applied to obtain the maximum correlation with a master trace, and a new stacked section is generated. The time shift was limited to $\pm$ 10 samples (equivalent to 0.002 s) to prevent the shift from skipping a wave cycle. The process is then repeated, except that the new stacked section provides the ``master traces'' for correlation and a shorter time window is used. For the Oquirrh fault data set, four iterations of maximum energy stacking were applied with window lengths of 100 ms, 200 ms, 100 ms, and 50 ms. After the final iteration, the final stacked section was generated and band-pass filtered from 80-300 Hz to smooth the zones around adjacent time windows. Figure \ref{fig:cmpmaxen} compares an NMO-corrected CMP gather with the same gather after four iterations of maximum energy stacking, and Figure \ref{fig:maxen} compares a portion of the stacked section before and after maximum energy stacking. Maximum energy stacking improved the coherency of events in the stacked section without distorting the geologic structure.\
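The core of one maximum energy stacking iteration is a bounded cross-correlation search; this sketch finds the best shift for one window of one trace, with the correlation limits and tie-breaking behavior being assumed details of the sketch rather than of Qin's code:

```python
# Sketch of the bounded cross-correlation search: find the shift
# (limited to +/- max_shift samples) that maximizes correlation
# between a windowed trace and the master trace.

def best_shift(trace, master, max_shift=10):
    """Returns the shift s (samples, |s| <= max_shift) maximizing
    sum(trace[k] * master[k + s]); a positive s means the trace
    event arrives earlier than in the master trace."""
    n = len(trace)
    best, best_corr = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        corr = sum(trace[k] * master[k + s]
                   for k in range(n)
                   if 0 <= k + s < n)   # skip out-of-range samples
        if corr > best_corr:
            best, best_corr = s, corr
    return best
```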
%
\figsp{cmpmaxen}{Comparison of a CMP with an NMO correction applied before (left) and after (right) maximum energy stacking. The reflection events appear more ``flat'' in the CMP with maximum energy stacking.}{/home/dmmorey/thesis/thesis/figs/cmpmaxen.ps}{\hsize}{7.0in}
%
\figsp{maxen}{Comparison of a portion of the stacked section with (top) and without (bottom) maximum energy stacking. Notice the improved coherency of reflectors in the rectangles in the bottom figure. See Figure 5 for the ground locations associated with CMP values.}{/home/dmmorey/thesis/thesis/figs/maxena.ps}{\hsize}{6.0in}
%
\subsubsect{Poststack Migration}
Although the stacked section displays coherent events, it is necessary to migrate the data in order to move dipping reflectors into their correct positions and collapse diffractions. This is accomplished by poststack migration, in this case a time migration in the time-wavenumber (T-K) domain (Hale, 1991), where interval velocities are specified as a function of time. Time migration is appropriate as long as lateral velocity variations are mild to moderate (Yilmaz, 1987). Because this method only allows velocity to vary with time, the velocity model is 1-D. The velocities used were similar to the NMO correction velocities (see Figure \ref{fig:velocity}). Lateral velocity variations were accommodated by separating the stacked section into smaller sections and migrating each section independently. Figure \ref{fig:migsta} shows the migrated section and the final stacked seismic section, where both have been converted to depth.
%
\figsp{migsta}{Migrated section (top) and stacked seismic section (bottom). The migrated section provides a more accurate dip estimate of the subsurface horizons. The CMP interval is 0.75 ft.}{/home/dmmorey/thesis/thesis/figs/migsta.ps}{\hsize}{7.0in}
%
The migrated section improves the image in the vicinity of the antithetic fault and provides more accurate dip measurements on the subsurface horizons, but it does display some migration artifacts.\
%
%\end{document}