Habitat_poly - prediction of benthic habitat distribution.


Frequently-anticipated questions:


What does this data set describe?

Title:
Habitat_poly - prediction of benthic habitat distribution.
Abstract:
Polygon shapefile with modified Greene and others (1999) habitat attributes. See the methods section for a detailed description.
Supplemental_Information:
Additional information about the field activities from which this data set was derived is available online at <http://walrus.wr.usgs.gov/research/projects/benthic_hab.html/>
Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Although this Federal Geographic Data Committee-compliant metadata file is intended to document the data set in nonproprietary form, as well as in ArcInfo format, this metadata file may include some ArcInfo-specific terminology.

  1. How should this data set be cited?

    Cochrane, Guy R., and Sagy, Yael, 2007, Habitat_poly - prediction of benthic habitat distribution: USGS Data Series 320, U.S. Geological Survey, Santa Cruz, CA.

    Online Links:

    This is part of the following larger work.

    Cochrane, Guy R., Warrick, Jonathan, Sagy, Yael, Finlayson, David, and Harney, Jodi, 2008, Sea-Floor Mapping and Benthic Habitat GIS at the mouth of the Elwha River, Washington. USGS Data Series 320.

  2. What geographic area does the data set cover?

    West_Bounding_Coordinate: -123.636116
    East_Bounding_Coordinate: -123.542824
    North_Bounding_Coordinate: 48.170257
    South_Bounding_Coordinate: 48.137078

  3. What does it look like?

  4. Does the data set describe conditions during a particular time period?

    Beginning_Date: 16-Mar-2004
    Ending_Date: 18-Mar-2004
    Currentness_Reference: Ground Condition

  5. What is the general form of this data set?

    Geospatial_Data_Presentation_Form: map

  6. How does the data set represent geographic features?

    1. How are geographic features stored in the data set?

      This is a Vector data set. It contains the following vector data types (SDTS terminology):

      • G-polygon (59549)

    2. What coordinate system is used to represent geographic features?

      Grid_Coordinate_System_Name: Universal Transverse Mercator
      Universal_Transverse_Mercator:
      UTM_Zone_Number: 10
      Transverse_Mercator:
      Scale_Factor_at_Central_Meridian: 0.999600
      Longitude_of_Central_Meridian: -123.000000
      Latitude_of_Projection_Origin: 0.000000
      False_Easting: 500000.000000
      False_Northing: 0.000000

      Planar coordinates are encoded using coordinate pair
      Abscissae (x-coordinates) are specified to the nearest 0.000002
      Ordinates (y-coordinates) are specified to the nearest 0.000002
      Planar coordinates are specified in meters

      The horizontal datum used is D_WGS_1984.
      The ellipsoid used is WGS_1984.
      The semi-major axis of the ellipsoid used is 6378137.000000.
      The flattening of the ellipsoid used is 1/298.257224.

  7. How does the data set describe geographic features?

    Entity_and_Attribute_Overview:

    FID Alias: FID Data type: OID Width: 4 Precision: 0 Scale: 0 Definition: Internal feature number. Definition Source: ESRI
    Shape Alias: Shape Data type: Geometry Width: 0 Precision: 0 Scale: 0 Definition: Feature geometry. Definition Source: ESRI
    ID Alias: ID Data type: Number Width: 10
    GRIDCODE Alias: GRIDCODE Data type: Number Width: 10
    MEGA_ID Alias: MEGA_ID Data type: String Width: 10
    BOTTOM_ID Alias: BOTTOM_ID Data type: String Width: 10
    MSO_MCR_ID Alias: MSO_MCR_ID Data type: String Width: 10
    MDFR_ID Alias: MDFR_ID Data type: String Width: 10
    SLOPE_ID Alias: SLOPE_ID Data type: String Width: 10
    COMPLEX_ID Alias: COMPLEX_ID Data type: String Width: 10
    DEPTH_ID Alias: DEPTH_ID Data type: String Width: 10
    GEO_UNIT Alias: GEO_UNIT Data type: String Width: 10
    CODE Alias: CODE Data type: String Width: 20
    temp Alias: temp Data type: String Width: 5
    Sum_Area Alias: Sum_Area Data type: Number Width: 9
    Area_perce Alias: Area_perce Data type: Number Width: 9 Number of decimals: 7
    Code_sumar Alias: Code_sumar Data type: Float Width: 19 Number of decimals: 11
    Code_perce Alias: Code_perce Width: 19 Number of decimals: 11

    Benthic habitat classification attributes: megahabitat, bottom induration, meso/macrohabitat, and modifiers from Greene and others (1999). CODE is a concatenation of the habitat attributes; for example, a code such as “Sm2B{2}” would denote shelf, mixed bottom, sloping, low complexity, and 0-30 m depth. MEGA_ID is S for “Shelf.” BOTTOM_ID is h for hard bottom, m for mixed hard and soft bottom, or s for soft sediment bottom. MSO_MCR_ID holds the meso/macrohabitats described in Greene and others (1999). MDFR_ID holds modifiers that describe the texture or lithology of the seafloor and appear in the code preceded by an underscore (_), including anthropogenic (_a).
    Entity_and_Attribute_Detail_Citation:
    Habitat attribute types are modified after Greene, H.G., Yoklavich, M.M., Starr, R.M., O'Connell, V.M., Wakefield, W.W., Sullivan, D.E., McRea, J.E., and Cailliet, G.M., 1999, A classification scheme for deep seafloor habitats: Oceanologica Acta, v. 22, p. 663-678.


Who produced the data set?

  1. Who are the originators of the data set? (may include formal authors, digital compilers, and editors)

  2. Who also contributed to the data set?

    The authors would like to thank Larry Kooker, Mike Boyle, Gerry Hatcher, Dave Hogg, and Hank Chezar of the USGS Marine Facility (Redwood City, CA), who contributed field and logistical support, as did Andrew Stevenson of the USGS Coastal and Marine Geology Program. The R/V Karluk was piloted by Katherine Peet of NOAA.

  3. To whom should users address questions about the data?

    Guy R. Cochrane
    United States Geological Survey (USGS) Coastal and Marine Geology Program (CMGP)
    Geophysicist
    USGS, 400 Natural Bridges Drive
    Santa Cruz, CA 95060-5792
    USA

    (831) 427-4754 (voice)
    (831) 427-4748 (FAX)
    gcochrane@usgs.gov


Why was the data set created?

These data are intended for science researchers, students, policy makers, and the general public. The data can be used with geographic information systems (GIS) software to display geologic and oceanographic information.


How was the data set created?

  1. From what previous works were the data drawn?

  2. How were the data generated, processed, and modified?

    Date: 2007 (process 1 of 2)

    General description: Habitat_poly was created using bathymetry and backscatter grids as the input grids. The processing steps for the creation of those grids are described in their respective metadata files. All grids mentioned in the following are in ESRI GRID format unless noted otherwise. The bathymetry (1-m pixel) and backscatter (0.25-m pixel) grids (bat_total and amp2texscal, respectively) were clipped to encompass just the west side of the survey area, where we chose to perform the supervised classification.
    ********************Processing of the bathymetry grid*****************************************
    A rugosity grid was created from bat_total using the Benthic Terrain Modeler (downloaded from the internet if needed): choose Rugosity Builder to create the rugosity grid - output: bat_tw_rug. Rugosity is best defined as the ratio of surface area to planar area; basically, rugosity is a measure of terrain complexity, or the "bumpiness," of the terrain. In benthic environments, rugosity can be used to aid in the identification of areas with high biodiversity, depending on the scale of the input bathymetry.
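    The rugosity measure is easy to approximate outside the Benthic Terrain Modeler. The Python sketch below is an illustrative stand-in, not the Rugosity Builder's algorithm: it estimates per-cell rugosity from a bathymetry array via a slope-based surface-area approximation, and the function name and 1-m default cell size are assumptions.

        import numpy as np

        def rugosity(dem, cell=1.0):
            # Approximate rugosity = surface area / planar area per cell.
            # Illustrative stand-in for the Rugosity Builder, not its code.
            dzdy, dzdx = np.gradient(dem, cell)      # elevation gradients
            slope = np.arctan(np.hypot(dzdx, dzdy))  # slope angle, radians
            return 1.0 / np.cos(slope)               # 1.0 = flat; larger = rougher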
    ********************Processing of the backscatter grid*****************************************
    *** Step 1: import (to rouse)**************************************************
    General file = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal_1.tif Size (bytes) = 418578958
    Output dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE
    Parameters selected :
    TIFF = Yes
    *** Step 2: filter (applying a 3-by-3 low-pass filter)*************************
    Input dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE
    Output dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_filt Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_filt.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE

    Parameters selected : LPF = Yes LPFZ = No LPFV = No HPF = No DIV = No TRIM = No NOISE = No
    NL = 3 NS = 3 LOW = 0 HIGH = 255 PRESERVE = No PERCENT = 0 MULT = 1
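    The LPF pass with NL = 3 and NS = 3 is an ordinary 3-by-3 moving-average filter. A minimal Python equivalent (the edge handling is an assumption, and this ignores the program's other parameters):

        import numpy as np
        from scipy import ndimage

        def lowpass3x3(img):
            # 3x3 moving-average (low-pass) filter, as in Step 2 above;
            # assumes an unsigned-byte image, as the log reports.
            smooth = ndimage.uniform_filter(img.astype(float), size=3,
                                            mode="nearest")
            return smooth.round().astype(np.uint8)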
    *** Step 3: Stretch*************************************************************
    Input dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_filt Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_filt.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE
    Output dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_filt_strch Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_filt_strch.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE
    Parameters selected : UBYTE = Yes SWORD = No UWORD = No FLOAT = No STRETCH = 157;1 219;255
    Number of Lines 15079, Number of samples 27751
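    The STRETCH = 157;1 219;255 parameter reads as a linear stretch mapping input value 157 to output 1 and input 219 to output 255. A Python sketch of that reading (clipping values outside the input range is an assumption):

        import numpy as np

        def linear_stretch(img, in_lo=157, in_hi=219, out_lo=1, out_hi=255):
            # Linear contrast stretch: in_lo -> out_lo, in_hi -> out_hi,
            # with input values clipped to [in_lo, in_hi] first.
            x = np.clip(img.astype(float), in_lo, in_hi)
            y = (x - in_lo) / (in_hi - in_lo) * (out_hi - out_lo) + out_lo
            return np.round(y).astype(np.uint8)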
    *** Step 4: texscal ***********************************************************
    Program: TexScal vers. 2e USGS Guy R. Cochrane gcochrane@usgs.gov W = 11 D = 2 Input Matrix filename: amp_filt_strch.img LINES: 15079 SAMPLES: 27751
    bmin = 0 bmax = 255 emin = 0.000000 emax = 5.654908 hmin = 0.000000 hmax = 1.762250 entropy correction = 44.916747 homogeneity correction = 144.133918
    *** Step 5: texgen ************************************************************
    Program TexGen vers. 2e USGS Guy R. Cochrane gcochrane@usgs.gov
    Input Matrix filename: amp_filt_strch.img
    Entropy minimum value: 0.000000 Homogeneity minimum value: 0.000000
    Entropy correction value: 44.916747 Homogeneity correction value: 144.133918
    Output Entropy filename: amp_filt_strchent.img Output Homogeneity filename: amp_filt_strchhom.img
    LINES: 15079 SAMPLES: 27751 W = 11 D = 2
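    TexScal and TexGen are in-house USGS texture programs, so the sketch below is not their actual code. It illustrates the computation they report (per-window gray-level co-occurrence entropy and homogeneity, with window size W and pixel offset D) in Python with scikit-image. The log2 entropy and inverse-difference-moment homogeneity formulas, and the slow reference loop, are assumptions for illustration.

        import numpy as np
        from skimage.feature import graycomatrix

        def glcm_texture(img, w=11, d=2):
            # Per-pixel GLCM entropy and homogeneity in a w-by-w window with
            # pixel offset d (slow reference loop; img must be uint8).
            half = w // 2
            ent = np.zeros(img.shape, float)
            hom = np.zeros(img.shape, float)
            r, c = np.indices((256, 256))
            for i in range(half, img.shape[0] - half):
                for j in range(half, img.shape[1] - half):
                    win = img[i - half:i + half + 1, j - half:j + half + 1]
                    p = graycomatrix(win, [d], [0], levels=256,
                                     symmetric=True, normed=True)[:, :, 0, 0]
                    nz = p[p > 0]
                    ent[i, j] = -np.sum(nz * np.log2(nz))          # entropy
                    hom[i, j] = np.sum(p / (1.0 + (r - c) ** 2))   # homogeneity
            return ent, hom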

    *** Step 6: creating a new set of grids from amp_2_texscal.img with 1 m pixel size****
    Using scale, we divided the number of cells by four and entered those numbers into the scale program as follows:
    ****************************** scale ****************************** Execution date: Thu Oct 26 13:39:18 2006
    Output dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_ag_1m Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_ag_1m.img Size (nl,ns) = 3770,6938 Type = UNSIGNED_BYTE
    Input dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_2_texscal.img Size (nl,ns) = 15079,27751 Type = UNSIGNED_BYTE
    Parameters selected : NN = No AVERAGE = Yes NL = 3770 NS = 6938 LSCALE = 0.2500165793487632 SSCALE = 0.2500090086843718
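    A hedged Python sketch of this aggregation step, assuming a clean 4-by-4 block mean rather than the fractional LSCALE/SSCALE resampling the scale program actually reports:

        import numpy as np

        def block_average(img, f=4):
            # Aggregate f-by-f blocks of cells by their mean (AVERAGE = Yes),
            # trimming any ragged edge rows/columns first.
            nl, ns = (img.shape[0] // f) * f, (img.shape[1] // f) * f
            blocks = img[:nl, :ns].reshape(nl // f, f, ns // f, f)
            return blocks.mean(axis=(1, 3))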
    *** Step 7: Stretch*************************************************************
    Input dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_ag_1m Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_ag_1m.img Size (bytes) = 26156261 Type = UNSIGNED_BYTE
    Output dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_1m_strch Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_1m_strch.img Size (bytes) = 26156261 Type = UNSIGNED_BYTE
    Parameters selected : UBYTE = Yes SWORD = No UWORD = No FLOAT = No STRETCH = 159;1 216;255
    Number of Lines 3770, Number of samples 6938
    *** Step 8: texscal ***********************************************************
    Program: TexScal vers. 2e USGS Guy R. Cochrane gcochrane@usgs.gov Process Started 10/26/2006 13:53:03 W = 5 D = 1 wd=6 wd2=3 Input Matrix filename: amp_1m_strch.img LINES: 3770 SAMPLES: 6938
    bmin = 0 bmax = 255 emin = 0.406397 emax = 5.531759 hmin = 0.000898 hmax = 1.857363 entropy correction = 49.557474 homogeneity correction = 136.819181 Process Finished 10/26/2006 14:25:55
    *** Step 9: texgen ************************************************************
    Program TexGen vers. 2e USGS Guy R. Cochrane gcochrane@usgs.gov 10/30/2006 09:21:13
    Input Matrix filename: amp_1m_strch.img
    Entropy minimum value: 0.406397 Homogeneity minimum value: 0.000898
    Entropy correction value: 49.557474 Homogeneity correction value: 136.819181
    Output Entropy filename: amp_1m_strchent.img Output Homogeneity filename: amp_1m_strchhom.img
    LINES: 3770 SAMPLES: 6938 W = 5 D = 1 Valid pixels processed: 12102714.000000
    *** Step 10: export ************************************************************
    The following files were exported to .tiff and later imported into ArcMap:
    Input: amp_filt_strch.img, Output: amp_w11_strch.tiff
    Input: amp_filt_strchhom.img, Output: amp_w11_hom.tiff
    Input: amp_filt_strchent.img, Output: amp_w11_ent.tiff
    Input: amp_1m_strch.img, Output: amp_1m_strch.tiff
    Input: amp_1m_strchhom.img, Output: amp_1m_hom.tiff
    Input: amp_1m_strchent.img, Output: amp_1m_ent.tiff
    The files were exported using the export command; here is an example:
    **************************** export ******************************
    Input dataset = /work4/ysagy/K-1-05-PS/new_proc/amp_filt_strch Image file = /work4/ysagy/K-1-05-PS/new_proc/amp_filt_strch.img Size (bytes) = 418457330 Type = UNSIGNED_BYTE
    General file = /work4/ysagy/K-1-05-PS/new_proc/amp_w11_strch.tiff Size (bytes) = 418459165
    Parameters selected : TIFF = Yes

    Date: 2007 (process 2 of 2)
    Processing for all habitat polygon regions:
    1. ArcGIS Create Signatures tool: Created a signature polygon file by identifying areas of verified (through video observation) hard, mixed, soft, and sand-wave bottom. Used 7 layers in the “stack” of files to create signatures. Layers included (in this order):
    - euclidean distance grid of track lines
    - rugosity (derived from mosaicked bathy using the Benthic Terrain Modeler's Rugosity Builder)
    - entropy from backscatter at 1-meter resolution
    - homogeneity of backscatter at 1-meter resolution
    - mosaicked backscatter at 0.25-meter resolution
    - entropy from backscatter at 0.25-meter resolution
    - homogeneity of backscatter at 0.25-meter resolution
    2. Using the output from the first round of the ArcGIS “Create Signatures” tool, we edited some of the polygons in the signature file and added 2 additional classes: nadir soft and nadir rock. We also created a buffer of no data along the track line (nadir), which was converted to a grid. Used the same 7 layers above and re-ran the “Create Signatures” tool. Using the new output file, assigned the classes nadir rock and nadir soft to signature “soft.”
    3. Painted out residual rock noise and sand-wave noise using ArcScan and the Raster Painting tool. Because this tool works only with binary files (and no data is also a value), we reclassified the grid, then used the "clean up" tool to erase, once for the areas of rock and once for the areas of sand waves. We then re-merged all the grids together (the ones without the rock and sand-wave classification with the ones we had just painted out (edited)).
    4. Filled in the areas of no data in the grid by merging ("mosaic to new raster") other grids computed using Block Statistics (under Spatial Analyst): select Neighborhood, rectangle of 5 by 5, and statistic type Majority. Repeated this step to create grids of 10 by 10, 20 by 20, 30 by 30, and so on, until all areas of no data were interpolated (i.e., no areas of no data in the final mosaicked grid). A minimal sketch of one fill pass follows.
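    For readers without ArcGIS, one pass of this majority fill might be sketched in Python as follows; the NODATA sentinel, the small non-negative integer class codes, and the edge handling are all assumptions, not the Block Statistics implementation.

        import numpy as np
        from scipy import ndimage

        NODATA = 0  # hypothetical no-data sentinel

        def majority_fill(grid, size):
            # Fill NODATA cells with the majority value of a size-by-size
            # neighborhood, as in the Block Statistics / Majority step above.
            def majority(window):
                vals = window[window != NODATA].astype(int)
                return np.bincount(vals).argmax() if vals.size else NODATA
            filled = ndimage.generic_filter(grid, majority, size=size)
            return np.where(grid == NODATA, filled, grid)

        # Repeat with growing neighborhoods until no gaps remain:
        # for size in (5, 10, 20, 30):
        #     grid = majority_fill(grid, size)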
    5. Converted the 4-class (soft, mixed, rock, and sand-waves) grid to a polygon. This is done using Raster to Polygon in 3D Analyst (make sure that "Simplify Polygon" is UNCHECKED!).
    6. To decrease the number of polygons, i.e., to remove polygons from classes 1 and 2 (soft and mixed) that are smaller than 25 square meters while keeping classes 3 and 4 (rock and sand waves, respectively), we added a field Area and, in XTools Pro, used a Table operation to calculate Area (we chose only meters and area for calculation). We then selected by Area and GRIDCODE and calculated GRIDCODE = 0.
    7. Added all Greene et al. (1999) habitat code ID and definition columns to the polygons. Used select-by-attribute, select-by-location, and manual tools to query and assign habitat code attributes. The following steps follow the “Deep-water Marine Benthic Habitat Classification Scheme” (modified after Greene et al., 1999) and provide the detail needed to add all Greene code fields to the newly classified, edited, and aggregated polygons. Eight fields will be added to the Attribute Table. Use uppercase letters for field names. All field types will be text. The length of all fields is 10 (except for the HAB_TYPE field, which will be 20). Add the fields in the following order: MEGA_ID, BOTTOM_ID, MSO_MCR_ID, SLOPE_ID, COMPLEX_ID, DEPTH_ID, GEO_UNIT, HAB_TYPE.
    MEGA_ID (Megahabitat) This category is based on depth and general physiographic boundaries and is used to distinguish regions and features on a scale of 10s of kilometers to kilometers. Depth ranges listed for category attributes in the key are given as generalized examples. This first category is denoted with a capital letter.
    Using the “Field Calculator” right-click option (for all rows in this column), enter the specific capital letter describing the megahabitat:
    A = Aprons, continental rise, deep fans and bajadas (3000-5000 m)
    B = Basin floors, Borderland types (floors at 1000-2500 m)
    F = Flanks, continental slope, basin/island-atoll flanks (200-3000 m)
    I = Inland seas, fiords (0-200 m)
    P = Plains, abyssal (>5000 m)
    R = Ridges, banks and seamounts (crests at 200-2500 m)
    S = Shelf, continental and island shelves (0-200 m)
    BOTTOM_ID (Seafloor Induration) Bottom induration refers to substrate hardness and is depicted by the second letter (a lower-case letter) in the code. Designations of hard, mixed, and soft seafloor can be further subdivided into distinct sediment types, and are then listed immediately afterwards in parentheses in alphabetical order or in order of relative abundance.
    Select by Attributes: Select all rows for GRIDCODE = 1. Use the Field Calculator right-click option to enter “h” for hard bottom, rock outcrop, relic beach rock, or sediment pavement. Select all rows for GRIDCODE = 2. Use the Field Calculator to enter “m” for mixed (hard & soft bottom). Select all rows for GRIDCODE = 3. Use the Field Calculator to enter “s” for soft bottom, sediment covered. Add the sediment types where there are video observations to verify that a polygon contains a specific sediment type. Add all sediment types that apply (were observed for each polygon), in parentheses after the lower-case seafloor induration type of “h,” “m,” or “s.” Sediment types (for the above indurations):
    (b) = boulder
    (c) = cobble
    (g) = gravel
    (h) = halimeda sediment, carbonate
    (m) = mud, silt, clay
    (p) = pebble
    (s) = sand
    Check that no cell is left without a value. To check for empty values, use “Select by Attributes” and select BOTTOM_ID: Get Unique Values.
    MSO_MCR_ID (Meso/Macrohabitat) This distinction is related to the scale of the habitat and consists of seafloor features ranging from 1 kilometer to 1 meter. Meso/Macrohabitats are noted as the third letter (a lower-case letter) in the code. If necessary, several Meso/Macrohabitats can be included alphabetically or in order of relative abundance, separated by a backslash.
    Add the Meso/Macrohabitat where there are video observations to verify that a polygon contains a specific habitat. Add all habitat types that apply. Habitat may also be assigned based on knowledge of the area.
    a = atoll
    b = beach, relic
    c = canyon
    d = deformed, tilted and folded bedrock
    e = exposure, bedrock
    f = flats
    g = gully, channel
    i = ice-formed feature or deposit, moraine, drop-stone depression
    k = karst, solution pit, sink
    l = landslide
    m = mound, depression
    n = enclosed waters, lagoon
    o = overbank deposit (levee)
    p = pinnacle
    r = rill
    s = scarp, cliff, fault or slump
    t = terrace
    w = sediment waves
    y = delta, fan
    z# = zooxanthellae hosting structure, carbonate reef
    1 = barrier reef
    2 = fringing reef
    3 = head, bommie
    4 = patch reef
    Note: To classify all the polygons that fall within a certain polygon region, like a “Fan Delta,” use the Select By Location option: select features from the habitat poly shapefile that “have their center in” the current layer, and select from the current selection.
    MODIFIER_ID The fourth letter in the code, a modifier, is noted with a lower-case subscript letter or separated by an underline in some GIS programs (e.g., ArcView). Modifiers describe the texture or lithology of the seafloor. If necessary, several modifiers can be included alphabetically or in order of relative abundance and separated by a backslash.
    Determine using the same deduction methods as for Meso/Macrohabitat. Use an underscore before the lowercase letter.
    _a = anthropogenic (artificial reef/breakwall/shipwreck)
    _b = bimodal (conglomeratic, mixed [includes gravel, cobbles and pebbles])
    _c = consolidated sediment (includes claystone, mudstone, siltstone, sandstone, breccia, or conglomerate)
    _d = differentially eroded
    _f = fracture, joints-faulted
    _g = granite
    _h = hummocky, irregular relief
    _i = interface, lithologic contact
    _k = kelp
    _l = limestone or carbonate
    _m = massive
    _p = pavement
    _r = ripples
    _s = scour (current or ice, direction noted)
    _u = unconsolidated sediment
    _v = volcanic rock

    SLOPE_ID (Seafloor Slope) The fifth category, listed by a number following the modifier, denotes slope. Slope is calculated for a survey area from x-y-z multibeam data.
    Create a slope raster using the Spatial Analyst drop-down toolbar, Spatial Analyst > Surface Analysis > Slope, to create a slope grid from the bathy raster. Add a new, temporary text field to the habitat polygon shapefile called TempID. Using the right-click “Field Calculator” option, populate the field with the values from the FID column (the FID does not show up as an option for the zone field, so we have to create a text version of that field). Open the Zonal Statistics tool from the Spatial Analyst toolbar (not from ArcToolbox), Spatial Analyst > Zonal Statistics: The zone dataset is the habitat polygon shapefile. The zone field is the “TempID” field. The value raster is the slope raster. Make sure that “Ignore NoData in calculation” is checked on. The table generated can be joined to the original habitat grid if you check the box to join the output table and the zone layer (in this case the output table is the habitat polygon, and the habitat polygon table may need to be closed and reopened to refresh and show the join). Uncheck the “chart statistics” box to make the tool run faster. Create a new text field called "SLOPE_ID." Select the column from the joined “MEAN” zone field for each of the slope classifications (1-5) and populate the new field with the appropriate number. For example, from the Selection menu use “Select by Attribute”; the layer is the habitat polygon; the method is to “Create a new selection”; choose the “MEAN” field; and in the equation box enter “MEAN >= 1 AND MEAN <= 30”; view the selected rows only and use the right-click “Field Calculator” option to populate the selected rows of the “SLOPE_ID” field with the number 2.
    1 = Flat (0-1º)
    2 = Sloping (1-30º)
    3 = Steeply Sloping (30-60º)
    4 = Vertical (60-90º)
    5 = Overhang (> 90º)
    When completed for each of the five “SLOPE_ID” categories, remove the join from the habitat table by right-clicking on the habitat polygon shapefile and selecting Joins and Relates > Remove Joins > Remove All Joins.
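    As a cross-check on the thresholds above, the mapping from a polygon's zonal mean slope to its SLOPE_ID can be expressed in a few lines of Python (the function name is illustrative):

        import numpy as np

        def slope_id(mean_slope_deg):
            # Map zonal mean slope (degrees) to the SLOPE_ID classes above:
            # 1 flat, 2 sloping, 3 steeply sloping, 4 vertical, 5 overhang.
            return int(np.digitize(mean_slope_deg, [1, 30, 60, 90])) + 1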


    COMPLEX_ID (sea-floor complexity) Complexity is denoted by the sixth letter. Complexity is calculated from rugosity data using neighborhood statistics and reported in standard deviation units. The rugosity value based on the bathy is a better measure of the seafloor's complexity than the bathy value itself.
    The COMPLEX_ID is obtained from the rugosity raster by a process similar to how the “SLOPE_ID” is obtained from the slope raster. The one difference is that the complexity classifications are based on the standard deviation of the rugosity raster rather than mean values. Use the same “TempID” field in the Zonal Statistics dialogue box for the join.
    If you have not already created a rugosity raster, use the “Benthic Terrain Modeler” tool to calculate rugosity. The tool can be obtained online at: <http://dusk2.geo.orst.edu/djl/samoa/tools.html>. The install is fast and is explained in the accompanying “ReadMe” file. After the Benthic Terrain Modeler is installed and the toolbar is visible, create the rugosity raster using Benthic Terrain Modeler > Rugosity Builder. Depending on the size of the file, the Rugosity Builder tool can take anywhere from 30 to 90 minutes to complete for one bathy raster. Open the Zonal Statistics tool from the Spatial Analyst toolbar (not from ArcToolbox), Spatial Analyst > Zonal Statistics. The zone dataset is the habitat polygon shapefile. The zone field is the “TempID” field. The value raster is the rugosity raster. Make sure that “Ignore NoData in calculation” is checked on. The table generated can be joined to the original habitat grid if you check the box to join the output table and the zone layer (in this case the output table is the habitat polygon, and the habitat polygon table may need to be closed and reopened to refresh and show the join). You can uncheck the “chart statistics” box to make the tool run faster (the chart is not used). Create a new text field called "COMPLEX_ID." Select from the joined “STD” zone value field (you may want to remove the other fields, or just leave them until you remove the entire join) for each of the complexity classifications (A, B, C, D, E) and populate the new field with the appropriate letter. For example, from the “Selection” menu use “Select by Attribute”; the layer is the habitat polygon; the method is to “Create a new selection”; choose the “STD” field; in the equation box enter “STD >= 0 AND STD <= 1”; view selected only and use the right-click “Field Calculator” option to populate the selected rows of the “COMPLEX_ID” field with the letter “B.”
    A = Very Low Complexity (-1 to 0)
    B = Low Complexity (0 to 1)
    C = Moderate Complexity (1 to 2)
    D = High Complexity (2 to 3)
    E = Very High Complexity (3+)
    When completed for each of the five “COMPLEX_ID” categories, remove the join from the habitat table by right-clicking on the habitat polygon shapefile and selecting Joins and Relates > Remove Joins > Remove All Joins.
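    The analogous mapping from the zonal standard deviation of rugosity to a COMPLEX_ID letter, again as an illustrative Python sketch:

        import numpy as np

        def complex_id(rugosity_std):
            # Map zonal standard deviation of rugosity to COMPLEX_ID A-E
            # using the class breaks listed above.
            return "ABCDE"[int(np.digitize(rugosity_std, [0, 1, 2, 3]))]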
    DEPTH_ID (Seafloor Depth) Depth is denoted by the seventh placeholder and is listed using numbers in curly brackets. Depth is calculated from the bathy depth values.
    DEPTH_ID calculation is done differently than SLOPE_ID and COMPLEX_ID. A visualization of the final depth classes will show a clear visual boundary between the four classes. Classify the bathy raster using the DEPTH_ID categories:
    {1} = Intertidal (<0 m)
    {2} = 0 m - 30 m
    {3} = 30 m - 100 m
    {4} = 100 m - 200 m
    Create a new raster using the Reclassify option in Spatial Analyst with the 4 depth classes above. Convert the new, reclassified bathy raster to a polygon shapefile using your favorite method. Using the Select By Location tool from the “Selection” drop-down menu, select all the polygons in the habitat polygon shapefile that are within each of the classes. Add the appropriate DEPTH_ID values using the “Field Calculator.” For example, using the “Select by Attribute” option from the “Selection” menu, select all the polygons that have a value of 2, representing depths between 0 and 30 m. Then use the “Select by Location” option from the “Selection” menu and select features from the habitat polygon shapefile that “are contained by” the bathy depth polygon (the selected polygons from the bathy depth polygons will automatically be the subset used).
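    The depth reclassification can likewise be summarized as a small Python sketch; the positive-down sign convention and the clamping of depths beyond 200 m to class {4} are assumptions:

        import numpy as np

        def depth_id(depth_m):
            # Reclassify depth (meters, positive down) into DEPTH_ID {1}-{4}
            # using the class breaks listed above.
            k = int(np.digitize(depth_m, [0, 30, 100, 200], right=True)) + 1
            return min(k, 4)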
    HABITAT TYPE The Habitat Type or “HAB_TYPE” field is the concatenation of all of the fields derived so far. The “HAB_TYPE” value can be obtained using the right-click “Field Calculator” option: right-click on the “HAB_TYPE” column to open the Field Calculator. In the calculator, list each field name (keeping the exact order of the fields as they appear in the attribute table), with the “&” symbol for concatenation between each field: [MEGA_ID]&[BOTTOM_ID]&[SLOPE_ID]&[COMPLEX_ID]&[DEPTH_ID] Fields with no value will appear as a space in the code string using the above method. There are several ways to deal with this: 1) Write an Access Database expression that escapes spaces (I have not yet done this successfully). 2) Concatenate in subsets: first concatenate for all rows that have a value in every field; next, concatenate for all rows that are missing values in the same column, leaving that column out of the concatenation string; repeat for rows with the same data fields missing until all rows have been calculated. 3) Script the concatenation so that empty fields are skipped, as sketched below.
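    This sketch is illustrative Python, not a Field Calculator expression; the field tuple follows the attribute table above, and treating blank strings as missing is an assumption:

        def hab_type(row, fields=("MEGA_ID", "BOTTOM_ID", "MSO_MCR_ID",
                                  "MDFR_ID", "SLOPE_ID", "COMPLEX_ID",
                                  "DEPTH_ID")):
            # Concatenate the non-empty code fields in attribute-table order,
            # so missing values do not leave spaces in the code string.
            return "".join(str(row[f]).strip() for f in fields
                           if str(row[f]).strip())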
    GEO_UNIT When possible, the geologic unit is determined and listed in the habitat classification table but not in the final code. The geologic unit is determined by a scientist.

    8. Calculated the percentage of area of the polygons for each CODE using the Analysis Tools Summary Statistics tool: select the table field Area with statistic = sum, then select "case field" = CODE. Statistics are calculated separately for each unique attribute value. An equivalent calculation is sketched below.
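    The same summary can be computed outside ArcGIS. A hedged pandas sketch, where the "Area" field name follows step 6 above and the table layout is an assumption:

        import pandas as pd

        def code_area_percent(df):
            # df: polygon attribute table with "CODE" and "Area" fields.
            # Sum area per CODE (cf. Code_sumar), then express each
            # polygon's CODE total as a percent of the grand total.
            code_area = df.groupby("CODE")["Area"].sum()
            return df["CODE"].map(code_area / code_area.sum() * 100.0)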

  3. What similar or related data should the user be aware of?


How reliable are the data; what problems remain in the data set?

  1. How well have the observations been checked?

    Habitat polygons derived in ArcGIS 9.1 from a georeferenced sidescan-sonar mosaic tiff.

  2. How accurate are the geographic locations?

    Highly variable, on the order of 10 meters.

  3. How accurate are the heights or depths?

  4. Where are the gaps in the data? What is missing?

    The classification covers 62% of the total survey area, on its western side. All pixels in this part of the survey area have been classified (59,549 polygons).

  5. How consistent are the relationships among the observations, including topology?

    No additional checks for topological consistency were performed on these data.


How can someone get a copy of the data set?

Are there legal restrictions on access or use of the data?

Access_Constraints: None
Use_Constraints: Not suitable for navigation

  1. Who distributes the data set? (Distributor 1 of 1)

    United States Geological Survey (USGS) Coastal and Marine Geology Program (CMGP)
    c/o Guy R. Cochrane
    Geophysicist
    USGS, 400 Natural Bridges Drive
    Santa Cruz, CA 95060-5792
    USA

    (831) 427-4754 (voice)
    (831) 427-4748 (FAX)
    gcochrane@usgs.gov

  2. What's the catalog number I need to order this data set?

  3. What legal disclaimers am I supposed to read?

    Please recognize the U.S. Geological Survey (USGS) as the source of this information.
    Although these data have been used by the U.S. Geological Survey, U.S. Department of the Interior, no warranty expressed or implied is made by the U.S. Geological Survey as to the accuracy of the data.
    The act of distribution shall not constitute any such warranty, and no responsibility is assumed by the U.S. Geological Survey in the use of this data, software, or related materials.

  4. How can I download or order the data?


Who wrote the metadata?

Dates:
Last modified: 2007
Last Reviewed: 2007
Metadata author:
United States Geological Survey (USGS) Coastal and Marine Geology Program (CMGP)
c/o Guy R. Cochrane
Geophysicist
USGS, 400 Natural Bridges Drive
Santa Cruz, CA 95060-5792
USA

(831) 427-4754 (voice)
(831) 427-4748 (FAX)
gcochrane@usgs.gov

Metadata standard:
FGDC Content Standards for Digital Geospatial Metadata ("CSDGM version 2") (FGDC-STD-001-1998)


Generated by mp version 2.9.3 on Wed Jul 25 15:20:43 2007