Update print statements in the extractor. Prepare metadata files for official publication.

parent ccf72980
......@@ -13,6 +13,6 @@ into the project you can [fork this repository][2] and
[1]: https://github.com/usgs/esturdivant-usgs/BI-geomorph-extraction/issues
[1]: https://code.usgs.gov/cmgp/bi-transect-extractor/issues
[2]: https://help.github.com/articles/fork-a-repo/
[3]: https://help.github.com/articles/about-pull-requests/
# bi-transect-extractor
Author: Emily Sturdivant, U.S. Geological Survey | esturdivant@usgs.gov
### Versions
[Version 1.0.0](https://code.usgs.gov/cmgp/bi-transect-extractor/tree/v1.0.0) was approved for release in June 2019 and assigned a digital object identifier.
## Overview
This package is used to calculate coastal geomorphology variables along shore-normal transects. The calculated variables are used as inputs for modeling geomorphology using a Bayesian Network (BN). The resulting input variables to the Geomorphology BN are described in the table below.
The package is a companion to a USGS methods report entitled "Evaluating barrier island characteristics and piping plover (Charadrius melodus) habitat availability along the U.S. Atlantic coast - geospatial approaches and methodology" (Zeigler and others, in review) and various USGS data releases that have been or will be published (e.g. Gutierrez and others, in review). For more detail, please refer to the report by Zeigler and others.
The package is a companion to a USGS methods report entitled "Evaluating barrier island characteristics and piping plover (Charadrius melodus) habitat availability along the U.S. Atlantic coast - Geospatial approaches and methodology" ([Zeigler and others, 2019][1]) and various USGS data releases that have been or will be published (e.g. [Sturdivant and others, 2019][2]). For more detail, please refer to the methods report ([Zeigler and others, 2019][1]).
| BN variable, point value (5 m) | Format | Definition |
|-------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
......@@ -45,9 +48,9 @@ jupyter notebook
## How to implement:
1. Acquire all input datasets and save them into an Esri file geodatabase.
- National Assessment of Shoreline Change (NASC) transect lines. Long-term shoreline change rates transect file from the NASC ([U.S. Geological Survey Open-File Report 2010-1119](https://pubs.usgs.gov/of/2010/1119/data_catalog.html "U.S. Geological Survey Open-File Report 2010-1119"))
- Lidar-derived beach morphology points. These are published through the USGS National Assessment of Coastal Change Hazards [Beach Morphology (Dune Crest, Dune Toe, and Shoreline) for U.S. Sandy Coastlines] (https://coastal.er.usgs.gov/data-release/doi-F7GF0S0Z/). They need to be separated into shoreline, dune crest, and dune toe points.
- Digital elevation model (DEM). A good source for airborne lidar datasets is [NOAA's Digital Coast](https://coast.noaa.gov/dataviewer/). The lidar dataset should be the same as that used to derive the morphology points.
- National Assessment of Shoreline Change (NASC) transect lines. Long-term shoreline change rates transect file from the NASC. Access the most up-to-date rates from the [Coastal Change Hazards Portal][3] (navigate to Shoreline Change > Long-term shoreline change rates)
- Lidar-derived beach morphology points. These are published through the USGS National Assessment of Coastal Change Hazards [Lidar-derived Beach Morphology (Dune Crest, Dune Toe, and Shoreline) for U.S. Sandy Coastlines](https://coastal.er.usgs.gov/data-release/doi-F7GF0S0Z). They need to be separated into shoreline, dune crest, and dune toe points.
- Digital elevation model (DEM). A good source for airborne lidar datasets is [NOAA's Digital Coast](https://coast.noaa.gov/dataviewer). The lidar dataset should be the same as that used to derive the morphology points.
- boundary polygon <- DEM + shoreline points + inlet lines (+ manual)
- supplemented and sorted transects <- script + **manual**; Sorting is only semi-automated and tricky. See explanations below/in prepper.ipynb.
- 'tidied' extended transects <- script + **manual**
......@@ -66,3 +69,7 @@ jupyter notebook
- notebooks: extractor.ipynb is the Jupyter Notebook used to perform the processing.
- sample_scratch: data frames in pickle format that were saved in the scratch directory during Fire Island extraction to use for testing (see the loading sketch below).
- docs: files for use in the display of the package.
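The pickled data frames in sample_scratch can be loaded with pandas to exercise downstream steps without re-running the extraction. A minimal sketch, assuming a hypothetical file name:

```python
# Minimal sketch: load a pickled data frame saved during the Fire Island run.
# 'trans_df.pkl' is a hypothetical name; substitute whichever pickle is present.
import os
import pandas as pd

sample_dir = 'sample_scratch'
trans_df = pd.read_pickle(os.path.join(sample_dir, 'trans_df.pkl'))
print(trans_df.head())
```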
[1]: https://doi.org/10.3133/ofr20191071
[2]: https://doi.org/10.5066/P944FPA4
[3]: https://marine.usgs.gov/coastalchangehazardsportal
......@@ -4,7 +4,7 @@
"organization": "U.S. Geological Survey",
"description": "Extracts barrier island metrics along transects for Barrier Island Geomorphology Bayesian Network",
"version": "v1.0.0",
"status": "Release Candidate",
"status": "Production",
"permissions": {
"usageType": "openSource",
......@@ -13,8 +13,8 @@
"homepageURL": "https://code.usgs.gov/cmgp/bi-transect-extractor",
"dowloadURL": "https://code.usgs.gov/cmgp/bi-transect-extractor/master.zip",
"disclaimerURL": "https://code.usgs.gov/cmgp/bi-transect-extractor/LICENSE.md",
"repositoryUrl": "https://code.usgs.gov/cmgp/bi-transect-extractor.git",
"disclaimerURL": "https://code.usgs.gov/cmgp/bi-transect-extractor/DISCLAIMER.md",
"repositoryURL": "https://code.usgs.gov/cmgp/bi-transect-extractor.git",
"vcs": "git",
"laborHours": null,
......@@ -41,7 +41,7 @@
},
"date": {
"metadataLastUpdated": "2018-07-30"
"metadataLastUpdated": "2019-06-28"
}
}
]
......@@ -84,7 +84,12 @@ sitemap = {
'code': 'met',
'MHW':0.34, 'MLW':-0.56,
'id_init_val':190000,
'morph_state': 12}
'morph_state': 12},
'CapeHatteras':{'region': 'NorthCarolina', 'site': 'CapeHatteras', # transects extended manually
'code': 'caha',
'MHW':0.26, 'MLW':-0.45,
'id_init_val':400000,
'morph_state': 11},
}
########### Default Values ##########################
......
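The new CapeHatteras entry follows the same per-site pattern as the existing sites. A minimal sketch of how these parameters might be pulled for a run, assuming the sitemap dict above is in scope (the lookup shown here is an assumption; only the `sitevals['id_init_val']` usage appears later in this diff):

```python
# Minimal sketch (assumed usage): select the per-site parameters for a run.
site = 'CapeHatteras'
sitevals = sitemap[site]

print(sitevals['code'])                   # 'caha'
print(sitevals['MHW'], sitevals['MLW'])   # 0.26 -0.45 (site tidal datums, meters)
dd_id_start = sitevals['id_init_val']     # 400000; added to transect IDs to form DD_ID
```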
......@@ -498,14 +498,14 @@ def SpatialSort(in_fc: str, out_fc: str, sort_corner: str='LL',
def SortTransectPrep(spatialref):
"""Prepare to sort transects by conditionally creating a sort_lines FC or setting the sort corner."""
multi_sort = input("Do we need to sort the transects in batches to preserve the order? (y/n) ")
sort_lines = 'sort_lines'
# sort_lines = 'sort_lines'
if multi_sort == 'y':
arcpy.CreateFeatureclass_management(arcpy.env.scratchGDB, sort_lines, "POLYLINE", spatial_reference=spatialref)
sort_lines = arcpy.CreateFeatureclass_management(arcpy.env.scratchGDB, 'sort_lines', "POLYLINE", spatial_reference=spatialref)
arcpy.AddField_management(sort_lines, 'sort', 'SHORT', field_precision=2)
arcpy.AddField_management(sort_lines,'sort_corn', 'TEXT', field_length=2)
arcpy.AddField_management(sort_lines, 'reverse', 'TEXT', field_length=1)
print("MANUALLY: Add features to sort_lines. Indicate the order of use in 'sort', the sort corner in 'sort_corn', and the direction in 'reverse'.")
return(os.path.join(arcpy.env.scratchGDB, sort_lines))
return(sort_lines)
else:
# Corner from which to start sorting, LL = lower left, etc.
sort_corner = input("Sort corner (LL, LR, UL, UR): ")
......
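With this change, SortTransectPrep() returns the result of CreateFeatureclass_management directly instead of re-assembling the scratch-geodatabase path. A minimal usage sketch, assuming the module path behind the notebook's `fwa` alias and an example spatial reference:

```python
# Minimal sketch (assumed usage) of the revised SortTransectPrep().
import arcpy
import core.functions_warcpy as fwa          # module path is an assumption; the notebook uses the alias 'fwa'

spatialref = arcpy.SpatialReference(26918)   # example WKID (NAD83 / UTM zone 18N); an assumption
sort_lines = fwa.SortTransectPrep(spatialref)

# If batch sorting was requested ('y'), sort_lines is the new, empty sort_lines
# feature class in arcpy.env.scratchGDB: add features manually and populate
# 'sort', 'sort_corn', and 'reverse' before the transect-sorting step.
# Otherwise sort_lines holds the chosen sort corner string (e.g. 'LL').
```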
......@@ -29,9 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"import os\n",
......@@ -48,6 +46,25 @@
"import core.functions as fun"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"Date: {}\".format(datetime.date.today()))\n",
"# print(os.__version__)\n",
"# print(sys.__version__)\n",
"print('pandas version: {}'.format(pd.__version__))\n",
"print('numpy version: {}'.format(np.__version__))\n",
"print('matplotlib version: {}'.format(matplotlib.__version__))\n",
"# print(io.__version__)\n",
"# print(arcpy.__version__)\n",
"print('pyproj version: {}'.format(pyproj.__version__))\n",
"\n",
"# print(bi_transect_extractor.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
......@@ -137,7 +154,8 @@
"## Transect-averaged values\n",
"We work with the shapefile/feature class as a pandas DataFrame as much as possible to speed processing and minimize reliance on the ArcGIS GUI display.\n",
"\n",
"1. Create a pandas dataframe from the transects feature class. In the process, we remove some of the unnecessary fields. The resulting dataframe is indexed by __sort_ID__ with columns corresponding to the attribute fields in the transects feature class. \n",
"1. Add the bearing of each transect line to the attribute table from the LINE_BEARING geometry attribute.\n",
"1. Create a pandas dataframe from the transects feature class. In the process, remove some of the unnecessary fields. The resulting dataframe is indexed by __sort_ID__ with columns corresponding to the attribute fields in the transects feature class. \n",
"2. Add __DD_ID__.\n",
"3. Join the values from the transect file that includes the three anthropologic development fields, __Construction__, __Development__, and __Nourishment__. "
]
......@@ -150,9 +168,15 @@
},
"outputs": [],
"source": [
"# Add BEARING field to extendedTrans feature class\n",
"arcpy.AddGeometryAttributes_management (extendedTrans, 'LINE_BEARING')\n",
"print(\"Adding line bearing field to transects.\")\n",
"\n",
"# Copy feature class to dataframe.\n",
"trans_df = fwa.FCtoDF(extendedTrans, id_fld=tID_fld, extra_fields=extra_fields)\n",
"trans_df['DD_ID'] = trans_df[tID_fld] + sitevals['id_init_val']\n",
"trans_df.drop('Azimuth', axis=1, inplace=True)\n",
"trans_df.rename_axis({\"BEARING\": \"Azimuth\"}, axis=1, inplace=True)\n",
"\n",
"# Get anthro fields and join to DF\n",
"if 'tr_w_anthro' in locals():\n",
......@@ -281,7 +305,8 @@
"slpts_df.drop(sl_extra_flds, axis=1, inplace=True)\n",
"csv_fname = os.path.join(scratch_dir, pts_name.split('_')[0] + '_SLpts.csv')\n",
"slpts_df.to_csv(csv_fname, na_rep=fill, index=False)\n",
"print(\"\\nOUTPUT: {} in specified scratch_dir.\".format(os.path.basename(csv_fname)))"
"sz_mb = os.stat(csv_fname).st_size/(1024.0 * 1024.0)\n",
"print(\"\\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.\".format(os.path.basename(csv_fname), sz_mb))"
]
},
{
......@@ -347,7 +372,8 @@
"dlpts_df.drop(dl_extra_flds, axis=1, inplace=True)\n",
"csv_fname = os.path.join(scratch_dir, pts_name.split('_')[0] + '_DTpts.csv')\n",
"dlpts_df.to_csv(csv_fname, na_rep=fill, index=False)\n",
"print(\"\\nOUTPUT: {} in specified scratch_dir.\\n\".format(os.path.basename(csv_fname)))"
"sz_mb = os.stat(csv_fname).st_size/(1024.0 * 1024.0)\n",
"print(\"\\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.\".format(os.path.basename(csv_fname), sz_mb))"
]
},
{
......@@ -379,7 +405,8 @@
"dhpts_df.drop(dh_extra_flds, axis=1, inplace=True)\n",
"csv_fname = os.path.join(scratch_dir, pts_name.split('_')[0] + '_DCpts.csv')\n",
"dhpts_df.to_csv(csv_fname, na_rep=fill, index=False)\n",
"print(\"\\nOUTPUT: {} in specified scratch_dir.\".format(os.path.basename(csv_fname)))"
"sz_mb = os.stat(csv_fname).st_size/(1024.0 * 1024.0)\n",
"print(\"\\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.\".format(os.path.basename(csv_fname), sz_mb))"
]
},
{
......@@ -1058,7 +1085,7 @@
"outputs": [],
"source": [
"trans_4pubdf = fwa.FCtoDF(trans_4pub)\n",
"xmlfile = os.path.join(scratch_dir, trans_df + '_eainfo.xml')\n",
"xmlfile = os.path.join(scratch_dir, trans_4pub + '_eainfo.xml')\n",
"trans_df_extra_flds = fun.report_fc_values(trans_4pubdf, field_defs, xmlfile)"
]
},
......@@ -1155,7 +1182,7 @@
"pts_df4csv.to_csv(csv_fname, na_rep=fill, index=False)\n",
"\n",
"sz_mb = os.stat(csv_fname).st_size/(1024.0 * 1024.0)\n",
"print(\"OUTPUT: {} [{} MB] in specified scratch_dir. \".format(os.path.basename(csv_fname), sz_mb))"
"print(\"\\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.\".format(os.path.basename(csv_fname), sz_mb))"
]
},
{
......
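The updated output cells repeat the same size-reporting pattern after each to_csv() call. A small helper along these lines (hypothetical; not part of the repository) would consolidate it:

```python
import os

def report_output(csv_fname):
    """Report an output CSV's name and size in MB.

    Hypothetical helper; the notebook repeats this pattern inline.
    """
    sz_mb = os.stat(csv_fname).st_size / (1024.0 * 1024.0)
    print("\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.".format(
        os.path.basename(csv_fname), sz_mb))
```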