    % dataRetrieval.Rnw
    %\VignetteIndexEntry{Introduction to the dataRetrieval package}
    
    %\VignetteEngine{knitr::knitr}
    
    %\VignetteDepends{}
    
    %\VignetteSuggests{xtable,EGRET}
    %\VignetteImports{zoo, XML, RCurl}
    %\VignettePackage{dataRetrieval}
    
    
    \documentclass[a4paper,11pt]{article}
    
    \usepackage{amsmath}
    \usepackage{times}
    \usepackage{hyperref}
    \usepackage[numbers, round]{natbib}
    \usepackage[american]{babel}
    \usepackage{authblk}
    
    \usepackage{subfig}
    
    \usepackage{placeins}
    
    \usepackage{footnote}
    \usepackage{tabularx}
    
    \usepackage{threeparttable}
    \usepackage{parskip}
    
    
    \usepackage{csquotes}
    \usepackage{setspace}
    
    
    % \doublespacing
    
    \renewcommand{\topfraction}{0.85}
    \renewcommand{\textfraction}{0.1}
    \usepackage{graphicx}
    
    
    
    \usepackage{mathptmx}% Times Roman font
    \usepackage[scaled=.90]{helvet}% Helvetica, served as a model for arial
    
    
    % \usepackage{indentfirst}
    % \setlength\parindent{20pt}
    
    \setlength{\parskip}{0pt}
    
    \usepackage{courier}
    
    \usepackage{titlesec}
    \usepackage{titletoc}
    
    \titleformat{\section}
      {\normalfont\sffamily\bfseries\LARGE}
      {\thesection}{0.5em}{}
    \titleformat{\subsection}
      {\normalfont\sffamily\bfseries\Large}
      {\thesubsection}{0.5em}{}
    \titleformat{\subsubsection}
      {\normalfont\sffamily\large}
      {\thesubsubsection}{0.5em}{}
      
    \titlecontents{section}
    
    [2em]                 % adjust left margin
    
    {\sffamily}             % font formatting
    {\contentslabel{2.3em}} % section label and offset
    {\hspace*{-2.3em}}
    {\titlerule*[0.25pc]{.}\contentspage}
      
    \titlecontents{subsection}
    [4.6em]                 % adjust left margin
    {\sffamily}             % font formatting
    {\contentslabel{2.3em}} % section label and offset
    {\hspace*{-2.3em}}
    {\titlerule*[0.25pc]{.}\contentspage}
      
    \titlecontents{subsubsection}
    [6.9em]                 % adjust left margin
    {\sffamily}             % font formatting
    {\contentslabel{2.3em}} % section label and offset
    {\hspace*{-2.3em}}
    {\titlerule*[0.25pc]{.}\contentspage}
    
    \titlecontents{table}
    
    [0em]                 % adjust left margin
    
    {\sffamily}             % font formatting
    
    {Table\hspace*{2em} \contentslabel {2em}} % section label and offset
    
    {\hspace*{4em}}
    {\titlerule*[0.25pc]{.}\contentspage}
    
    \titlecontents{figure}
    
    [0em]                 % adjust left margin
    
    {\sffamily}             % font formatting
    
    {Figure\hspace*{2em} \contentslabel {2em}} % section label and offset
    
    {\hspace*{4em}}
    {\titlerule*[0.25pc]{.}\contentspage}
    
    %Italicize and change font of urls:
    \urlstyle{sf}
    \renewcommand\UrlFont\itshape
    
    \usepackage{caption}
    \captionsetup{
      font={sf},
      labelfont={bf,sf},
      labelsep=period,
      justification=justified,
      singlelinecheck=false
    }
    
    
    \textwidth=6.2in
    \textheight=8.5in
    \parskip=.3cm
    \oddsidemargin=.1in
    \evensidemargin=.1in
    \headheight=-.3in
    
    
    %------------------------------------------------------------
    % newcommand
    %------------------------------------------------------------
    \newcommand{\scscst}{\scriptscriptstyle}
    \newcommand{\scst}{\scriptstyle}
    \newcommand{\Robject}[1]{{\texttt{#1}}}
    \newcommand{\Rfunction}[1]{{\texttt{#1}}}
    \newcommand{\Rclass}[1]{\textit{#1}}
    \newcommand{\Rpackage}[1]{\textit{#1}}
    \newcommand{\Rexpression}[1]{\texttt{#1}}
    \newcommand{\Rmethod}[1]{{\texttt{#1}}}
    \newcommand{\Rfunarg}[1]{{\texttt{#1}}}
    
    \begin{document}
    
    
    <<openLibrary, echo=FALSE>>=
    library(xtable)
    options(continue=" ")
    options(width=60)
    library(knitr)
    
    @
    
    
    \renewenvironment{knitrout}{\begin{singlespace}}{\end{singlespace}}
    \renewcommand*\listfigurename{Figures}
    
    \renewcommand*\listtablename{Tables}
    
    
    
    %------------------------------------------------------------
    \title{The dataRetrieval R package}
    %------------------------------------------------------------
    \author[1]{Laura De Cicco}
    \author[1]{Robert Hirsch}
    \affil[1]{United States Geological Survey}
    
    
    
    <<include=TRUE ,echo=FALSE,eval=TRUE>>=
    
    opts_chunk$set(highlight=TRUE, tidy=FALSE, keep.space=TRUE, keep.blank.space=FALSE, keep.comment=TRUE, comment="")
    
    knit_hooks$set(inline = function(x) {
       if (is.numeric(x)) round(x, 3) else x})
    knit_hooks$set(crop = hook_pdfcrop)
    
    bold.colHeaders <- function(x) {
      x <- gsub("\\^(\\d)","$\\^\\1$",x)
      x <- gsub("\\%","\\\\%",x)
      x <- gsub("\\_"," ",x)
      paste("\\multicolumn{1}{c}{\\textbf{\\textsf{", x, "}}}", sep = "")
    }
    
    addSpace <- function(x) ifelse(x != "1", "[5pt]","")
    @
    
    \noindent{\huge\textsf{\textbf{The dataRetrieval R package}}}
    
    \noindent\textsf{By Laura De Cicco and Robert Hirsch}
    
    \noindent\textsf{\today}
    
    % \maketitle
    % 
    % \newpage 
    
    \tableofcontents
    
    \listoffigures
    \listoftables
    
    \newpage
    
    
    %------------------------------------------------------------
    \section{Introduction to dataRetrieval}
    %------------------------------------------------------------ 
    
    The dataRetrieval package was created to simplify the process of loading hydrologic data into the R environment. It has been specifically designed to work seamlessly with the EGRET R package: Exploration and Graphics for RivEr Trends. See \url{https://github.com/USGS-R/EGRET/wiki} for information on EGRET. EGRET is designed to provide analysis of water quality data sets using the Weighted Regressions on Time, Discharge and Season (WRTDS) method, as well as analysis of discharge trends using robust time-series smoothing techniques.  Both capabilities provide tabular and graphical analyses of long-term data sets.
    
    The dataRetrieval package is designed to retrieve many of the major data types of U.S. Geological Survey (USGS) hydrologic data that are available on the Web. Users may also load data from other sources (text files, spreadsheets) using dataRetrieval.  Section \ref{sec:genRetrievals} provides examples of how one can obtain raw data from USGS sources on the Web and load them into dataframes within the R environment.  The functionality described in section \ref{sec:genRetrievals} is for general use and is not tailored for the specific uses of the EGRET package.  The functionality described in section \ref{sec:EGRETdfs} is tailored specifically to obtaining input from the Web and structuring it for use in the EGRET package.  The functionality described in section \ref{sec:userFiles} is for converting hydrologic data from user-supplied files and structuring it specifically for use in the EGRET package.
    
    For information on getting started in R and installing the package, see Getting Started (\ref{sec:appendix1}). Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
    
    A quick workflow for major dataRetrieval functions:
    
    
    <<workflow, echo=TRUE,eval=FALSE>>=
    
    library(dataRetrieval)
    
    # Choptank River near Greensboro, MD
    
    siteNumber <- "01491000" 
    
    ChoptankInfo <- getNWISSiteInfo(siteNumber)
    
    parameterCd <- "00060"
    
    #Raw daily data:
    
    rawDailyData <- getNWISdvData(siteNumber,parameterCd,
    
                          "1980-01-01","2010-01-01")
    
    # Data compiled for EGRET analysis
    
    Daily <- getNWISDaily(siteNumber,parameterCd,
    
                          "1980-01-01","2010-01-01")
    
    
    # Sample data Nitrate:
    parameterCd <- "00618"
    
    Sample <- getNWISSample(siteNumber,parameterCd,
    
                          "1980-01-01","2010-01-01")
    
    # Metadata on site and nitrate:
    
    INFO <- getNWISInfo(siteNumber,parameterCd)
    
    # Merge discharge and nitrate data to one dataframe:
    Sample <- mergeReport()
    @
    
    %------------------------------------------------------------
    
    \section{USGS Web Retrievals}
    
    \label{sec:genRetrievals}
    
    %------------------------------------------------------------ 
    
    In this section, five examples of Web retrievals show how to get raw data. These data include site information (\ref{sec:usgsSite}), measured parameter information (\ref{sec:usgsParams}), historical daily values (\ref{sec:usgsDaily}), unit values (which include real-time data but can also include other sensor data stored at regular time intervals) (\ref{sec:usgsRT}), and water quality data (\ref{sec:usgsWQP}) or (\ref{sec:usgsSTORET}). We will use the Choptank River near Greensboro, MD as an example.  Daily discharge measurements are available as far back as 1948.  Additionally, nitrate has been measured since 1964.
    
    % %------------------------------------------------------------
    % \subsection{Introduction}
    % %------------------------------------------------------------
    
    The USGS organizes hydrologic data in a standard structure.  Streamgages are located throughout the United States, and each streamgage has a unique ID (referred to in this document and throughout the dataRetrieval package as \enquote{siteNumber}).  Often (but not always), these IDs are 8 digits.  The first step to finding data is discovering this siteNumber. There are many ways to do this; one is the National Water Information System: Mapper \url{http://maps.waterdata.usgs.gov/mapper/index.html}.
    
    Once the siteNumber is known, the next required input for USGS data retrievals is the \enquote{parameter code}.  This is a 5-digit code that specifies the measured parameter being requested.  For example, parameter code 00631 represents \enquote{Nitrate plus nitrite, water, filtered, milligrams per liter as nitrogen}, with units of \enquote{mg/l as N}. A complete list of possible USGS parameter codes can be found at \url{http://nwis.waterdata.usgs.gov/usa/nwis/pmcodes?help}.
    
    
    Not every station will measure all parameters. A short list of commonly measured parameters is shown in Table \ref{tab:params}.
    
    
    
    <<tableParameterCodes, echo=FALSE,results='asis'>>=
    
    pCode <- c('00060', '00065', '00010','00045','00400')
    
    shortName <- c("Discharge [ft$^3$/s]","Gage height [ft]","Temperature [C]", "Precipitation [in]", "pH")
    
    
    data.df <- data.frame(pCode, shortName, stringsAsFactors=FALSE)
    
    
    print(xtable(data.df,
           label="tab:params",
           caption="Common USGS Parameter Codes"),
           caption.placement="top",
           size = "\\footnotesize",
           latex.environment=NULL,
    
           sanitize.text.function = function(x) {x},
    
           sanitize.colnames.function =  bold.colHeaders,
           sanitize.rownames.function = addSpace
          )
    @
    
    A complete list (as of September 25, 2013) is available as data attached to the package. It is accessed by the following:
    
    
    <<tableParameterCodesDataRetrieval>>=
    library(dataRetrieval)
    parameterCdFile <-  parameterCdFile
    names(parameterCdFile)
    @
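
    For example, the entry for a single parameter code can be pulled from this table. The following is a sketch; it assumes the data frame includes a \texttt{parameter\_cd} column (one of the names listed above):

    <<parameterCdLookup, eval=FALSE>>=
    # Look up the entry for discharge (00060) in the attached table.
    # The column name parameter_cd is an assumption:
    parameterCdFile[parameterCdFile$parameter_cd == "00060",]
    @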
    
    
    
    For unit values data (sensor data measured at regular time intervals such as 15 minutes or hourly), knowing the parameter code and siteNumber is enough to make a request for data.  For most variables that are measured on a continuous basis, the USGS also stores the historical data as daily values.  These daily values are statistical summaries of the continuous data, e.g. maximum, minimum, mean, or median. The different statistics are specified by a 5-digit statistics code.  A complete list of statistic codes can be found here:
    
    
    \url{http://nwis.waterdata.usgs.gov/nwis/help/?read_file=stat&format=table}
    
    
    Some common codes are shown in Table \ref{tab:stat}.
    
    <<tableStatCodes, echo=FALSE,results='asis'>>=
    
    StatCode <- c('00001', '00002', '00003','00008')
    shortName <- c("Maximum","Minimum","Mean", "Median")
    
    data.df <- data.frame(StatCode, shortName, stringsAsFactors=FALSE)
    
    
    print(xtable(data.df,label="tab:stat",
               caption="Commonly used USGS Stat Codes"),
           caption.placement="top",
           size = "\\footnotesize",
           latex.environment=NULL,
           sanitize.colnames.function = bold.colHeaders,
           sanitize.rownames.function = addSpace
          )
    @
    
    Examples for using these siteNumbers, parameter codes, and stat codes will be presented in subsequent sections.
    
    
    
    %------------------------------------------------------------
    \subsection{Site Information}
    \label{sec:usgsSite}
    %------------------------------------------------------------
    
    %------------------------------------------------------------
    
    \subsubsection{getNWISSiteInfo}
    
    \label{sec:usgsSiteFileData}
    %------------------------------------------------------------
    
    Use the \texttt{getNWISSiteInfo} function to obtain all of the information available for a particular USGS site such as full station name, drainage area, latitude, and longitude. \texttt{getNWISSiteInfo} can also access information about multiple sites with a vector input.
    
    <<getSite, echo=TRUE>>=
    
    siteNumbers <- c("01491000","01645000") 
    
    siteINFO <- getNWISSiteInfo(siteNumbers)
    @
    
    A specific piece of information, in this case the station name, can be retrieved as follows:
    
    <<siteNames2, echo=TRUE>>=
    
    siteINFO$station.nm
    
    @
    Site information is obtained from \url{http://waterservices.usgs.gov/rest/Site-Test-Tool.html}.
    \FloatBarrier
    
    %------------------------------------------------------------
    
    \subsubsection{getNWISDataAvailability}
    
    \label{sec:usgsDataAvailability}
    %------------------------------------------------------------
    
    To discover what data is available for a particular USGS site, including measured parameters, period of record, and number of samples (count), use the \texttt{getNWISDataAvailability} function. It is possible to limit the retrieval information to a subset of types (\texttt{"}dv\texttt{"}, \texttt{"}uv\texttt{"}, or \texttt{"}qw\texttt{"}). In the following example, we limit the retrieved Choptank data to only daily data. Leaving the \texttt{"}type\texttt{"} argument blank returns all of the available data for that site.
    
    <<getSiteExtended, echo=TRUE>>=
    
    # Continuing from the previous example:
    # This pulls out just the daily data:
    
    dailyDataAvailable <- getNWISDataAvailability(siteNumbers,
    
                        type="dv")
    
    <<tablegda, echo=FALSE,results='asis'>>=
    
    tableData <- with(dailyDataAvailable, 
          data.frame( 
          siteNumber= site_no,
          srsname=srsname, 
    
          startDate=as.character(startDate), 
          endDate=as.character(endDate), 
          count=as.character(count),
    
          units=parameter_units,
    
          statCd = statCd,
    
          stringsAsFactors=FALSE))
    
    tableData$units[which(tableData$units == "ft3/s")] <- "ft$^3$/s"
    
    tableData$units[which(tableData$units == "uS/cm @25C")] <- "$\\mu$S/cm @25C"
    
    
    print(xtable(tableData,label="tab:gda",
    
        caption="Daily mean data availabile at the Choptank River near Greensboro, MD. [Some columns deleted for space considerations]"),
    
           caption.placement="top",
           size = "\\footnotesize",
           latex.environment=NULL,
    
           sanitize.text.function = function(x) {x},
    
           sanitize.colnames.function =  bold.colHeaders,
           sanitize.rownames.function = addSpace
          )
    @
    
    See Section \ref{app:createWordTable} for instructions on converting an R dataframe to a table in Microsoft\textregistered\ Excel or Word to display a data availability table similar to Table \ref{tab:gda}. Excel, Microsoft, PowerPoint, Windows, and Word are registered trademarks of Microsoft Corporation in the United States and other countries.
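
    As a quick alternative, the retrieved dataframe can also be written to a tab-delimited text file that Excel or Word can open. This is a minimal sketch; the file name is arbitrary:

    <<writeAvailability, eval=FALSE>>=
    # Write the availability dataframe to a tab-delimited file
    # in the current working directory:
    write.table(dailyDataAvailable, file="dataAvailability.tsv",
                sep="\t", row.names=FALSE, quote=FALSE)
    @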
    
    %------------------------------------------------------------
    \subsection{Parameter Information}
    \label{sec:usgsParams}
    %------------------------------------------------------------
    
    To obtain all of the available information concerning a measured parameter (or multiple parameters), use the \texttt{getNWISPcodeInfo} function:
    
    <<label=getPCodeInfo, echo=TRUE>>=
    
    # Using defaults:
    parameterCd <- "00618" 
    
    parameterINFO <- getNWISPcodeInfo(parameterCd)
    
    colnames(parameterINFO)
    @
    
    
    A specific piece of information, in this case the parameter name, can be obtained as follows:
    
    
    <<siteNames, echo=TRUE>>=
    parameterINFO$parameter_nm
    @
    
    Parameter information can be obtained from \url{http://nwis.waterdata.usgs.gov/usa/nwis/pmcodes}.
    
    \FloatBarrier
    %------------------------------------------------------------
    \subsection{Daily Values}
    \label{sec:usgsDaily}
    %------------------------------------------------------------
    
    To obtain daily records of USGS data, use the \texttt{getNWISdvData} function. The arguments for this function are siteNumber, parameterCd, startDate, endDate, statCd, and a logical (TRUE/FALSE) interactive. There are two default arguments: statCd (defaults to \texttt{"}00003\texttt{"}) and interactive (defaults to TRUE).  If you want to use the default values, you do not need to list them in the function call. Setting \texttt{"}interactive\texttt{"} to FALSE allows the function to run without prompting the user, which makes more sense for large batch retrievals.
    
    The dates (start and end) must be in the format \texttt{"}YYYY-MM-DD\texttt{"} (note: the user must include the quotes).  Setting the start date to \texttt{"}\texttt{"} (no space) will prompt the program to ask for the earliest date, and setting the end date to \texttt{"}\texttt{"} (no space) will prompt for the latest available date.
    
    <<label=getNWISDaily, echo=TRUE, eval=TRUE>>=
    
    
    # Continuing with our Choptank River example
    
    siteNumber <- "01491000"
    parameterCd <- "00060"  # Discharge
    
    startDate <- ""  # Will request earliest date
    endDate <- "" # Will request latest date
    
    
    discharge <- getNWISdvData(siteNumber, 
    
                        parameterCd, startDate, endDate)
    
    names(discharge)
    @
    
    The column \texttt{"}datetime\texttt{"} in the returned dataframe is automatically imported as a variable of class \texttt{"}Date\texttt{"} in R. Each requested parameter has a value and remark code column.  The names of these columns depend on the requested parameter and stat code combinations. USGS remark codes are often \texttt{"}A\texttt{"} (approved for publication) or \texttt{"}P\texttt{"} (provisional data subject to revision). A more complete list of remark codes can be found here:
    
    \url{http://nwis.waterdata.usgs.gov/usa/nwis/pmcodes}
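
    As a quick quality check, the remark codes from the retrieval above can be tabulated. This is a sketch; the column name depends on the requested parameter and stat codes, and \texttt{X\_00060\_00003\_cd} is an assumption based on the default naming for parameter 00060 and stat code 00003:

    <<remarkCodeCounts, eval=FALSE>>=
    # Tally remark codes (e.g. "A" approved, "P" provisional).
    # The column name is assumed from the default naming scheme:
    table(discharge$X_00060_00003_cd)
    @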
    
    
    Another example that doesn't use the defaults would be a request for mean and maximum daily temperature and discharge in early 2012:
    <<label=getNWIStemperature, echo=TRUE>>=
    
    parameterCd <- c("00010","00060")  # Temperature and discharge
    statCd <- c("00001","00003")  # Mean and maximum
    startDate <- "2012-01-01"
    
    endDate <- "2012-05-01"
    
    temperatureAndFlow <- getNWISdvData(siteNumber, parameterCd, 
    
            startDate, endDate, statCd=statCd)
    @
    
    Daily data is pulled from \url{http://waterservices.usgs.gov/rest/DV-Test-Tool.html}.
    
    The column names can be automatically adjusted based on the parameter and statistic codes using the \texttt{renameColumns} function. This is not necessary, but may be useful when analyzing the data. 
    
    
    <<label=renameColumns, echo=TRUE>>=
    names(temperatureAndFlow)
    
    temperatureAndFlow <- renameColumns(temperatureAndFlow)
    names(temperatureAndFlow)
    @
    
    
    An example of plotting the above data (Figure \ref{fig:getNWIStemperaturePlot}):
    
    <<getNWIStemperaturePlot, echo=TRUE, fig.cap="Temperature and discharge plot of Choptank River in 2012.",out.width='1\\linewidth',out.height='1\\linewidth',fig.show='hold'>>=
    
    par(mar=c(5,5,5,5)) # sets plot margins
    
    with(temperatureAndFlow, plot(
    
      datetime, Temperature_water_degrees_Celsius_Max_01,
    
      xlab="Date",ylab="Max Temperature [C]"
    
      ))
    par(new=TRUE)
    with(temperatureAndFlow, plot(
    
      datetime, Discharge_cubic_feet_per_second,
    
      col="red",type="l",xaxt="n",yaxt="n",xlab="",ylab="",axes=FALSE
      ))
    axis(4,col="red",col.axis="red")
    
    mtext(expression(paste("Mean Discharge [ft"^"3","/s]",
                           sep="")),side=4,line=3,col="red")
    title(paste(siteINFO$station.nm[1],"2012",sep=" "))
    
    legend("topleft", c("Max Temperature", "Mean Discharge"), 
           col=c("black","red"),lty=c(NA,1),pch=c(1,NA))
    
    There are occasions where NWIS values are not reported as numbers; instead, there might be text describing a certain event, such as \enquote{Ice.}  Any value that cannot be converted to a number will be reported as NA in this package (not including remark code columns).
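
    One way to check for such values is to count the NAs in a value column after retrieval. A sketch, using the renamed discharge column from the example above:

    <<countNAValues, eval=FALSE>>=
    # Count values that could not be converted to numbers:
    sum(is.na(temperatureAndFlow$Discharge_cubic_feet_per_second))
    @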
    
    %------------------------------------------------------------
    \subsection{Unit Values}
    \label{sec:usgsRT}
    %------------------------------------------------------------
    
    Any data collected at regular time intervals (such as 15-minute or hourly) are known as \enquote{unit values.} Many of these are delivered on a real time basis and very recent data (even less than an hour old in many cases) are available through the function \texttt{getNWISunitData}.  Some of these unit values are available for many years, and some are only available for a recent time period such as 120 days.  Here is an example of a retrieval of such data.  
    
    
    <<label=getNWISUnit, echo=TRUE>>=
    
    
    parameterCd <- "00060"  # Discharge
    
    startDate <- "2012-05-12" 
    endDate <- "2012-05-13" 
    
    dischargeToday <- getNWISunitData(siteNumber, parameterCd, 
    
            startDate, endDate)
    @
    
    
    The retrieval produces the following dataframe:
    
    
    <<dischargeData, echo=FALSE>>=
    head(dischargeToday)
    @
    
    
    Note that time now becomes important, so the variable datetime is a POSIXct, and the time zone is included in a separate column. Data are retrieved from \url{http://waterservices.usgs.gov/rest/IV-Test-Tool.html}. There are occasions where NWIS values are not reported as numbers; a common example is \enquote{Ice.}  Any value that cannot be converted to a number will be reported as NA in this package.
    
    %------------------------------------------------------------
    \subsection{Water Quality Values}
    \label{sec:usgsWQP}
    %------------------------------------------------------------
    
    To get USGS water quality data from water samples collected at the streamgage or other monitoring site (as distinct from unit values collected through some type of automatic monitor), we can use the function \texttt{getNWISqwData}, with the input arguments: siteNumber, parameterCd, startDate, endDate, and interactive (similar to \texttt{getNWISunitData} and \texttt{getNWISdvData}). Additionally, the argument \texttt{"}expanded\texttt{"} is a logical input that allows the user to choose between a simple return of datetimes/qualifier/values (expanded=FALSE) or a more complete and verbose output (expanded=TRUE). Setting expanded=TRUE includes such columns as remark codes, value qualifying text, and detection level.
    
    
    
    <<label=getQW, echo=TRUE>>=
     
    # Dissolved Nitrate parameter codes:
    parameterCd <- c("00618","71851")
    
    startDate <- "1985-10-01"
    endDate <- "2012-09-30"
    
    dissolvedNitrate <- getNWISqwData(siteNumber, parameterCd, 
    
          startDate, endDate, expanded=TRUE)
    
    names(dissolvedNitrate)
    @
    
    <<getQWtemperaturePlot, echo=TRUE, fig.cap=paste(parameterINFO$parameter_nm, "at", siteINFO$station.nm[1])>>=
    
    with(dissolvedNitrate, plot(
    
      dateTime, result_va_00618,
    
      xlab="Date",ylab = paste(parameterINFO$srsname,
          "[",parameterINFO$parameter_units,"]")
      ))
    
    title(siteINFO$station.nm[1])
    @
    
    %------------------------------------------------------------
    \subsection{URL Construction}
    \label{sec:usgsURL}
    %------------------------------------------------------------
    
    There may be times when you might be interested in seeing the URL (Web address) that was used to obtain the raw data. The \texttt{constructNWISURL} function returns the URL.  In addition to input variables that have been described, there is a new argument \texttt{"}service\texttt{"}. The service argument can be \texttt{"}dv\texttt{"} (daily values), \texttt{"}uv\texttt{"} (unit values), \texttt{"}qw\texttt{"} (NWIS water quality values), or \texttt{"}wqp\texttt{"} (general Water Quality Portal values).
    
     
    
    <<label=geturl, echo=TRUE, eval=FALSE>>=
    # Dissolved Nitrate parameter codes:
    pCode <- c("00618","71851")
    startDate <- "1964-06-11"
    endDate <- "2012-12-18"
    url_qw <- constructNWISURL(siteNumber,pCode,startDate,endDate,'qw')
    
    url_dv <- constructNWISURL(siteNumber,"00060",startDate,endDate,
                               'dv',statCd="00003")
    
    url_uv <- constructNWISURL(siteNumber,"00060",startDate,endDate,'uv')
    @
    
    
    
    
    %------------------------------------------------------------
    \section{Water Quality Portal Web Retrievals}
    \label{sec:usgsSTORET}
    %------------------------------------------------------------
    
    There are additional water quality data sets available from the Water Quality Data Portal (\url{http://www.waterqualitydata.us/}).  These data sets can be housed in the STORET database (data from EPA), the NWIS database (data from USGS), or the STEWARDS database (data from USDA); additional databases are slated to be included.  Because only USGS uses parameter codes, a \texttt{"}characteristic name\texttt{"} must be supplied.  The \texttt{getWQPqwData} function can take either a USGS parameter code or a more general characteristic name in the parameterCd input argument. The Water Quality Data Portal includes data discovery tools and information on characteristic names. The following example retrieves specific conductance from a DNR site in Wisconsin.
    
    
    
    <<label=getQWData, echo=TRUE, eval=FALSE>>=
    
    specificCond <- getWQPqwData('WIDNR_WQX-10032762',
    
                    'Specific conductance','2011-05-01','2011-09-30')
    @
    
    
    
    \FloatBarrier
    
    %------------------------------------------------------------
    
    \section{Generalized Retrievals}
    
    \label{sec:general}
    %------------------------------------------------------------
    The previous examples all took specific input arguments: siteNumber, parameterCd (or characteristic name), startDate, endDate, etc. However, the Web services that supply the data can accept a wide variety of additional arguments. 
    
    %------------------------------------------------------------
    
    \subsubsection{NWIS sites}
    
    \label{sec:NWISGenSite}
    %------------------------------------------------------------
    The function \texttt{getNWISSites} can be used to discover NWIS sites based on any query that the NWIS Site Service offers. This is done by using the \texttt{"..."} argument, which allows the user to pass any arbitrary query argument. We can then use the service here:
    
    \url{http://waterservices.usgs.gov/rest/Site-Test-Tool.html}
    
    to discover many options for searching for NWIS sites. For example, you may want to search for sites in a lat/lon bounding box, only sites on tidal streams, sites with water quality samples, sites above a certain altitude, etc. The results of this site query generate a URL. For example, the tool can provide a search within a specified bounding box for sites that have daily discharge (parameter code = 00060) and temperature (parameter code = 00010). The generated URL is:
    
    \url{http://waterservices.usgs.gov/nwis/site/?format=rdb&bBox=-83.0,36.5,-81.0,38.5&parameterCd=00010,00060&hasDataTypeCd=dv}
    
    The following dataRetrieval code can be used to get those sites:
    
    <<siteSearch>>=
    sites <- getNWISSites(bBox="-83.0,36.5,-81.0,38.5", 
                          parameterCd="00010,00060",
                          hasDataTypeCd="dv")
    
    names(sites)
    nrow(sites)
    @
    
    
    %------------------------------------------------------------
    
    \subsubsection{NWIS data}
    
    \label{sec:NWISGenData}
    %------------------------------------------------------------
For NWIS data, the function \texttt{getNWISData} can be used. The arguments listed in the R help file are \texttt{"..."} and \texttt{"}service\texttt{"} (only for data requests). Table \ref{tab:NWISGeneral} describes the services that are available.
    
    \begin{table}[!ht]
    \begin{minipage}{\linewidth}
    {\footnotesize
    \caption{NWIS general data calls} 
    \label{tab:NWISGeneral}
    \begin{tabular}{lll}
      \hline
    \multicolumn{1}{c}{\textbf{\textsf{Service}}} &
    \multicolumn{1}{c}{\textbf{\textsf{Description}}}  &
    \multicolumn{1}{c}{\textbf{\textsf{Reference URL}}} \\  [0pt]
      \hline
  dv & daily values & \url{http://waterservices.usgs.gov/rest/DV-Test-Tool.html}\\
  [5pt]iv & instantaneous & \url{http://waterservices.usgs.gov/rest/IV-Test-Tool.html}\\
  [5pt]gwlevels & groundwater levels & \url{http://waterservices.usgs.gov/rest/GW-Levels-Test-Tool.html}\\
  [5pt]qwdata & water quality & \url{http://nwis.waterdata.usgs.gov/nwis/qwdata}\\
       \hline
    \end{tabular}
    }
    \end{minipage}
    \end{table}
    
The \texttt{"..."} argument allows the user to create their own queries based on the instructions found in the web links above. The links provide instructions on how to create a URL to request data. Perhaps you want sites only in Wisconsin, with a drainage area of at least 50 mi$^2$, and the most recent daily discharge data. That request would be done as follows:
    
    <<dataExample>>=
    dischargeWI <- getNWISData(stateCd="WI",
                               parameterCd="00060",
                               drainAreaMin="50",
                               statCd="00003")
    names(dischargeWI)
    nrow(dischargeWI)
    @
    
    %------------------------------------------------------------
    
    \subsubsection{Water Quality Portal sites}
    
    \label{sec:WQPGenSite}
    %------------------------------------------------------------
    
Just as with NWIS, the Water Quality Portal (WQP) offers a variety of ways to search for sites and request data. The possible Web service arguments for WQP site searches are found here:
    
    \url{http://www.waterqualitydata.us/webservices_documentation.jsp}
    
    To discover available sites in the WQP in New Jersey that have measured Chloride, use the function \texttt{getWQPSites}.
    
    <<NJChloride, eval=FALSE>>=
    
    sitesNJ <- getWQPSites(statecode="US:34",
                           characteristicName="Chloride")
    
    @
    
    
    %------------------------------------------------------------
    
    \subsubsection{Water Quality Portal data}
    
    \label{sec:WQPGenData}
    %------------------------------------------------------------
    Finally, to get data from the WQP using generalized Web service calls, use the function \texttt{getWQPData}. For example, to get all the pH data in Wisconsin:
    
    <<phData, eval=FALSE>>=
    
    dataPH <- getWQPData(statecode="US:55", 
                     characteristicName="pH")
    
    @
    
    
    
    
    %------------------------------------------------------------
    \section{Data Retrievals Structured For Use In The EGRET Package}
    
    \label{sec:EGRETdfs}
    
    %------------------------------------------------------------ 
    
    Rather than using the raw data as retrieved by the Web, the dataRetrieval package also includes functions that return the data in a structure that has been designed to work with the EGRET R package (\url{https://github.com/USGS-R/EGRET/wiki}). In general, these dataframes may be much more \enquote{R-friendly} than the raw data, and will contain additional date information that allows for efficient data analysis.
    
    In this section, we use 3 dataRetrieval functions to get sufficient data to perform an EGRET analysis.  We will continue analyzing the Choptank River. We retrieve essentially the same data that were retrieved in section \ref{sec:genRetrievals}, but in this case the data are structured into three EGRET-specific dataframes.  The daily discharge data are placed in a dataframe called Daily.  The nitrate sample data are placed in a dataframe called Sample.  The data about the site and the parameter are placed in a dataframe called INFO.  Although these dataframes were designed to work with the EGRET R package, they can be very useful for a wide range of hydrology studies that don't use EGRET.
    
    
    %------------------------------------------------------------
    \subsection{INFO Data}
    
    \label{INFOsubsection}
    
    %------------------------------------------------------------
    
    
The \texttt{getNWISInfo}, \texttt{getWQPInfo}, and \texttt{getUserInfo} functions obtain metadata, or data about the streamgage and measured parameters. Any number of columns can be included in this dataframe. Table \ref{tab:INFOtable} describes the fields that are required for EGRET functions. 
    
    \begin{table}[!ht]
    \begin{minipage}{\linewidth}
    {\footnotesize
    \caption{INFO columns required in EGRET functions} 
    \label{tab:INFOtable}
    \begin{tabular}{lll}
      \hline
    \multicolumn{1}{c}{\textbf{\textsf{Column Name}}} &
    \multicolumn{1}{c}{\textbf{\textsf{Type}}} &
    \multicolumn{1}{c}{\textbf{\textsf{Description}}} \\  [0pt]
      \hline
  constitAbbrev & string & Constituent abbreviation, used for saving the workspace in EGRET\\
      [5pt] drainSqKm & numeric & Drainage area in square kilometers \\
      [5pt] paramShortName & string & Parameter name to use on graphs \\
      [5pt] param.units & string & Parameter units \\
      [5pt] shortName & string & Station name to use on graphs\\
      [5pt] staAbbrev & string & Station Abbreviation \\
       \hline
    \end{tabular}
    }
    \end{minipage}
    \end{table}
    
    The function \texttt{getNWISInfo} combines \texttt{getNWISSiteInfo} and \texttt{getNWISPcodeInfo}, producing one dataframe called INFO.
    
    
    <<ThirdExample>>=
    parameterCd <- "00618"
    
    INFO <- getNWISInfo(siteNumber,parameterCd, interactive=FALSE)
    @
    
    It is also possible to create the INFO dataframe using information from the Water Quality Portal:
    
    <<WQPInfo, eval=FALSE>>=
    parameterCd <- "00618"
    INFO_WQP <- getWQPInfo("USGS-01491000",parameterCd)
    @
    
    Finally, the function \texttt{getUserInfo} can be used to convert comma separated files into an INFO dataframe. 
    
    Any supplemental column that would be useful can be added to the INFO dataframe. 
    
    <<addInfo, eval=TRUE, echo=TRUE>>=
    
    INFO$riverInfo <- "Major tributary of the Chesapeake Bay"
    INFO$GreensboroPopulation <- 1931
@
    
    
    %------------------------------------------------------------
    \subsection{Daily Data}
    
    \label{Dailysubsection}
    
    %------------------------------------------------------------
    
The \texttt{getNWISDaily} function retrieves the daily values (discharge in this case).  It requires the inputs siteNumber, parameterCd, startDate, endDate, interactive, and convert. Most of these arguments are described in section \ref{sec:genRetrievals}; however, \texttt{"}convert\texttt{"} is a new argument (that defaults to TRUE). The convert argument tells the program to convert the values from cubic feet per second (ft\textsuperscript{3}/s) to cubic meters per second (m\textsuperscript{3}/s), as shown in the example Daily data frame in Table \ref{tab:DailyDF1}. For EGRET applications with NWIS Web retrieval, leave this argument at its default of TRUE, because EGRET assumes that discharge is always stored in units of cubic meters per second. If you don't want this conversion and are not using EGRET, set convert=FALSE in the function call. 
    
    
    <<firstExample>>=
    siteNumber <- "01491000"
    startDate <- "2000-01-01"
    endDate <- "2013-01-01"
    
# This call will get NWIS (ft3/s) data, and convert it to m3/s:
Daily <- getNWISDaily(siteNumber, "00060", startDate, endDate)
@

<<colNamesDaily, echo=FALSE,results='asis'>>=
    
    ColumnName <- c("Date", "Q", "Julian","Month","Day","DecYear","MonthSeq","Qualifier","i","LogQ","Q7","Q30")
    Type <- c("Date", "number", "number","integer","integer","number","integer","string","integer","number","number","number")
    
    Description <- c("Date", "Discharge in m$^3$/s", "Number of days since January 1, 1850", "Month of the year [1-12]", "Day of the year [1-366]", "Decimal year", "Number of months since January 1, 1850", "Qualifying code", "Index of days, starting with 1", "Natural logarithm of Q", "7 day running average of Q", "30 day running average of Q")
    
    Units <- c("date", "m$^3$/s","days", "months","days","years","months", "character","days","numeric","m$^3$/s","m$^3$/s")
    
    
    DF <- data.frame(ColumnName,Type,Description,Units)
    
    
    print(xtable(DF, caption="Daily dataframe",label="tab:DailyDF1"),
           caption.placement="top",
           size = "\\footnotesize",
           latex.environment=NULL,
           sanitize.text.function = function(x) {x},
           sanitize.colnames.function =  bold.colHeaders,
           sanitize.rownames.function = addSpace
           
          )
@
    
    If discharge values are negative or zero, the code will set all of these values to zero and then add a small constant to all of the daily discharge values.  This constant is 0.001 times the mean discharge.  The code will also report on the number of zero and negative values and the size of the constant.  Use EGRET analysis only if the number of zero values is a very small fraction of the total days in the record (say less than 0.1\% of the days), and there are no negative discharge values.  Columns Q7 and Q30 are the 7 and 30 day running averages for the 7 or 30 days ending on this specific date. Table \ref{tab:DailyDF1} lists details of the Daily data frame.
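The adjustment described above can be sketched in a few lines of R. This is only an illustration of the rule, not the package's internal code, and it assumes a populated Daily dataframe:

<<zeroAdjustSketch, eval=FALSE>>=
# Illustration only: mimic the handling of zero/negative discharge
Qmean <- mean(Daily$Q, na.rm=TRUE)   # mean of the original record
Daily$Q[Daily$Q <= 0] <- 0           # negative values set to zero
Daily$Q <- Daily$Q + 0.001*Qmean     # small constant added to all days
@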
    
    Notice that the \enquote{Day of the year} column can span from 1 to 366. The 366 accounts for leap years. Every day has a consistent day of the year. This means, February 28\textsuperscript{th} is always the 59\textsuperscript{th} day of the year, Feb. 29\textsuperscript{th} is always the 60\textsuperscript{th} day of the year, and March 1\textsuperscript{st} is always the 61\textsuperscript{st} day of the year whether or not it is a leap year.
    
User-generated Daily dataframes can also be created using the \texttt{getUserDaily} function. This is discussed in detail in section \ref{sec:DailyFile}.
    
    
    %------------------------------------------------------------
    \subsection{Sample Data}
    
    \label{Samplesubsection}
    
    %------------------------------------------------------------
    
    The \texttt{getNWISSample} function retrieves USGS sample data from NWIS. The arguments for this function are also siteNumber, parameterCd, startDate, endDate, interactive. These are the same inputs as \texttt{getWQPqwData} or \texttt{getWQPData} as described in the previous section.
    
<<getNWISSample, eval = FALSE>>=
parameterCd <- "00618"
Sample <- getNWISSample(siteNumber, parameterCd,
                        startDate, endDate)
@

The \texttt{getWQPSample} function retrieves Water Quality Portal sample data (STORET, NWIS, STEWARDS). The arguments for this function are siteNumber, characteristicName, startDate, endDate, interactive. Table \ref{tab:SampleDataframe} lists details of the Sample data frame. 
    
    
    <<STORET,echo=TRUE,eval=FALSE>>=
    site <- 'WIDNR_WQX-10032762'
    characteristicName <- 'Specific conductance'
    
Sample <- getWQPSample(site, characteristicName,
                       '2011-05-01', '2011-09-30')
@

User-generated Sample dataframes can also be created using the \texttt{getUserSample} function. This is discussed in detail in section \ref{sec:SampleFile}.
    
    \pagebreak
    
    \begin{table}
    
    {\footnotesize
    
      \begin{threeparttable}[b]
      \caption{Sample dataframe}
      \label{tab:SampleDataframe}
  \begin{tabular}{llll}
  \hline
\multicolumn{1}{c}{\textbf{\textsf{ColumnName}}} & 
\multicolumn{1}{c}{\textbf{\textsf{Type}}} & 
\multicolumn{1}{c}{\textbf{\textsf{Description}}} & 
\multicolumn{1}{c}{\textbf{\textsf{Units}}} \\ 
  \hline
      Date & Date & Date & date \\ 
      [5pt]ConcLow & number & Lower limit of concentration & mg/L \\ 
      [5pt]ConcHigh & number & Upper limit of concentration & mg/L \\ 
      [5pt]Uncen & integer & Uncensored data (1=true, 0=false) & integer \\ 
      [5pt]ConcAve & number & Average of ConcLow and ConcHigh & mg/L \\ 
      [5pt]Julian & number & Number of days since January 1, 1850 & days \\ 
      [5pt]Month & integer & Month of the year [1-12] & months \\ 
      [5pt]Day & integer & Day of the year [1-366] & days \\ 
      [5pt]DecYear & number & Decimal year & years \\ 
      [5pt]MonthSeq & integer & Number of months since January 1, 1850 & months \\ 
      [5pt]SinDY & number & Sine of DecYear & numeric \\ 
      [5pt]CosDY & number & Cosine of DecYear & numeric \\ 
      [5pt]Q \tnote{1} & number & Discharge & m\textsuperscript{3}/s \\ 
      [5pt]LogQ \tnote{1} & number & Natural logarithm of discharge & numeric \\ 
    
       \hline
    \end{tabular}
    
      \begin{tablenotes}
    
        \item[1] Discharge columns are populated from data in the Daily dataframe after calling the \texttt{mergeReport} function.
    
      \end{tablenotes}
     \end{threeparttable}
    
    \end{table}
    
    Notice that the \enquote{Day of the year} column can span from 1 to 366. The 366 accounts for leap years. Every day has a consistent day of the year. This means, February 28\textsuperscript{th} is always the 59\textsuperscript{th} day of the year, Feb. 29\textsuperscript{th} is always the 60\textsuperscript{th} day of the year, and March 1\textsuperscript{st} is always the 61\textsuperscript{st} day of the year whether or not it is a leap year.
    
    
Section \ref{sec:cenValues} is about summing multiple constituents, including how interval censoring is used. Since the Sample data frame is structured to contain only one constituent, when more than one parameter code is requested, the \texttt{getNWISSample} function will sum the values of the constituents as described below.
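For example, a vector of parameter codes can be supplied to produce a Sample dataframe in which the constituents are summed. The specific codes shown here are only illustrative:

<<summedSample, eval=FALSE>>=
# 00631 = nitrate plus nitrite, 00608 = ammonia (illustrative choices)
parameterCds <- c("00631","00608")
Sample <- getNWISSample(siteNumber, parameterCds,
                        startDate, endDate)
@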
    
    
    
    %------------------------------------------------------------
    \subsection{Censored Values: Summation Explanation}
    
    \label{sec:cenValues}
    
    %------------------------------------------------------------
    
In the typical case where none of the data are censored (that is, no values are reported as \enquote{less-than} values), ConcLow = ConcHigh = ConcAve = the reported value, and Uncen = 1.  For the most common type of censoring, where a value is reported as less than the reporting limit, ConcLow = NA, ConcHigh = the reporting limit, ConcAve = 0.5 * the reporting limit, and Uncen = 0.
    
    To illustrate how the dataRetrieval package handles a more complex censoring problem, let us say that in 2004 and earlier, we computed total phosphorus (tp) as the sum of dissolved phosphorus (dp) and particulate phosphorus (pp). From 2005 and onward, we have direct measurements of total phosphorus (tp). A small subset of this fictional data looks like Table \ref{tab:exampleComplexQW}.
    
    <<label=tab:exampleComplexQW, echo=FALSE, eval=TRUE,results='asis'>>=
    
    cdate <- c("2003-02-15","2003-06-30","2004-09-15","2005-01-30","2005-05-30","2005-10-30")
    rdp <- c("", "<","<","","","")
    dp <- c(0.02,0.01,0.005,NA,NA,NA)
    rpp <- c("", "","<","","","")
    pp <- c(0.5,0.3,0.2,NA,NA,NA)
    rtp <- c("","","","","<","<")
    tp <- c(NA,NA,NA,0.43,0.05,0.02)
    
    
    DF <- data.frame(cdate,rdp,dp,rpp,pp,rtp,tp,stringsAsFactors=FALSE)
    
    xTab <- xtable(DF, caption="Example data",digits=c(0,0,0,3,0,3,0,3),label="tab:exampleComplexQW")
    
    print(xTab,
           caption.placement="top",
           size = "\\footnotesize",
           latex.environment=NULL,
           sanitize.colnames.function =  bold.colHeaders,
           sanitize.rownames.function = addSpace
          )
@
    
The dataRetrieval package will \enquote{add up} all the values in a given row to form the total for that sample when using the Sample dataframe. Thus, you only want to enter data that should be added together. If you want a dataframe with multiple constituents that are not summed, do not use \texttt{getNWISSample}, \texttt{getWQPSample}, or \texttt{getUserSample}. The raw data functions \texttt{getNWISqwData}, \texttt{getWQPqwData}, and \texttt{getWQPData} will not sum constituents, but leave them in their individual columns. 
    
    
    For example, we might know the value for dp on 5/30/2005, but we don't want to put it in the table because under the rules of this data set, we are not supposed to add it in to the values in 2005.
    
    For every sample, the EGRET package requires a pair of numbers to define an interval in which the true value lies (ConcLow and ConcHigh). In a simple uncensored case (the reported value is above the detection limit), ConcLow equals ConcHigh and the interval collapses down to a single point. In a simple censored case, the value might be reported as \verb@<@0.2, then ConcLow=NA and ConcHigh=0.2. We use NA instead of 0 as a way to elegantly handle future logarithm calculations.
    
    For the more complex example case, let us say dp is reported as \verb@<@0.01 and pp is reported as 0.3. We know that the total must be at least 0.3 and could be as much as 0.31. Therefore, ConcLow=0.3 and ConcHigh=0.31. Another case would be if dp is reported as \verb@<@0.005 and pp is reported \verb@<@0.2. We know in this case that the true value could be as low as zero, but could be as high as 0.205. Therefore, in this case, ConcLow=NA and ConcHigh=0.205. The Sample dataframe for the example data would be:
    
    
    <<thirdExample,echo=FALSE>>=
    
      compressedData <- compressData(DF)
      Sample <- populateSampleColumns(compressedData)
    
    @
    
    <<thirdExampleView,echo=TRUE>>=
  Sample
@
Section \ref{sec:userFiles} discusses inputting user-generated files. The functions \texttt{getUserSample} and \texttt{getNWISSample} assume summation with interval censoring inputs; the user file formats are discussed in sections \ref{sec:DailyFile} and \ref{sec:SampleFile}.
    
    
    
    %------------------------------------------------------------ 
    \subsection{User-Generated Data Files}
    
    \label{sec:userFiles}
    
    %------------------------------------------------------------ 
    
    In addition to retrieving data from the USGS Web services, the dataRetrieval package also includes functions to generate the Daily and Sample data frame from local files.
    
    
    %------------------------------------------------------------ 
    
    \subsubsection{getUserDaily}
    
    \label{sec:DailyFile}
    
    %------------------------------------------------------------ 
    
The \texttt{getUserDaily} function will load a user-supplied text file and convert it to the Daily dataframe. The file should have two columns: the first dates, the second values. The dates are formatted either mm/dd/yyyy or yyyy-mm-dd; using a 4-digit year is required. This function has the following inputs: filePath, fileName, hasHeader (TRUE/FALSE), separator, qUnit, and interactive (TRUE/FALSE). filePath is a string that defines the path to your file, and the string can either be a full path, or a path relative to your R working directory. The input fileName is a string that defines the file name (including the extension).
    
Text files that contain this sort of data require some sort of a separator; for example, a \enquote{csv} (comma-separated values) file uses a comma to separate the date and value columns. A tab-delimited file would use a tab (\verb@"\t"@) rather than the comma (\texttt{"},\texttt{"}). Define the separator you choose to use in the \texttt{"}separator\texttt{"} argument of the function call; the default is \texttt{"},\texttt{"}. Another function input is a logical variable, hasHeader. The default is TRUE. If your data do not have column names, set this variable to FALSE.
    
    Finally, qUnit is a numeric argument that defines the discharge units used in the input file.  The default is qUnit = 1 which assumes discharge is in cubic feet per second.  If the discharge in the file is already in cubic meters per second then set qUnit = 2.  If it is in some other units (like liters per second or acre-feet per day), the user must pre-process the data with a unit conversion that changes it to either cubic feet per second or cubic meters per second.
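For instance, if the discharge values in the file were already in cubic meters per second, the call might look like the following sketch (the file name and path here are hypothetical):

<<qUnitSketch, eval=FALSE>>=
# qUnit=2: values are already in m3/s, so no conversion is applied
Daily <- getUserDaily("C:/RData/", "MetricFlow.txt",
                      separator="\t", qUnit=2)
@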
    
    So, if you have a file called \enquote{ChoptankRiverFlow.txt} located in a folder called \enquote{RData} on the C drive (this example is for the Windows\textregistered\ operating systems), and the file is structured as follows (tab-separated):
    
    % \singlespacing
    
    \begin{verbatim}
date	Qdaily
10/1/1999	107
10/2/1999	85
10/3/1999	76
10/4/1999	76
10/5/1999	113
10/6/1999	98
...
    \end{verbatim}
    
    % \doublespacing
    
    The call to open this file, convert the discharge to cubic meters per second, and populate the Daily data frame would be:
    
    <<openDaily, eval = FALSE>>=
    fileName <- "ChoptankRiverFlow.txt"
    filePath <-  "C:/RData/"
    
Daily <- getUserDaily(filePath, fileName,
                      separator="\t")
@
    Microsoft\textregistered\ Excel files can be a bit tricky to import into R directly. The simplest way to get Excel data into R is to open the Excel file in Excel, then save it as a .csv file (comma-separated values). 
    
    %------------------------------------------------------------ 
    
    \subsubsection{getUserSample}
    
    \label{sec:SampleFile}
    
    %------------------------------------------------------------ 
    
    
The \texttt{getUserSample} function will import a user-generated file and populate the Sample dataframe. The difference between sample data and discharge data is that the code requires a third column that contains a remark code, either blank or \verb@"<"@, which tells the program that the data were \enquote{left-censored} (that is, below the detection limit). Therefore, the data must be in the form: date, remark, value. An example of a comma-delimited file is:
    
    
    \singlespacing
    
    \begin{verbatim}
cdate,remarkCode,Nitrate
    10/7/1999,,1.4
    11/4/1999,<,0.99
    12/3/1999,,1.42
    1/4/2000,,1.59
    2/3/2000,,1.54
    ...
    \end{verbatim}
    
    The call to open this file, and populate the Sample dataframe is:
    
    <<openSample, eval = FALSE>>=
    fileName <- "ChoptankRiverNitrate.csv"
    filePath <-  "C:/RData/"
    
Sample <- getUserSample(filePath, fileName,
                        separator=",")
@
    
When multiple constituents are to be summed, the format can be: date, remark\_A, value\_A, remark\_B, value\_B, etc. A tab-separated example might look like the file below, where the columns are date, remark dissolved phosphorus (rdp), dissolved phosphorus (dp), remark particulate phosphorus (rpp), particulate phosphorus (pp), remark total phosphorus (rtp), and total phosphorus (tp):
    
    \singlespacing
    
    \begin{verbatim}
date	rdp	dp	rpp	pp	rtp	tp
2003-02-15		0.020		0.500		
2003-06-30	<	0.010		0.300		
2004-09-15	<	0.005	<	0.200		
2005-01-30						0.430
2005-05-30					<	0.050
2005-10-30					<	0.020
...
    \end{verbatim}
    
    <<openSample2, eval = FALSE>>=
    fileName <- "ChoptankPhosphorus.txt"
    filePath <-  "C:/RData/"
    
Sample <- getUserSample(filePath, fileName,
                        separator="\t")
    @
    
    
    
    %------------------------------------------------------------
    \subsection{Merge Report}
    %------------------------------------------------------------
    
Finally, there is a function called \texttt{mergeReport} that will look at both the Daily and Sample dataframes and populate Q and LogQ columns in the Sample dataframe. The default arguments are Daily and Sample; however, if you want to use other similarly structured dataframes, you can specify localDaily or localSample. Once \texttt{mergeReport} has been run, the Sample dataframe will be augmented with the daily discharges for all the days with samples.  None of the water quality functions in EGRET will work without first having run the \texttt{mergeReport} function.
    
    
    
    <<mergeExample>>=
    siteNumber <- "01491000"
    parameterCd <- "00631"  # Nitrate