Introduction
TOPMODEL
Data Preparation
Stream Network and Elevation Data:
Elevation data was downloaded from the USGS data distribution website, http://seamless.usgs.gov/. NED (National Elevation Dataset) data sources have a variety of elevation units, horizontal datums, and map projections. Hence, a number of processing steps had to be undertaken to allow for correct analysis using Arc/Info software. The processing steps undertaken in this project are outlined below.
1. Projecting the Data
The downloaded data was in geographic coordinates, so the first step was to project the data to UTM Zone 17. I created a folder named 'Ohioutm' in my working directory. I then opened ArcToolbox, Project Wizard (coverages and grids), and selected 'project my data to a specified coordinate system'. I chose NAD 1983 as the datum for my dataset. I then selected Ohioutm as my output dataset, with 'cubic' as the desired resampling method, and saved the result as a grid named NED. Finally, I opened ArcMap and added Ohioutm\ned to the project, clicking OK to allow the project to build pyramids.
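The same reprojection can also be scripted; below is a minimal sketch using the GDAL Python bindings, where the file names are hypothetical and EPSG:26917 (NAD83 / UTM Zone 17N) stands in for the datum and zone chosen in the Wizard:

    from osgeo import gdal

    gdal.UseExceptions()

    # Reproject the geographic-coordinate NED grid to NAD83 / UTM Zone 17N
    # with cubic resampling, mirroring the Project Wizard choices above.
    gdal.Warp(
        "ohioutm_ned.tif",      # hypothetical output name
        "ned_geographic.tif",   # hypothetical input name
        dstSRS="EPSG:26917",    # NAD83 / UTM Zone 17N
        resampleAlg="cubic",
    )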
2. Filling Sinks in the DEM
This was accomplished using the Arc/Info FILL command, executed in command-line Arc/Info Workstation.
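A likely form of the command sequence, entered at the Arc and Grid prompts (the SINK option fills all depressions), is:

    Arc: grid
    Grid: fill ned nedfel sink
    Grid: quit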
This procedure creates a new pit-filled grid, nedfel, which is then used as the base DEM for TauDEM.
3. Run TauDEM for Watershed and Stream Network Delineation (after Tarboton and the Utah Water Research Laboratory, 2000).
Vegetation Data:
Land Use Land Cover (LULC) data was downloaded in ASCII file format from the USGS EROS website:
http://edc.usgs.gov/products/landcover/lulc.html
This data consists of historical land use and land cover data based primarily on the manual interpretation of 1970s and 1980s aerial photography. A data transformation procedure was then applied to prepare the original ASCII file for input into the ArcMap project; a sketch of reading such a file is given below.
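For illustration, here is a minimal Python sketch that parses the widely used ESRI ASCII grid layout; the downloaded LULC file's exact layout may differ, and the file name is hypothetical:

    import numpy as np

    def read_ascii_grid(path):
        # Read the six header lines (ncols, nrows, xllcorner, yllcorner,
        # cellsize, NODATA_value), then the matrix of cell values.
        header = {}
        with open(path) as f:
            for _ in range(6):
                key, value = f.readline().split()
                header[key.lower()] = float(value)
            data = np.loadtxt(f)
        return header, data

    header, lulc = read_ascii_grid("lulc.asc")  # hypothetical file name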
The vegetation in the area is predominantly Row Crops with some Pasture/Hay.
Soils Data:
This was obtained from STATSGO soils data for Ohio. This dataset is a general soil association map developed by the National Cooperative Soil Survey and distributed by the Natural Resources Conservation Service (NRCS). It consists of a broad-based inventory of soils and non-soil areas that occur in a repeatable pattern on the landscape and that can be cartographically shown at the scale mapped. The soil maps from STATSGO are compiled by generalizing more detailed soil survey maps. The spatial component of the STATSGO database is archived and distributed in Arc/Info export file format (*.e00).
The soils data processing involved the following steps:
1. Import the STATSGO Export File
The spatial data arrives as an Arc/Info interchange file (*.e00), which is first imported and converted to the STATSGO soil polygon shape file used in the steps below.
2. Create Tables to Relate Grid Layer to Polygon Data by Map Unit Identifier (MUID)
This process extracts the majority soil code from each raster layer of soil texture that occurs in each polygon of the STATSGO shape file.
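Conceptually this is a zonal 'majority' (modal value) operation. A minimal sketch of the idea in Python, assuming the polygon zones and one texture layer are available as non-negative integer numpy arrays (the names and arrays are illustrative, not the project's actual files):

    import numpy as np

    def zonal_majority(zones, texture):
        # For each zone id, count the texture codes of the cells falling
        # in that zone and keep the most frequent (majority) code.
        majority = {}
        for zone in np.unique(zones):
            counts = np.bincount(texture[zones == zone])
            majority[zone] = int(counts.argmax())
        return majority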
Since the majority field gives the texture
class for the matching polygon, delete all fields except MUID, Zone_code and
majority.
Use Table/Properties to set the alias for the field majority to designate the texture class depth range. Each majority column will then be labeled by the layer to which it corresponds. Change the name of the majority field to the name of its corresponding layer.
Join all layers into one table
Join the resulting table to the other tables with MUID as the join field. Export the resulting table as delimited text, Ohtxttab.txt.
Join the soil layer table to the soil polygon layer
Add Ohtxttab.txt to the ArcMap project. Join the table to the attribute table of the soils shape file using the MUID field.
Convert Polygon to Raster
Set the Spatial Analyst options so that analysis output is saved in the coordinate system of the active data frame, and the extent and cell size are the same as those of the base DEM. Using Spatial Analyst, convert features to raster from the soils shape file, using the field 'zone_code' from the joined attribute table. Output the result as Ohiosoil.
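Outside the Spatial Analyst dialog, the same polygon-to-raster step can be sketched with the GDAL Python bindings; the file names, 30 m cell size, and extent below are placeholders, since in the project these come from the base DEM:

    from osgeo import gdal

    gdal.UseExceptions()

    # Burn the joined 'zone_code' attribute into a raster aligned with the
    # base DEM's cell size and extent (placeholder values shown here).
    gdal.Rasterize(
        "ohiosoil.tif",             # hypothetical output name
        "soils.shp",                # hypothetical soils shape file
        attribute="zone_code",      # field from the joined attribute table
        outputType=gdal.GDT_Int16,
        xRes=30, yRes=30,                                 # placeholder cell size
        outputBounds=[300000, 4300000, 400000, 4400000],  # placeholder extent
    )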
Precipitation Radar Data:
NEXRAD Stage III radar data was used in this modeling exercise. The NEXRAD Stage III products offer high-quality hourly rainfall estimates with an approximate resolution of 4 km by 4 km cells. This data provides much more information about how weather systems behave in space and time than can be inferred from rain gauges alone. Stage III data was created specifically for the NWS river forecast centers, which need rainfall estimates over a much larger area than is covered by an individual radar. Stage III mosaics together Stage II estimates from multiple radars onto a subset of the national HRAP grid covering the river forecast center's area of responsibility (http://www.nws.noaa.gov/oh/hrl/papers/ams/ams9-1.htm).
Hourly NEXRAD Stage III products are in binary format and follow the naming convention 'xmrgMMDDYYhhz', where MM, DD, and YY are the month, day, and two-digit year, hh is the hour, and the trailing 'z' indicates UTC. Each day, these xmrg products are compressed and then tarred into a daily file of the form 'SiiiMMDDYYRFCID.tar', where RFCID identifies the river forecast center. At the end of the month, these daily files are tarred into a monthly file and posted on the web in the form 'SiiiMMYYRFCID.tar'.
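A product's timestamp can be recovered directly from this naming convention, for example in Python:

    from datetime import datetime

    def xmrg_time(name):
        # 'xmrgMMDDYYhhz' -> datetime; the trailing 'z' marks UTC.
        return datetime.strptime(name, "xmrg%m%d%y%Hz")

    print(xmrg_time("xmrg06159712z"))  # 1997-06-15 12:00:00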
The data were downloaded from the following site (http://dipper.nws.noaa.gov/hdsb/data/nexrad/nexradiii.html)
The Make_raindat program is used to read and convert the raw radar data to TOPMODEL-ready data. It prompts the user to select a beginning and ending monthly dataset. It takes the compressed monthly data, decompresses it to the daily data, decompresses those to the hourly data, reads each xmrg file from binary, and outputs a beginning-to-ending month .dat file. This output file should be renamed rain.dat and copied into the model runs directory. make_raindat.exe also creates latlong.txt, which is then converted into an event theme, allpoints.shp, in ArcCatalog. This shape file is then used to create a buffer over the watershed of interest, and from this, radar_pts.shp is created. The .dbf from this file is then exported to a text file, radar_pts.txt. When this file is present, make_raindat.exe will read these locations and write output for only these locations.
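The decompress-and-untar cascade that make_raindat performs can be pictured with the following Python sketch; the flat directory layout and the assumption that the hourly xmrg members are gzip-compressed are mine, not details taken from the program:

    import gzip
    import tarfile
    from pathlib import Path

    def unpack_month(monthly_tar, out_dir):
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        # Monthly tar -> daily 'SiiiMMDDYYRFCID.tar' files.
        with tarfile.open(monthly_tar) as month:
            month.extractall(out)
        # Daily tars -> compressed hourly 'xmrgMMDDYYhhz' members.
        for daily in out.glob("*.tar"):
            with tarfile.open(daily) as day:
                day.extractall(out)
        # Decompress each hourly member to its raw binary xmrg file.
        for hourly_gz in out.glob("xmrg*.gz"):
            raw = gzip.decompress(hourly_gz.read_bytes())
            hourly_gz.with_suffix("").write_bytes(raw)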
Stream Flow:
Used for model calibration and validation. This is historical data that I obtained from the NWS. TOPMODEL reads the stream flow data in micrometers per time step. For this modeling exercise, I will be using an hourly time step, and therefore some data transformations had to be carried out. A sample of the data conversion was worked out in a spreadsheet, runoff.xls, and the underlying arithmetic is sketched below.
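The conversion turns a discharge rate into an equivalent depth of water over the watershed per time step. A minimal sketch of the arithmetic, assuming the raw record is hourly discharge in cubic feet per second and using a placeholder watershed area (both are assumptions for illustration only):

    CFS_TO_CMS = 0.0283168     # cubic feet per second -> cubic meters per second
    SECONDS_PER_STEP = 3600    # hourly time step
    AREA_M2 = 500.0e6          # placeholder watershed area: 500 km^2

    def cfs_to_micrometers_per_step(q_cfs):
        # Volume discharged in one time step, spread over the watershed
        # area, expressed as a depth in micrometers.
        volume_m3 = q_cfs * CFS_TO_CMS * SECONDS_PER_STEP
        return volume_m3 / AREA_M2 * 1.0e6

    print(cfs_to_micrometers_per_step(100.0))  # ~20.4 micrometers per hour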
From this file, runoff.xls, the input data file runoff.dat is exported, which is then read by TOPMODEL.
Climate Forcing Data:
This is input into TOPMODEL from two files, tempar.dat and clipar.dat. The first file contains temperature data in degrees Celsius, as well as diurnal temperature ranges, dew point temperatures, and date/time data, while the second file holds the latitude and longitude of the basin center, the standard time longitude, the elevation of the temperature gages, and the monthly diurnal temperature range.
The climate forcing data was obtained from the University of Washington website at http://www.hydro.washington.edu/Lettenmaier/gridded_data
Data Assembly and Model Runs:
All the data needed to run TOPMODEL has been collected and the preliminary data processing steps have been undertaken. TOPSETUP is then used to compile the spatial data to be input into TOPMODEL; this enables parameter calculation for each subwatershed so that TOPMODEL can be run as a distributed model. The output from TOPSETUP consists of two files, which are then input into TOPRUN, the vehicle used to run TOPMODEL; TOPRUN in turn calls TopNet.exe, the executable that runs TOPMODEL.
The model runs have not been conducted so far, as TOPSETUP keeps crashing. I will need to look at the data again and determine what the sources of error could be.