InVEST: Integrated Valuation of Ecosystem Services and Tradeoffs¶
Release: 3.3.1
InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs) is a family of tools for quantifying the values of natural capital in clear, credible, and practical ways. In promising a return (of societal benefits) on investments in nature, the scientific community needs to deliver knowledge and tools to quantify and forecast this return. InVEST enables decision-makers to quantify the importance of natural capital, to assess the tradeoffs associated with alternative choices, and to integrate conservation and human development.
Older versions of InVEST ran as script tools in the ArcGIS ArcToolBox environment, but have almost all been ported over to a purely open-source python environment.
InVEST is licensed under a permissive, modified BSD license.
- For more information, see:
- InVEST on bitbucket
- The latest InVEST User’s Guide
- The Natural Capital Project website.
Getting Started¶
Installing InVEST¶
Note
The natcap.invest python package is currently only supported in Python 2.7. Other versions of Python may be supported at a later date.
Warning
Python 2.7.11 or later is required to be able to use the InVEST Recreation model on Windows.
Binary Dependencies¶
InVEST itself depends only on python packages, but many of these package dependencies depend on low-level libraries or have complex build processes. In recent history, some of these packages (notably, numpy and scipy) have started to release precompiled binary packages of their own, removing the need to install these packages through a system package manager. Others, however, remain easiest to install through a package manager.
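As a quick sanity check after installing the system packages, a small script like the following can report which of the compiled-dependency Python bindings are importable. This is an illustrative sketch, not part of InVEST; adjust the module list for your environment (osgeo.gdal wraps GDAL, h5py wraps HDF5):

```python
import importlib

def check_imports(names):
    """Return a dict mapping each module name to True if it is importable."""
    results = {}
    for name in names:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results

# Modules backed by low-level libraries; missing ones usually indicate
# a system package that still needs to be installed.
status = check_imports(["numpy", "scipy", "h5py", "osgeo.gdal"])
for name in sorted(status):
    print("{:12s} {}".format(name, "ok" if status[name] else "MISSING"))
```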
Linux¶
Linux users have it easy, as almost every package required to use natcap.invest is available in the package repositories. The provided commands will install only the libraries and binaries that are needed, allowing pip to install the rest.
Ubuntu & Debian¶
Attention
The package versions in the debian:stable repositories often lag far behind the latest releases. It may be necessary to install a later version of a library from a different package repository, or else build the library from source.
$ sudo apt-get install python-setuptools python-gdal python-h5py python-rtree python-shapely python-matplotlib python-qt4
Fedora¶
$ sudo yum install python-setuptools gdal-python h5py python-rtree python-shapely python-matplotlib PyQt4
Mac OS X¶
The easiest way to install binary packages on Mac OS X is through a package manager such as Homebrew:
$ brew install gdal hdf5 spatialindex pyqt matplotlib
The GDAL, PyQt and matplotlib packages include their respective python packages. The others will allow their corresponding python packages to be compiled against these binaries via pip.
Windows¶
While many packages are available for Windows on the Python Package Index, some may need to be fetched from a different source. Many are available from Christoph Gohlke’s unofficial build page: http://www.lfd.uci.edu/~gohlke/pythonlibs/
PyQt4 installers can also be downloaded from the Riverbank Computing website.
Python Dependencies¶
Dependencies for natcap.invest are listed in requirements.txt:
gdal>=1.11.2,<2.0
h5py>=2.3.0
matplotlib
natcap.versioner>=0.4.2
numpy>=1.11.0
pyamg>=2.2.1
pygeoprocessing>=0.3.0a17
rtree>=0.8.2
scipy>=0.14.0
shapely
setuptools>=8.0
Additionally, PyQt4 is required to use the invest cli, but is not required for development against natcap.invest. PyQt4 is not currently available from the Python Package Index, but other sources and package managers allow for straightforward installation on Windows, Mac OS X, and Linux.
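The version pins above use standard setuptools requirement syntax, where >= is inclusive and < is exclusive. A small sketch of how such a pin is interpreted, using pkg_resources (which ships with setuptools; the requirement string is copied from the list above):

```python
import pkg_resources

# '>=' is inclusive and '<' is exclusive, so 1.11.2 satisfies the pin
# while 2.0 does not.
req = pkg_resources.Requirement.parse("gdal>=1.11.2,<2.0")
print(req.project_name)   # the distribution name, 'gdal'
print("1.11.2" in req)    # version containment check
print("2.0" in req)
```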
Installing from Source¶
Note
Windows users will find the best compilation results by using the MSVC compiler, which can be downloaded from the Microsoft website. See the Python wiki page on compilation under Windows for more information.
Assuming you have a C/C++ compiler installed and configured for your system, and dependencies installed, the easiest way to install InVEST as a python package is:
$ pip install natcap.invest
If you are working within virtual environments, there is a documented issue with namespaces in setuptools that may cause problems when importing packages within the natcap namespace. The current workaround is to use these extra pip flags:
$ pip install natcap.invest --egg --no-binary :all:
Installing the latest development version¶
Pre-built binaries for Windows¶
Pre-built installers and wheels of development versions of natcap.invest for 32-bit Windows python installations are available from http://data.naturalcapitalproject.org/invest-releases/#dev, along with other distributions of InVEST. Once downloaded, wheels can be installed locally via pip:
> pip install .\natcap.invest-3.3.0.post89+nfc4a8d4de776-cp27-none-win32.whl
Installing from our source tree¶
The latest development version of InVEST can be installed from our Mercurial source tree:
$ pip install hg+https://bitbucket.org/natcap/invest@develop
The InVEST CLI¶
Installing¶
The invest cli application is installed with the natcap.invest python package. See Installing InVEST.
Usage¶
To run an InVEST model from the command-line, use the invest cli single entry point:
$ invest --help
usage: invest [-h] [--version] [--list] [model]
Integrated Valuation of Ecosystem Services and Tradeoffs.InVEST (Integrated
Valuation of Ecosystem Services and Tradeoffs) is a family of tools for
quantifying the values of natural capital in clear, credible, and practical
ways. In promising a return (of societal benefits) on investments in nature,
the scientific community needs to deliver knowledge and tools to quantify and
forecast this return. InVEST enables decision-makers to quantify the
importance of natural capital, to assess the tradeoffs associated with
alternative choices, and to integrate conservation and human development.
Older versions of InVEST ran as script tools in the ArcGIS ArcToolBox
environment, but have almost all been ported over to a purely open-source
python environment.
positional arguments:
model The model/tool to run. Use --list to show available
models/tools. Identifiable model prefixes may also be used.
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--list List available models
To list the available models:
$ invest --list
To launch a model:
$ invest <modelname>
Changelog¶
3.3.1 (2016-04-14)¶
- Refactored API documentation for readability, organization by relevant topics, and to allow docs to build on invest.readthedocs.io.
- Installation of natcap.invest now requires natcap.versioner. If this is not available on the system, setuptools will make it available at runtime.
- InVEST Windows installer now includes HISTORY.rst as the changelog instead of the old InVEST_Updates_<version> files.
- Habitat suitability model is generalized and released as an API-only accessible model. It can be found at natcap.invest.habitat_suitability.execute. This model replaces the oyster habitat suitability model. The refactor of this model requires an upgrade to numpy >= 1.11.0.
- Fixed a crash in the InVEST CLI where calling invest without a parameter would raise an exception on linux-based systems. (Issue #3528)
- Patched an issue in the Seasonal Water Yield model where a nodata value in the landcover map that was equal to MAX_INT would cause an overflow error/crash.
- InVEST NSIS installer will now optionally install the Microsoft Visual C++ 2008 redistributable on Windows 7 or earlier. This addresses a known issue on Windows 7 systems when importing GDAL binaries (Issue #3515). Users opting to install this redistributable agree to abide by the terms and conditions therein.
- Removed the deprecated subpackage natcap.invest.optimization.
- Updated the InVEST license to legally define the Natural Capital Project.
- Corrected an issue in Coastal Vulnerability where an output shapefile was being recreated for each row, and where field values were not being stored correctly.
- Updated Scenario Generator model to add basic testing, file registry support, PEP8 and PEP257 compliance, and to fix several bugs.
- Updated Crop Production model to add a simplified UI, faster runtime, and more testing.
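The Seasonal Water Yield overflow fix in this release guards against arithmetic near an integer nodata value. The failure mode is easy to illustrate with numpy (an illustrative sketch, not InVEST code):

```python
import numpy as np

# A nodata value at the top of the int32 range leaves no headroom:
# any arithmetic that adds to it wraps around to the negative end.
nodata = np.iinfo(np.int32).max          # 2147483647
lulc = np.array([nodata, 5], dtype=np.int32)

shifted = lulc + np.int32(1)             # wraps for the nodata cell
print(shifted)                           # first value is now int32 min
```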
3.3.0 (2016-03-14)¶
Refactored Wind Energy model to use a CSV input for wind data instead of a Binary file.
- Redesigned InVEST recreation model for a single input streamlined interface, advanced analytics, and refactored outputs. While the model is still based on “photo user days”, old model runs are not backward compatible with the new model or interface. See the Recreation Model user’s guide chapter for details. The refactor of this model requires an upgrade to GDAL >=1.11.0 <2.0 and numpy >= 1.10.2.
Removed nutrient retention (water purification) model from InVEST suite and replaced it with the nutrient delivery ratio (NDR) model. NDR has been available in development releases, but has now officially been added to the set of Windows Start Menu models and the “under development” tag in its user’s guide has been removed. See the InVEST user’s guide for details on the differences and advantages of NDR compared to the old nutrient model.
Modified NDR by adding a required “Runoff Proxy” raster to the inputs. This allows the model to vary the relative intensity of nutrient runoff based on spatial variability in precipitation.
Fixed a bug in the Area Change rule of the Rule-Based Scenario Generator, where units were being converted incorrectly. (Issue #3472) Thanks to Fosco Vesely for this fix.
InVEST Seasonal Water Yield model released.
InVEST Forest Carbon Edge Effect model released.
InVEST Scenario Generator: Proximity Based model released and renamed the previous “Scenario Generator” to “Scenario Generator: Rule Based”.
Implemented a blockwise exponential decay kernel generation function, which is now used in the Pollination and Habitat Quality models.
GLOBIO now uses an intensification parameter and not a map to average all agriculture across the GLOBIO 8 and 9 classes.
GLOBIO outputs modified so core outputs are in workspace and intermediate outputs are in a subdirectory called ‘intermediate_outputs’.
Fixed a crash with the NDR model that could occur if the DEM and landcover maps were different resolutions.
Refactored all the InVEST model user interfaces so that Workspace defaults to the user’s home “Documents” directory.
Fixed an HRA bug where stressors with a buffer of zero were being buffered by 1 pixel.
HRA enhancement which creates a common raster to burn all input shapefiles onto, ensuring consistent alignment.
Fixed an issue in SDR model where a landcover map that was smaller than the DEM would create extraneous “0” valued cells.
New HRA feature which allows for “NA” values to be entered into the “Ratings” column for a habitat / stressor pair in the Criteria Ratings CSV. If ALL ratings are set to NA, the habitat / stressor pair will be treated as having no interaction. This means in the model that there will be no overlap between the two sources. All rows with an NA rating will not be used in calculating results.
Refactored Coastal Blue Carbon model for greater speed, maintainability and clearer documentation.
Habitat Quality bug fix when given land cover rasters with different pixel sizes than threat rasters. Model would use the wrong pixel distance for the convolution kernel.
Light refactor of Timber model. Now using CSV input attribute file instead of DBF file.
Fixed a clipping bug in the Wave Energy model that was not clipping polygons correctly. Found when using global data.
- Made the following changes / updates to the coastal vulnerability model:
- Fixed a bug in the model where the geomorphology ranks were not always being used correctly.
- Removed the HTML summary results output and replaced with a link to a dashboard that helps visualize and interpret CV results.
- Added a point shapefile output: ‘outputs/coastal_exposure.shp’ that is a shapefile representation of the corresponding CSV table.
- The model UI now requires the ‘Relief’ input. No longer optional.
- CSV outputs and Shapefile outputs based on rasters now have x, y coordinates of the center of the pixel instead of the top left of the pixel.
Turned setuptools’ zip_safe to False for consistency across the Natcap Namespace.
GLOBIO no longer requires user to specify a keyfield in the AOI.
New feature to GLOBIO to summarize MSA by AOI.
New feature to GLOBIO to use a user defined MSA parameter table to set the MSA thresholds for infrastructure, connectivity, and landuse type.
Added documentation to the GLOBIO code base, including a large docstring for ‘execute’.
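The blockwise exponential decay kernel mentioned in this release has the following general shape. This is a conceptual sketch only; the function name, parameters, and normalization are illustrative and not the InVEST implementation:

```python
import numpy as np

def exponential_decay_kernel(max_dist, expected_dist):
    """Build a normalized 2D exponential decay kernel.

    max_dist: kernel radius in pixels; cells farther away get weight 0.
    expected_dist: distance at which the weight falls to 1/e.
    """
    y, x = np.mgrid[-max_dist:max_dist + 1, -max_dist:max_dist + 1]
    dist = np.hypot(x, y)                 # distance from the kernel center
    kernel = np.where(dist <= max_dist, np.exp(-dist / expected_dist), 0.0)
    return kernel / kernel.sum()          # weights sum to 1

kernel = exponential_decay_kernel(max_dist=10, expected_dist=3)
print(kernel.shape)     # (21, 21)
print(kernel.argmax())  # the center cell carries the largest weight
```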
3.2.0 (2015-05-31)¶
InVEST 3.2.0 is a major release with the addition of several experimental models and tools as well as an upgrade to the PyGeoprocessing core:
- Upgrade to PyGeoprocessing v0.3.0a1 for miscellaneous performance improvements to InVEST’s core geoprocessing routines.
- An alpha unstable build of the InVEST crop production model is released with partial documentation and sample data.
- A beta build of the InVEST fisheries model is released with documentation and sample data.
- An alpha unstable build of the nutrient delivery ratio (NDR) model is available directly under InVEST’s installation directory at invest-x86/invest_ndr.exe; eventually this model will replace InVEST’s current “Nutrient” model. It is currently undocumented and unsupported, but inputs are similar to those of InVEST’s SDR model.
- An alpha unstable build of InVEST’s implementation of GLOBIO is available directly under InVEST’s installation directory at invest-x86/invest_globio.exe. It is currently undocumented, but sample data are provided.
- DelineateIt, a watershed delineation tool based on PyGeoprocessing’s d-infinity flow algorithm, is released as a standalone tool in the InVEST repository with documentation and sample data.
- Miscellaneous performance patches and bug fixes.
3.1.3 (2015-04-23)¶
InVEST 3.1.3 is a hotfix release patching a memory blocking issue resolved in PyGeoprocessing version 0.2.1. Users might have experienced slow runtimes on SDR or other routed models.
3.1.2 (2015-04-15)¶
InVEST 3.1.2 is a minor release patching issues mostly related to the freshwater routing models and signed GDAL Byte datasets.
- Patching an issue where some projections were not recognized and InVEST reported an UnprojectedError.
- Updates to logging that make it easier to capture logging messages when scripting InVEST.
- Shortened water yield user interface height so it doesn’t waste whitespace.
- Update PyGeoprocessing dependency to version 0.2.0.
- Fixed an InVEST wide issue related to bugs stemming from the use of signed byte raster inputs that resulted in nonsensical outputs or KeyErrors.
- Minor performance updates to carbon model.
- Fixed an issue where DEMs with 32 bit ints and INT_MAX as the nodata value incorrectly treated the nodata value in the raster as a very large DEM value, ultimately resulting in rasters that did not drain correctly and empty flow accumulation rasters.
- Fixed an issue where some reservoirs whose edges were clipped to the edge of the watershed created large plateaus with no drain except off the edge of the defined raster. Added a second pass in the plateau drainage algorithm to test for these cases and drains them to an adjacent nodata area if they occur.
- Fixed an issue in the Fisheries model where the Results Suffix input was invariably initializing to an empty string.
- Fixed an issue in the Blue Carbon model that prevented the report from being generated in the outputs file.
3.1.1 (2015-03-13)¶
InVEST 3.1.1 is a major performance and memory bug patch to the InVEST toolsuite. We recommend all users upgrade to this version.
- Fixed an issue surrounding reports of SDR or Nutrient model outputs of zero values, nodata holes, excessive runtimes, or out of memory errors. Some of those problems were related to unusual DEMs that would break the flat drainage algorithm inside RouteDEM, which adjusted the heights of flat regions to drain away from higher edges and toward lower edges, and then passed the height-adjusted DEM to the InVEST model for its model-specific calculations. Unfortunately this solution was not amenable to some degenerate DEM cases, and we have now adjusted the algorithm to treat each plateau in the DEM as its own separate region that is processed independently from the other regions. This decreases memory use so we never effectively run out of memory, at a minor cost to overall runtime. We also now adjust the flow direction directly instead of adjusting the DEM itself. This saves us from having to modify the DEM and potentially get it into a state where a drained plateau would be higher than its original pixel neighbors that used to drain into it.
There are side effects that can result in sometimes large changes to uncalibrated runs of SDR or nutrient. These are related to slightly different flow directions across the landscape and a bug fix in the distance-to-stream calculation.
- InVEST geoprocessing now uses the PyGeoprocessing package (v0.1.4) rather than the built in functionality that used to be in InVEST. This will not affect end users of InVEST but may be of interest to users who script InVEST calls who want a standalone Python processing package for raster stack math and hydrological routing. The project is hosted at https://bitbucket.org/richpsharp/pygeoprocessing.
- Fixed a marine water quality issue where users could input AOIs that were unprojected, but output pixel sizes were specified in meters. The output pixel size should really be in the units of the polygon and is now specified as such. Additionally, an exception is raised if the pixel size is too small to generate a numerical solution, rather than a deep scipy error.
- Added a suffix parameter to the timber and marine water quality models that appends a user defined string to the output files, consistent with most of the other InVEST models.
- Fixed a user interface issue where sometimes the InVEST model run would not open a windows explorer to the user’s workspace. Instead it would open to C:User[..]My Documents. This would often happen if there were spaces in the workspace name or “/” characters in the path.
- Fixed an error across all InVEST models where a specific combination of rasters of different cell sizes and alignments and unsigned data types could create errors in internal interpolation of the raster stacks. Often these would appear as ‘KeyError: 0’ across a variety of contexts. Usually the ‘0’ was an erroneous value introduced by a faulty interpolation scheme.
- Fixed a MemoryError that could occur in the pollination and habitat quality models when the base landcover map was large and the biophysical properties table allowed the effect to be on the order of that map. The models can now use any raster or range of values with only a minor hit to runtime performance.
- Fixed a serious bug in the plateau resolution algorithm that occurred on DEMs with large plateau areas greater than 10x10 in size. The underlying 32 bit floating point value used to record small height offsets did not have a large enough precision to differentiate between some offsets thus creating an undefined flow direction and holes in the flow accumulation algorithm.
- Minor performance improvements in the routing core, in some cases decreasing runtimes by 30%.
- Fixed a minor issue in DEM resolution that occurred when a perfect plateau was encountered. Rather than offsetting the height so the plateau would drain, it kept the plateau at the original height. This occurred because the uphill offset was nonexistent, so the algorithm assumed no plateau resolution was needed. Perfect plateaus now drain correctly. In practice this kind of DEM was encountered in areas with large bodies of water where the remote sensing algorithm would classify the center of a lake 1 meter higher than the rest of the lake.
- Fixed a serious routing issue where divergent flow directions were not getting accumulated 50% of the time. Related to a division speed optimization that fell back on C-style modulus which differs from Python.
- InVEST SDR model thresholded slopes in terms of radians, not percent, thus clipping the slope tightly between 0.001 and 1%. The model now only has a lower threshold of 0.00005% for the IC_0 factor, and no other thresholds. We believe this was an artifact left over from an earlier design of the model.
- Fixed a potential memory inefficiency in Wave Energy Model when computing the percentile rasters. Implemented a new memory efficient percentile algorithm and updated the outputs to reflect the new open source framework of the model. Now outputting csv files that describe the ranges and meaning of the percentile raster outputs.
- Fixed a bug in Habitat Quality where the future output “quality_out_f.tif” was not reflecting the habitat value given in the sensitivity table for the specified landcover types.
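The divergent-flow fix in this list traces back to the difference between C and Python modulus semantics, which is easy to demonstrate (the helper below is an illustrative emulation, not InVEST code):

```python
# Python's % takes the sign of the divisor; C's % takes the sign of
# the dividend. Code ported between the two must account for this.
print(-7 % 8)    # 1 in Python; the equivalent C expression yields -7
print(7 % -8)    # -1 in Python; 7 in C

def c_style_mod(a, b):
    """Emulate C's truncating modulus in Python (illustrative helper)."""
    return a - b * int(a / b)   # int() truncates toward zero, as C does

print(c_style_mod(-7, 8))   # -7, matching C
```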
3.1.0 (2014-11-19)¶
InVEST 3.1.0 (http://www.naturalcapitalproject.org/download.html) is a major software and science milestone that includes an overhauled sedimentation model, long awaited fixes to exponential decay routines in habitat quality and pollination, and a massive update to the underlying hydrological routing routines. The updated sediment model, called SDR (sediment delivery ratio), is part of our continuing effort to improve the science and capabilities of the InVEST tool suite. The SDR model inputs are backward compatible with the InVEST 3.0.1 sediment model, with two additional global calibration parameters and no need for the retention efficiency parameter in the biophysical table; most users can run SDR directly with the data they have prepared for previous versions. The biophysical differences between the models are described in a section within the SDR user’s guide and represent a superior representation of the hydrological connectivity of the watershed, biophysical parameters that are independent of cell size, and a more accurate representation of sediment retention on the landscape. Other InVEST improvements include standard bug fixes, performance improvements, and usability features, which are in part described below:
InVEST Sediment Model has been replaced with the InVEST Sediment Delivery Ratio model. See the SDR user’s guide chapter for the difference between the two.
Fixed an issue in the pollination model where the exponential decay function decreased too quickly.
Fixed an issue in the habitat quality model where the exponential decay function decreased too quickly and added back linear decay as an option.
Fixed an InVEST wide issue where some input rasters that were signed bytes did not correctly map to their negative nodata values.
Hydropower input rasters have been normalized to the LULC size so sampling error is the same for all the input watersheds.
Added a check to make sure that input biophysical parameters to the water yield model do not exceed valid scientific ranges.
Added a check on nutrient retention in case the upstream water yield was less than 1 so that the log value did not go negative. In that case we clamp upstream water yield to 0.
A KeyError issue in hydropower was resolved that occurred when the input rasters were at such a coarse resolution that some watersheds did not completely contain even one pixel. Now a value of -9999 will be reported for watersheds that don’t contain any valid data.
An early version of the monthly water yield model was erroneously included in the installer; it was removed in this version.
Python scripts necessary for running the ArcGIS version of Coastal Protection were missing. They’ve since been added back to the distribution.
Raster calculations are now processed by raster block sizes. Improvements in raster reads and writes.
Fixed an issue in the routing core where some wide DEMs would cause out of memory errors.
Scenario generator marked as stable.
Fixed bug in HRA where raster extents of shapefiles were not properly encapsulating the whole AOI.
Fixed bug in HRA where any number of habitats over 4 would compress the output plots. Now extends the figure so that all plots are correctly scaled.
Fixed a bug in HRA where the AOI attribute ‘name’ could not be an int. Should now accept any type.
Fixed bug in HRA which re-wrote the labels if it was run immediately without closing the UI.
Fixed nodata masking bug in Water Yield when raster extents were less than that covered by the watershed.
Removed the hydropower calibration parameter from the water yield model.
Models that had suffixes used to only allow alphanumeric characters. Now all suffix types are allowed.
A bug in the core platform that would occasionally cause routing errors on irregularly pixel sized rasters was fixed. This often had the effect that the user would see broken streams and/or nodata values scattered through sediment or nutrient results.
- Wind Energy:
- Added new framework for valuation component. Can now input a yearly price table that spans the lifetime of the wind farm. Also if no price table is made, can specify a price for energy and an annual rate of change.
- Added new memory efficient distance transform functionality.
- Added the ability to leave out ‘landing points’ in the ‘grid connection points’ input. If no landing points are found, it will calculate wind farm to grid point distances directly.
Error message added in Wave Energy if clip shape has no intersection
Fixed an issue where the data type of the nodata value in a raster might be different than the values in the raster. This was common in the case of 64 bit floating point values as nodata when the underlying raster was 32 bit. Now nodata values are cast to the underlying types which improves the reliability of many of the InVEST models.
3.0.1 (2014-05-19)¶
- Blue Carbon model released.
- HRA UI now properly reflects that the Resolution of Analysis is in meters, not meters squared, and thus will be applied as a side length for a raster pixel.
- HRA now accepts CSVs for ratings scoring that are semicolon separated as well as comma separated.
- Fixed a minor bug in InVEST’s geoprocessing aggregate core that now consistently outputs correct zonal stats from the underlying pixel level hydro outputs which affects the water yield, sediment, and nutrient models.
- Added compression to InVEST output geotiff files. In most cases this reduces output disk usage by a factor of 5.
- Fixed an issue where CSVs in the sediment model weren’t opened in universal line read mode.
- Fixed an issue where the check for whether pixel edges were the same size was not using an approximately-equal comparison.
- Fixed an issue that made the CV model crash when the coastline computed from the landmass didn’t align perfectly with that defined in the geomorphology layer.
- Fixed an issue in the CV model where the intensity of local wave exposure was very low, and yielded zero local wave power for the majority of coastal segments.
- Fixed an issue where the CV model crashes if a coastal segment is at the edge of the shore exposure raster.
- Fixed the exposure of segments surrounded by land that appeared as exposed when their depth was zero.
- Fixed an issue in the CV model where the natural habitat values less than 5 were one unit too low, leading to negative habitat values in some cases.
- Fixed an exponent issue in the CV model where the coastal vulnerability index was raised to a power that was too high.
- Fixed a bug in the Scenic Quality model that prevented it from starting, as well as a number of other issues.
- Updated the pollination model to conform with the latest InVEST geoprocessing standards, resulting in an approximately 33% speedup.
- Improved the UI’s ability to remember the last folder visited, and to have all file and folder selection dialogs have access to this information.
- Fixed an issue in Marine Water Quality where the UV points were supposed to be optional, but instead raised an exception when not passed in.
3.0.0 (2014-03-23)¶
The 3.0.0 release of InVEST represents a shift away from ArcGIS to the InVEST standalone computational platform. The only exception to this shift is the marine coastal protection tier 1 model, which is still supported in an ArcGIS toolbox and has no InVEST 3.0 standalone at the moment. Specific changes are detailed below.
- A standalone version of the aesthetic quality model has been developed and packaged along with this release. The standalone outperforms the ArcGIS equivalent and includes a valuation component. See the user’s guide for details.
- The core water routing algorithms for the sediment and nutrient models have been overhauled. The routing algorithms now correctly adjust flow in plateau regions, address a bug that would sometimes not route large sections of a DEM, and has been optimized for both run time and memory performance. In most cases the core d-infinity flow accumulation algorithm out performs TauDEM. We have also packaged a simple interface to these algorithms in a standalone tool called RouteDEM; the functions can also be referenced from the scripting API in the invest_natcap.routing package.
- The sediment and nutrient models are now at a production level release. We no longer support the ArcGIS equivalent of these models.
- The sediment model has had its outputs simplified with major changes including the removal of the ‘pixel mean’ outputs, a direct output of the pixel level export and retention maps, and a single output shapefile whose attribute table contains aggregations of sediment output values. Additionally, all inputs to the sediment biophysical table including p, c, and retention coefficients are now expressed as a proportion between 0 and 1; the ArcGIS model had previously required those inputs to be integer values between 0 and 1000. See the “Interpreting Results” section of the sediment model for full details on the outputs.
- The nutrient model has had a similar overhaul to the sediment model including a simplified output structure with many key outputs contained in the attribute table of the shapefile. Retention coefficients are also expressed in proportions between 0 and 1. See the “Interpreting Results” section of nutrient model for full details on the outputs.
- Fixed a bug in Habitat Risk Assessment where the HRA module would incorrectly error if a criteria with a 0 score (meant to be removed from the assessment) had a 0 data quality or weight.
- Fixed a bug in Habitat Risk Assessment where the average E/C/Risk values across the given subregion were evaluating to negative numbers.
- Fixed a bug in Overlap Analysis where Human Use Hubs would error if run without inter-activity weighting, and Intra-Activity weighting would error if run without Human Use Hubs.
- The runtime performance of the hydropower water yield model has been improved.
- Released InVEST’s implementation of the D-infinity flow algorithm in a tool called RouteDEM available from the start menu.
- Unstable version of blue carbon available.
- Unstable version of scenario generator available.
- Numerous other minor bug fixes and performance enhancements.
2.6.0 (2013-12-16)¶
The 2.6.0 release of InVEST removes most of the old InVEST models from the Arc toolbox in favor of the new InVEST standalone models. While we have been developing standalone equivalents for the InVEST Arc models since version 2.3.0, this is the first release in which we removed support for the deprecated ArcGIS versions after an internal review of correctness, performance, and stability on the standalones. Additionally, this is one of the last milestones before the InVEST 3.0.0 release later next year which will transition InVEST models away from strict ArcGIS dependence to a standalone form.
Specifically, support for the following models has been moved from the ArcGIS toolbox to their Windows-based standalones: (1) hydropower/water yield, (2) finfish aquaculture, (3) coastal protection tier 0/coastal vulnerability, (4) wave energy, (5) carbon, (6) habitat quality/biodiversity, (7) pollination, (8) timber, and (9) overlap analysis. Additionally, documentation references to ArcGIS for those models have been replaced with instructions for launching the standalone InVEST models from the Windows start menu.
This release also addresses minor bugs, documentation updates, performance tweaks, and new functionality to the toolset, including:
- A Google doc to provide guidance for scripting the InVEST standalone models: https://docs.google.com/document/d/158WKiSHQ3dBX9C3Kc99HUBic0nzZ3MqW3CmwQgvAqGo/edit?usp=sharing
- Fixed a bug in the sample data that defined Kc as a number between 0 and 1000 instead of a number between 0 and 1.
- Link to report an issue now takes user to the online forums rather than an email address.
- Changed InVEST Sediment model standalone so that retention values are now between 0 and 1 instead of 0 and 100.
- Fixed a bug in Biodiversity where, if no suffix was entered, output filenames would end with a trailing underscore (_).
- Added documentation to the water purification/nutrient retention model documentation about the standalone outputs since they differ from the ArcGIS version of the model.
- Fixed an issue where the model would try to move the logfile to the workspace after the model run was complete and Windows would erroneously report that the move failed.
- Removed the separation between marine and freshwater/terrestrial models in the user’s guide; it is now a single list of models.
- Changed the name of InVEST “Biodiversity” model to “Habitat Quality” in the module names, start menu, user’s guide, and sample data folders.
- Minor bug fixes, performance enhancements, and better error reporting in the internal infrastructure.
- HRA risk in the unstable standalone is calculated differently from the last release. If there is no spatial overlap within a cell, the risk is automatically 0. This also applies to the E and C intermediate files for a given pairing: if there is no spatial overlap, E and C will be 0 where there is only habitat. However, we still create a recovery potential raster which has habitat-specific risk values, even without spatial overlap of a stressor. HRA shapefile outputs for high, medium, and low risk areas are now calculated using a user-defined maximum number of overlapping stressors, rather than all potential stressors. In the HTML subregion-averaged output, we now attribute what portion of a habitat’s risk comes from each habitat-stressor pairing. Any pairings which don’t overlap will have an automatic risk of 0.
- Major changes to Water Yield: Reservoir Hydropower Production. Changes include an alternative equation for calculating Actual Evapotranspiration (AET) for non-vegetated land cover types, including wetlands. This allows for a more accurate representation of processes on land covers such as urban, water, and wetlands, where root depth values aren’t applicable. To differentiate between the two equations, a column ‘LULC_veg’ has been added to the biophysical table in Hydropower/input/biophysical_table.csv; in this column, a 1 indicates vegetated and a 0 indicates non-vegetated.
- The output structure and outputs have also changed in Water Yield: Reservoir Hydropower Production. There is now a folder ‘output’ that contains all output files, including a subdirectory ‘per_pixel’ which holds three per-pixel raster outputs. The subwatershed results are calculated only for the water yield portion; those results can be found in a shapefile, ‘subwatershed_results.shp’, and a CSV file, ‘subwatershed_results.csv’. The watershed results can be found in similar files: watershed_results.shp and watershed_results.csv. These two watershed output files aggregate the Scarcity and Valuation results as well.
- The evapotranspiration coefficient for crops, Kc, has been changed to a decimal input value in the biophysical table. These values used to be multiplied by 1000 so that they were in integer format; that preprocessing step is no longer necessary.
- Changing support from richsharp@stanford.edu to the user support forums at http://ncp-yamato.stanford.edu/natcapforums.
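Following the scripting guidance linked above: each InVEST standalone model is driven by a single dictionary of arguments passed to an execute-style entry point. The sketch below shows only the general shape of such a call; the module path and argument keys are illustrative assumptions, so consult the scripting document for each model’s actual schema.

```python
# Sketch of scripting an InVEST standalone model: a model run is driven
# by a single argument dictionary. The argument keys and the module
# shown in the comment below are illustrative assumptions, not the
# exact API of any one model.

args = {
    "workspace_dir": "./carbon_workspace",   # where outputs are written
    "lulc_cur_path": "./lulc_current.tif",   # hypothetical input raster
    "carbon_pools_path": "./pools.csv",      # hypothetical pools table
    "results_suffix": "demo",                # appended to output filenames
}

# In a real run (with the InVEST python package installed) this would be
# something like:
#   from natcap.invest import carbon
#   carbon.execute(args)

print(sorted(args))  # ['carbon_pools_path', 'lulc_cur_path', 'results_suffix', 'workspace_dir']
```

Because the whole run is captured in one dictionary, batch runs over many scenarios reduce to looping over argument dictionaries.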
2.5.6 (2013-09-06)¶
The 2.5.6 release of InVEST addresses minor bugs, performance tweaks, and new functionality in the InVEST standalone models, including:
- Changed the code field name in the Carbon biophysical table from LULC to lucode, so it is consistent with the InVEST water yield biophysical table.
- Added Monte Carlo uncertainty analysis and documentation to finfish aquaculture model.
- Replaced sample data in overlap analysis that was causing the model to crash.
- Updates to the overlap analysis user’s guide.
- Added a preprocessing toolkit, available under C:\{InVEST install directory}\utils.
- Biodiversity Model now exits gracefully if a threat raster is not found in the input folder.
- Wind Energy now uses bilinear interpolation (linear interpolation applied over the 2D space).
- Wind Energy has been refactored to current API.
- The Potential Evapotranspiration input has been renamed to Reference Evapotranspiration.
- PET_mn for Water Yield is now Reference Evapotranspiration times Kc (the evapotranspiration coefficient).
- The soil depth field has been renamed ‘depth to root restricting layer’ in both the hydropower and nutrient retention models.
- ETK column in biophysical table for Water Yield is now Kc.
- Added help text to Timber model.
- Changed the behavior of nutrient retention to return nodata values when the mean runoff index is zero.
- Fixed an issue where the hydropower model didn’t use the suffix inputs.
- Fixed a bug in Biodiversity that did not allow for numerals in the threat names and rasters.
- Updated routing algorithm to use a modern algorithm for plateau direction resolution.
- Fixed an issue in HRA where individual risk pixels weren’t being calculated correctly.
- HRA will now properly detect in the preprocessed CSVs when criteria or entire habitat-stressor pairs are not desired within an assessment.
- Added an infrastructure feature so that temporary files are created in the user’s workspace rather than in a system-level folder. This lets users work in a secondary workspace on a USB-attached hard drive and use the space of that drive, rather than the primary operating system drive.
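The PET_mn change above is a straightforward renaming of the quantity reference evapotranspiration × Kc. As a minimal illustration (both values are made up, and Kc is now a 0–1 proportion rather than a 0–1000 integer):

```python
# PET_mn in Water Yield is Reference Evapotranspiration (ET0) times Kc.
# The values below are made up for illustration only.
et0_mm = 1200.0  # hypothetical reference evapotranspiration, mm/year
kc = 0.5         # hypothetical crop coefficient, now a 0-1 proportion
pet_mn = et0_mm * kc
print(pet_mn)  # 600.0
```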
2.5.5 (2013-08-06)¶
The 2.5.5 release of InVEST addresses minor bugs, performance tweaks, and new functionality in the InVEST standalone models, including:
- Production level release of the 3.0 Coastal Vulnerability model.
- This upgrades the InVEST 2.5.4 version of the beta standalone CV to a full release with a full user’s guide. This version of the CV model should be used in all cases over its ArcGIS equivalent.
- Production level release of the Habitat Risk Assessment model.
- This release upgrades the InVEST 2.5.4 beta version of the standalone habitat risk assessment model. It should be used in all cases over its ArcGIS equivalent.
- Uncertainty analysis in Carbon model (beta)
- Added functionality to assess uncertainty in sequestration and emissions given known uncertainty in carbon pool stocks. Users can now specify standard deviations of carbon pools with normal distributions as well as desired uncertainty levels. New outputs include masks for regions which both sequester and emit carbon with a high probability of confidence. Please see the “Uncertainty Analysis” section of the carbon user’s guide chapter for more information.
- REDD+ Scenario Analysis in Carbon model (beta)
- Additional functionality to assist users evaluating REDD and REDD+ scenarios in the carbon model. The uncertainty analysis functionality can also be used with these scenarios. Please see the “REDD Scenario Analysis” section of the carbon user’s guide chapter for more information.
- Uncertainty analysis in Finfish Aquaculture model (beta)
- Added functionality to account for uncertainty in the alpha and beta growth parameters, as well as histogram plots showing the distribution of harvest weights and net present value. Uncertainty analysis is performed through Monte Carlo runs that sample the growth parameters from normal distributions.
- Streamlined Nutrient Retention model functionality
- The nutrient retention module no longer requires users to explicitly run the water yield model. The model now seamlessly runs water yield during execution.
- Beta release of the recreation model
- The recreation model is available for beta use with limited documentation.
- Full release of the wind energy model
- Removed the ‘beta’ designation from the wind energy model.
Known Issues:
- Flow routing in the standalone sediment and nutrient models has a bug that prevents routing in some (not all) landscapes. This bug is related to resolving d-infinity flow directions across flat areas. We are implementing the solution in Garbrecht and Martz (1997). In the meantime, the sediment and nutrient models remain marked as beta until this issue is resolved.
2.5.4 (2013-06-07)¶
This is a minor release of InVEST that addresses numerous minor bugs and performance tweaks in the InVEST 3.0 models, including:
- Refactor of Wave Energy Model:
- Combining the Biophysical and Valuation modules into one.
- Adding new data for the North Sea and Australia
- Fixed a bug where elevation values that were equal to or greater than zero were being used in calculations.
- Fixed memory issues when dealing with large datasets.
- Updated core functions to remove any use of deprecated functions.
- Performance updates to the carbon model.
- Nodata masking fix for rarity raster in Biodiversity Model.
- When computing rarity from a base landuse raster and current or future landuse raster, the intersection of the two was not being properly taken.
- Fixes to the flow routing algorithms in the sediment and nutrient retention models in cases where stream layers were burned in by ArcGIS hydro tools. In those cases, streams were at the same elevation, which caused routing issues.
- Fixed an issue affecting several InVEST models that occurred when watershed polygons were too small to cover a pixel. Excessively small watersheds are now handled correctly.
- Arc model deprecation. We are deprecating the following ArcGIS versions of our InVEST models, in the sense that we recommend ALL users use the InVEST standalones over the ArcGIS versions; the existing ArcGIS versions of these models will be removed entirely in the next release:
- Timber
- Carbon
- Pollination
- Biodiversity
- Finfish Aquaculture
Known Issues:
- Flow routing in the standalone sediment and nutrient models has a bug that prevents routing in several landscapes. We’re not certain of the nature of the bug at the moment, but we will fix it by the next release. Thus, the sediment and nutrient models are marked as beta, since the DEM routes correctly only in some cases.
2.5.3 (2013-03-21)¶
This is a minor release of InVEST that fixes an issue with the HRA model that caused ArcGIS versions of the model to fail when calculating habitat maps for risk hotspots. This upgrade is strongly recommended for users of InVEST 2.5.1 or 2.5.2.
2.5.2 (2013-03-17)¶
This is a minor release of InVEST that fixes an issue with the HRA sample data that caused ArcGIS versions of the model to fail on the training data. There is no need to upgrade for most users unless you are doing InVEST training.
2.5.1 (2013-03-12)¶
This is a minor release of InVEST that does not add any new models, but does add additional functionality, stability, and increased performance to one of the InVEST 3.0 standalones:
- Pollination 3.0 Beta:
- Fixed a bug where Windows users of InVEST could run the model, but most raster outputs were filled with nodata values.
Additionally, this minor release fixes a bug in the InVEST user interface where collapsible containers became entirely non-interactive.
2.5.0 (2013-03-08)¶
This is a major release of InVEST that includes new standalone versions (ArcGIS is not required) of our models, as well as additional functionality, stability, and increased performance for many of the existing models. This release is timed to support our group’s annual training event at Stanford University. We expect to release InVEST 2.5.1 a couple of weeks afterward to address any software issues that arise during the training. See the release notes below for details of the release, and please contact richsharp@stanford.edu with any issues relating to the software:
- new Sediment 3.0 Beta:
- This is a standalone model that executes an order of magnitude faster than the original ArcGIS model, but may have memory issues with larger datasets. A fix is scheduled for the 2.5.1 release of InVEST.
- Uses a d-infinity flow algorithm (ArcGIS version uses D8).
- Includes a more accurate LS factor.
- Outputs are now summarized by polygon rather than rasterized polygons. Users can view results directly as a table rather than sampling a GIS raster.
- new Nutrient 3.0 Beta:
- This is a standalone model that executes an order of magnitude faster than the original ArcGIS model, but may have memory issues with larger datasets. A fix is scheduled for the 2.5.1 release of InVEST.
- Uses a d-infinity flow algorithm (ArcGIS version uses D8).
- Includes a more accurate LS factor.
- Outputs are now summarized by polygon rather than rasterized polygons. Users can view results directly as a table rather than sampling a GIS raster.
- new Wind Energy:
- A new offshore wind energy model. This is a standalone-only model available under the windows start menu.
- new Recreation Alpha:
- This is a working demo of our soon-to-be-released land and nearshore recreation model. The model itself is incomplete and should only be used as a demo or by NatCap partners who know what they’re doing.
- new Habitat Risk Assessment 3.0 Alpha:
- This is a working demo of our soon-to-be-released 3.0 version of habitat risk assessment. The model itself is incomplete and should only be used as a demo or by NatCap partners who know what they’re doing. Users who need to run a habitat risk assessment should use the ArcGIS version of this model.
- Improvements to the InVEST 2.x ArcGIS-based toolset:
- Bug fixes to the ArcGIS based Coastal Protection toolset.
- Removed support for the ArcGIS invest_VERSION.mxd map. We expect to transition the InVEST toolset to exclusively standalone tools in a few months; in preparation, we are starting to deprecate parts of our old ArcGIS toolset, including this ArcMap document. The InVEST ArcToolbox is still available in C:\InVEST_2_5_0\invest_250.tbx.
Known issues:
- The InVEST 3.0 standalones generate open source GeoTIFFs as outputs rather than the proprietary ESRI Grid format. ArcGIS 9.3.1 occasionally displays these rasters incorrectly. We have found that these layers can be visualized in ArcGIS 9.3.1 by following these (admittedly convoluted) steps: right click on the layer and select Properties; click on the Symbology tab; select Stretch and agree to calculate a histogram (this creates an .aux file that Arc can use for visualization); click “Ok”; remove the raster from the layer list; then add it back. As an alternative, we suggest using an open source desktop GIS tool like Quantum GIS, or ArcGIS version 10.0 or greater.
- The InVEST 3.0 carbon model will generate inaccurate sequestration results if the extents of the current and future maps don’t align. This will be fixed in InVEST 2.5.1; in the meantime, a workaround is to clip both LULCs so they have identical extents.
- A user reported an unstable run of InVEST 3.0 water yield. We are not certain what is causing the issue, but we do have a fix that will go out in InVEST 2.5.1.
- At the moment the InVEST standalones do not run on Windows XP. This appears to be related to an incompatibility between Windows XP and GDAL, an open source GIS library we use to create and read GIS data. We are uncertain whether we will be able to fix this bug in future releases, but will pass along more information as we have it.
2.4.5 (2013-02-01)¶
This is a minor release of InVEST that does not add any new models, but does add additional functionality, stability, and increased performance to many of the InVEST 3.0 standalones:
- Pollination 3.0 Beta:
- Greatly improved memory efficiency over previous versions of this model.
- 3.0 Beta Pollination Biophysical and Valuation have been merged into a single tool, run through a unified user interface.
- Slightly improved runtime through the use of newer core InVEST GIS libraries.
- Optional ability to weight different species individually. This feature adds a column to the Guilds table that allows the user to specify a relative weight for each species, which will be used before combining all species supply rasters.
- Optional ability to aggregate pollinator abundances at specific points provided by an optional points shapefile input.
- Bugfix: non-agricultural pixels are set to a value of 0.0 to indicate no value on the farm value output raster.
- Bugfix: sup_val_<beename>_<scenario>.tif rasters are now saved to the intermediate folder inside the user’s workspace instead of the output folder.
- Carbon Biophysical 3.0 Beta:
- Tweaked the user interface to require the user to provide a future LULC raster when the ‘Calculate Sequestration’ checkbox is checked.
- Fixed a bug that restricted naming of harvest layers. Harvest layers are now selected simply by taking the first available layer.
- Better memory efficiency in hydropower model.
- Better support for unicode filepaths in all 3.0 Beta user interfaces.
- Improved state saving and retrieval when loading up previous-run parameters in all 3.0 Beta user interfaces.
- All 3.0 Beta tools now report elapsed time on completion of a model.
- All 3.0 Beta tools now provide disk space usage reports on completion of a model.
- All 3.0 Beta tools now report arguments at the top of each logfile.
- Biodiversity 3.0 Beta: The half-saturation constant is now allowed to be a positive floating-point number.
- Timber 3.0 Beta: Validation has been added to the user interface for this tool for all tabular and shapefile inputs.
- Fixed some typos in Equation 1 in the Finfish Aquaculture user’s guide.
- Fixed a bug where start menu items were not getting deleted during an InVEST uninstall.
- Added a feature so that if the user chooses to download datasets but the download does not succeed, the installer alerts the user and continues normally.
- Fixed a typo with tau in the aquaculture guide: the value was originally given as 0.8 but should be 0.08.
- Improvements to the InVEST 2.x ArcGIS-based toolset:
- Minor bugfix to Coastal Vulnerability, where an internal unit of measurement was off by a couple of digits in the Fetch Calculator.
- Minor fixes to various helper tools used in InVEST 2.x models.
- Outputs for Hargreaves are now saved as GeoTIFFs.
- Thornthwaite now allows more flexible entry of hours of sunlight.
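The optional species weighting added to Pollination 3.0 Beta above (a relative weight per species in the Guilds table, applied before combining the per-species supply rasters) amounts to a normalized weighted sum. Below is a toy sketch of that arithmetic on plain lists rather than rasters; the species names, weights, and supply values are all made up for illustration.

```python
# Sketch: combine per-species supply values using relative weights, as the
# optional species weighting feature for Pollination 3.0 Beta describes.
# Species names, weights, and values are made up; real inputs are rasters.

def weighted_supply(supplies, weights):
    """Weighted sum of per-species supply, normalized by total weight."""
    total_weight = sum(weights.values())
    n_pixels = len(next(iter(supplies.values())))
    return {
        # each "pixel" is just an index into the per-species lists here
        i: sum(weights[s] * supplies[s][i] for s in supplies) / total_weight
        for i in range(n_pixels)
    }

supplies = {"apis": [0.2, 0.4], "bombus": [0.6, 0.8]}
weights = {"apis": 1.0, "bombus": 3.0}
combined = weighted_supply(supplies, weights)
print(combined[0])  # (1.0*0.2 + 3.0*0.6) / 4.0, approximately 0.5
```

Normalizing by the total weight keeps the combined supply on the same scale as the individual species' supply values.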
2.4.4 (2012-10-24)¶
- Fixes memory errors experienced by some users in the Carbon Valuation 3.0 Beta model.
- Minor improvements to logging in the InVEST User Interface
- Fixes an issue importing packages for some officially-unreleased InVEST models.
2.4.3 (2012-10-19)¶
- Fixed a minor issue with hydropower output valuation rasters whose statistics were not pre-calculated. This would cause ArcGIS to show the rasters with a range of -3e38 to 3e38.
- The InVEST installer now saves a log of the installation process to InVEST_<version>install_log.txt
- Fixed an issue with Carbon 3.0 where carbon output values were incorrectly calculated.
- Added a feature to Carbon 3.0 where total carbon stored and sequestered is output as part of the running log.
- Fixed an issue in Carbon 3.0 that would occur when users had text representations of floating point numbers in the carbon pool dbf input file.
- Added a feature to all InVEST 3.0 models to list disk usage before and after each run and in most cases report a low free space error if relevant.
2.4.2 (2012-10-15)¶
- Fixed an issue with the ArcMap document where the paths to default data were not saved as relative paths. This caused the default data in the document to not be found by ArcGIS.
- Introduced some more memory-efficient processing for Biodiversity 3.0 Beta. This fixes an out-of-memory issue encountered by some users when using very large raster datasets as inputs.
2.4.1 (2012-10-08)¶
- Fixed a compatibility issue with ArcGIS 9.3 where the ArcMap document and ArcToolbox could not be opened by Arc 9.3.
2.4.0 (2012-10-05)¶
Changes in InVEST 2.4.0
General:
This is a major release which releases two additional beta versions of the InVEST models in the InVEST 3.0 framework. Additionally, this release introduces start menu shortcuts for all available InVEST 3.0 beta models. Existing InVEST 2.x models can still be found in the included Arc toolbox.
Existing InVEST models migrated to the 3.0 framework in this release include:
- Biodiversity 3.0 Beta
- Minor bug fixes and usability enhancements
- Runtime decreased by a factor of 210
- Overlap Analysis 3.0 Beta
- In most cases runtime decreased by at least a factor of 15
- Minor bug fixes and usability enhancements
- Split into two separate tools:
- Overlap Analysis outputs rasters with individually-weighted pixels
- Overlap Analysis: Management Zones produces a shapefile output.
- Updated table format for input activity CSVs
- Removed the “grid the seascape” step
Updates to ArcGIS models:
- Coastal vulnerability
- Removed the “structures” option
- Minor bug fixes and usability enhancements
- Coastal protection (erosion protection)
- Incorporated economic valuation option
- Minor bug fixes and usability enhancements
Additionally there are a handful of minor fixes and feature enhancements:
- InVEST 3.0 Beta standalones (identified by a new InVEST icon) may be run from the Start Menu (on Windows, navigate to Start Menu -> All Programs -> InVEST 2.4.0).
- Bug fixes for the calculation of raster statistics.
- InVEST 3.0 wave energy no longer requires an AOI for global runs, but encounters memory issues on machines with less than 4GB of RAM. This is a known issue that will be fixed in a minor release.
- Minor fixes to several chapters in the user’s guide.
- Minor bug fix to the 3.0 Carbon model: harvest maps are no longer required inputs.
- Other minor bug fixes and runtime performance tweaks in the 3.0 framework.
- Improved installer allows users to remove InVEST from the Windows Add/Remove programs menu.
- Fixed a visualization bug with wave energy where output rasters did not have min/max/stdev calculations on them, which made the default visualization in ArcGIS appear as a gray blob.
2.3.0 (2012-08-02)¶
Changes in InVEST 2.3.0
General:
This is a major release which introduces several beta versions of the InVEST models in the InVEST 3.0 framework. These models run as standalones, but a GIS platform is needed to edit and view the data inputs and outputs. Until InVEST 3.0 is released, the original ArcGIS-based versions of these tools will remain part of the release.
Existing InVEST models migrated to the 3.0 framework in this release include:
- Reservoir Hydropower Production 3.0 beta
- Minor bug fixes.
- Finfish Aquaculture
- Minor bug fixes and usability enhancements.
- Wave Energy 3.0 beta
- Runtimes for non-global runs decreased by a factor of 7
- Minor interpolation bugs that exist in the 2.x model are fixed in the 3.0 beta.
- Crop Pollination 3.0 beta
- Runtimes decreased by a factor of over 10,000
This release also includes new models that exist only in the 3.0 framework:
- Marine Water Quality 3.0 alpha with a preliminary user’s guide.
InVEST models in the 3.0 framework from previous releases that now have a standalone executable include:
- Managed Timber Production Model
- Carbon Storage and Sequestration
Additionally there are a handful of other minor fixes and feature enhancements since the previous release:
- Minor bug fix to 2.x sedimentation model that now correctly calculates slope exponentials.
- Minor fixes to several chapters in the user’s guide.
- The 3.0 version of the Carbon model now can value the price of carbon in metric tons of C or CO2.
- Other minor bug fixes and runtime performance tweaks in the 3.0 framework.
2.2.2 (2012-03-03)¶
Changes in InVEST 2.2.2
General:
This is a minor release which fixes the following defects:
- Fixed an issue with the sediment retention model where large watersheds’ allowed loading per cell was incorrectly rounded to integer values.
- Fixed a bug where changing the threshold didn’t affect the retention output, because the function was incorrectly rounded to integer values.
- Added total water yield in meters cubed to the output table by watershed.
- Fixed a bug where smaller-than-default (2000) resolutions threw an error about not being able to find the field in “unitynew”. With a non-default resolution, “unitynew” was created without an attribute table, so one is now created by force.
- Removed mention of beta state and ecoinformatics from the header of the software license.
- Modified the overlap analysis toolbox so it reports an error directly in the toolbox if the workspace name is too long.
2.2.1 (2012-01-26)¶
Changes in InVEST 2.2.1
General:
This is a minor release which fixes the following defects:
- A variety of miscellaneous bugs were fixed that were causing crashes of the Coastal Protection model in Arc 9.3.
- Fixed an issue in the Pollination model that was looking for an InVEST1005 directory.
- The InVEST “models only” release had an entry for the InVEST 3.0 Beta tools, but was missing the underlying runtime. This has been added to the models-only 2.2.1 release at the cost of a larger installer.
- The default InVEST ArcMap document wouldn’t open in ArcGIS 9.3. It can now be opened by Arc 9.3 and above.
- Minor updates to the Coastal Protection user’s guide.
2.2.0 (2011-12-22)¶
In this release we include updates to the habitat risk assessment model, updates to Coastal Vulnerability Tier 0 (previously named Coastal Protection), and a new tier 1 Coastal Vulnerability tool. Additionally, we are releasing a beta version of our 3.0 platform that includes the terrestrial timber and carbon models.
See the “Marine Models” and “InVEST 3.0 Beta” sections below for more details.
Marine Models
Marine Python Extension Check
This tool has been updated to include extension requirements for the new Coastal Protection T1 model. It also reflects changes to the Habitat Risk Assessment and Coastal Protection T0 models, as they no longer require the PythonWin extension.
Habitat Risk Assessment (HRA)
This model has been updated and is now part of a three-step toolset. The first step is a new Ratings Survey Tool, which eliminates the need for Microsoft Excel when users are providing habitat-stressor ratings. This Survey Tool now allows users to up- and down-weight the importance of various criteria. For step 2, a copy of the Grid the Seascape tool has been placed in the HRA toolset. In the last step, users run the HRA model, which includes the following updates:
- New habitat outputs classifying risk as low, medium, and high
- Model run status updates (% complete) in the message window
- Improved habitat risk plots embedded in the output HTML
Coastal Protection
This module is now split into sub-models, each with two parts. The first sub-model is Coastal Vulnerability (Tier 0) and the new addition is Coastal Protection (Tier 1).
Coastal Vulnerability (T0)
Step 1) Fetch Calculator - there are no updates to this tool.
Step 2) Vulnerability Index
- Wave Exposure: In this version of the model, we define wave exposure for sites facing the open ocean as the maximum of the weighted average of wave power coming from the ocean or generated by local winds. We weight the wave power coming from each of the 16 equiangular sectors by the percent of time that waves occur in that sector, and by whether or not the fetch in that sector exceeds 20 km. For sites that are sheltered, wave exposure is the average of the wave power generated by local storm winds, weighted by the percent occurrence of those winds in each sector. This new method takes into account the seasonality of wind and wave patterns (storm waves generally come from a preferential direction), and helps identify regions that are not exposed to powerful waves even though they are open to the ocean (e.g. the lee side of islands).
- Natural Habitats: The ranking is now computed using the rank of all natural habitats present in front of a segment, and we weight the lowest ranking habitat 50% more than all other habitats. Also, rankings and protective distance information are to be provided by CSV file instead of Excel. With this new method, shoreline segments that have more habitats than others will have a lower risk of inundation and/or erosion during storms.
- Structures: The model has been updated to now incorporate the presence of structures by decreasing the ranking of shoreline segments that adjoin structures.
Coastal Protection (T1) - This is a new model which plots the amount of sandy beach erosion or consolidated bed scour that backshore regions experience in the presence or absence of natural habitats. It is composed of two steps: a Profile Generator and Nearshore Waves and Erosion. It is recommended to run the Profile Generator before the Nearshore Waves and Erosion model.
Step 1) Profile Generator: This tool helps the user generate a 1-dimensional bathymetric and topographic profile perpendicular to the shoreline at a user-defined location. It provides plenty of guidance for building backshore profiles for beaches, marshes, and mangroves. It helps users modify bathymetry profiles they already have, or can generate profiles for sandy beaches if the user has no bathymetric data. Also, the model estimates and maps the location of natural habitats present in front of the region of interest. Finally, it provides sample wave and wind data that can later be used in the Nearshore Waves and Erosion model, based on computed fetch values and default Wave Watch III data.
Step 2) Nearshore Waves and Erosion: This model estimates profiles of beach erosion or values of rates of consolidated bed scour at a site as a function of the type of habitats present in the area of interest. The model takes into account the protective effects of vegetation, coral and oyster reefs, and sand dunes. It also shows the difference of protection provided when those habitats are present, degraded, or gone.
Aesthetic Quality
This model no longer requires users to provide a projection for Overlap Analysis. Instead, it uses the projection from the user-specified Area of Interest (AOI) polygon. Additionally, the population estimates for this model have been fixed.
InVEST 3.0 Beta
The 2.2.0 release includes a preliminary version of our InVEST 3.0 beta platform. It is included as a toolset named “InVEST 3.0 Beta” in the InVEST220.tbx. It is currently only supported with ArcGIS 10. To launch an InVEST 3.0 beta tool, double click on the desired tool in the InVEST 3.0 toolset then click “Ok” on the Arc toolbox screen that opens. The InVEST 3.0 tool panel has inputs very similar to the InVEST 2.2.0 versions of the tools with the following modifications:
- InVEST 3.0 Carbon:
- Fixes a minor bug in the 2.2 version that ignored floating point values in carbon pool inputs.
- Separation of carbon model into a biophysical and valuation model.
- Calculates carbon storage and sequestration at the minimum resolution of the input maps.
- Runtime efficiency improved by an order of magnitude.
- User interface streamlined including dynamic activation of inputs based on user preference, direct link to documentation, and recall of inputs based on user’s previous run.
- InVEST 3.0 Timber:
- User interface streamlined including dynamic activation of inputs based on user preference, direct link to documentation, and recall of inputs based on user’s previous run.
2.1.1 (2011-10-17)¶
Changes in InVEST 2.1.1
General:
This is a minor release which fixes the following defects:
- A truncation error was fixed in the Nutrient Retention and Sedimentation models involving division by the number of cells in a watershed; floating point division is now calculated correctly.
- Minor typos were fixed across the user’s guide.
2.1 Beta (2011-05-11)¶
Changes in InVEST 2.1
General:
1. InVEST versioning: We have altered our versioning scheme. Integer changes will reflect major changes (e.g., the addition of the marine models warranted moving from 1.x to 2.0). An increment in the digit after the primary decimal indicates major new features (e.g., the addition of a new model) or major revisions; this release is numbered InVEST 2.1 because two new models are included. We will add another decimal to reflect minor feature revisions or bug fixes. For example, InVEST 2.1.1 will likely be out soon, as we are continually working to improve our tool.
2. HTML guide: With this release, we have migrated the entire InVEST user’s guide to an HTML format. The HTML version will also produce a PDF version for use off-line, printing, etc.
MARINE MODELS
1. Marine Python Extension Check
- This tool has been updated to allow users to select the marine models they intend to run. Based on this selection, it provides a summary of which Python and ArcGIS extensions are necessary and whether the Python extensions have been successfully installed on the user’s machine.
2. Grid the Seascape (GS)
- This tool has been created to allow marine model users to generate a seascape analysis grid within a specified area of interest (AOI).
- It requires only an AOI and cell size (in meters) as inputs, and produces a polygon grid which can be used as input to the Habitat Risk Assessment and Overlap Analysis models.
- Coastal Protection
- This is now a two-part model for assessing Coastal Vulnerability. The first part is a tool for calculating fetch and the second maps the value of a Vulnerability Index, which differentiates areas with relatively high or low exposure to erosion and inundation during storms.
- The model has been updated to now incorporate coastal relief and the protective influence of up to eight natural habitat input layers.
- A global Wave Watch 3 dataset is also provided to allow users to quickly generate rankings for wind and wave exposure worldwide.
- Habitat Risk Assessment (HRA)
This new model allows users to assess the risk posed to coastal and marine habitats by human activities and the potential consequences of exposure for the delivery of ecosystem services and biodiversity. The HRA model is suited to screening the risk of current and future human activities in order to prioritize management strategies that best mitigate risk.
- Overlap Analysis
This new model maps current human uses in and around the seascape and summarizes the relative importance of various regions for particular activities. The model was designed to produce maps that can be used to identify marine and coastal areas that are most important for human use, in particular recreation and fisheries, but also other activities.
FRESHWATER MODELS
All Freshwater models now support ArcMap 10.
Sample data:
- Bug fix for error in Water_Tables.mdb Biophysical table where many field values were shifted over one column relative to the correct field name.
- Bug fix for incorrect units in erosivity layer.
Hydropower:
1. In Water Yield, new output tables have been added containing mean biophysical outputs (precipitation, actual and potential evapotranspiration, and water yield) for each watershed and sub-watershed.
Water Purification:
- The Water Purification Threshold table now allows users to specify separate thresholds for nitrogen and phosphorus. Field names thresh_n and thresh_p replace the old ann_load.
- The Nutrient Retention output tables nutrient_watershed.dbf and nutrient_subwatershed.dbf now include a column for nutrient retention per watershed/sub-watershed.
- In Nutrient Retention, some output file names have changed.
- The user’s guide has been updated to explain more accurately the inclusion of thresholds in the biophysical service estimates.
Sedimentation:
- The Soil Loss output tables sediment_watershed.dbf and sediment_subwatershed.dbf now include a column for sediment retention per watershed/sub-watershed.
- In Soil Loss, some output file names have changed.
- The default input value for Slope Threshold is now 75.
- The user’s guide has been updated to explain more accurately the inclusion of thresholds in the biophysical service estimates.
- Valuation: Bug fix where the present value was not being applied correctly.
2.0 Beta (2011-02-14)¶
Changes in InVEST 2.0
InVEST 2.0 is a major release with the following modifications:
Aesthetic Quality
This new model allows users to determine the locations from which new nearshore or offshore features can be seen. It generates viewshed maps that can be used to identify the visual footprint of new offshore development.
Coastal Vulnerability
This new model produces maps of coastal human populations and a coastal exposure to erosion and inundation index map. These outputs can be used to understand the relative contributions of different variables to coastal exposure and to highlight the protective services offered by natural habitats.
Aquaculture
This new model is used to evaluate how human activities (e.g., addition or removal of farms, changes in harvest management practices) and climate change (e.g., change in sea surface temperature) may affect the production and economic value of aquacultured Atlantic salmon.
Wave Energy
This new model provides spatially explicit information, showing potential areas for siting Wave Energy conversion (WEC) facilities with the greatest energy production and value. This site- and device-specific information for the WEC facilities can then be used to identify and quantify potential trade-offs that may arise when siting WEC facilities.
Avoided Reservoir Sedimentation
- The name of this model has been changed to the Sediment Retention model.
- We have added a water quality valuation model for sediment retention. The user now has the option to select avoided dredge cost analysis, avoided water treatment cost analysis or both. The water quality valuation approach is the same as that used in the Water Purification: Nutrient Retention model.
- The threshold information for allowed sediment loads (TMDL, dead volume, etc.) is now input in a stand-alone table instead of being included in the valuation table. This adjusts the biophysical service output for any social allowance of pollution. Previously, the adjustment was only done in the valuation model.
- The watersheds and sub-watershed layers are now input as shapefiles instead of rasters.
- Final outputs are now aggregated to the sub-basin scale. The user must input a sub-basin shapefile. We provide the Hydro 1K dataset as a starting option. See the user’s guide for changes to many output file names.
- Users are strongly advised not to interpret pixel-scale outputs for hydrological understanding or decision-making of any kind. Pixel outputs should only be used for calibration/validation or model checking.
Hydropower Production
- The watersheds and sub-watershed layers are now input as shapefiles instead of rasters.
- Final outputs are now aggregated to the sub-basin scale. The user must input a sub-basin shapefile. We provide the Hydro 1K dataset as a starting option. See the user’s guide for changes to many output file names.
- Users are strongly advised not to interpret pixel-scale outputs for hydrological understanding or decision-making of any kind. Pixel outputs should only be used for calibration/validation or model checking.
- The calibration constant for each watershed is now input in a stand-alone table instead of being included in the valuation table. This makes running the water scarcity model simpler.
Water Purification: Nutrient Retention
- The threshold information for allowed pollutant levels (TMDL, etc.) is now input in a stand-alone table instead of being included in the valuation table. This adjusts the biophysical service output for any social allowance of pollution. Previously, the adjustment was only done in the valuation model.
- The watersheds and sub-watershed layers are now input as shapefiles instead of rasters.
- Final outputs are now aggregated to the sub-basin scale. The user must input a sub-basin shapefile. We provide the Hydro 1K dataset as a starting option. See the user’s guide for changes to many output file names.
- Users are strongly advised not to interpret pixel-scale outputs for hydrological understanding or decision-making of any kind. Pixel outputs should only be used for calibration/validation or model checking.
Carbon Storage and Sequestration
The model now outputs an aggregate sum of the carbon storage.
Habitat Quality and Rarity
This model had an error when running ReclassByASCII if the land cover codes were not sorted alphabetically. This has been corrected: the model now sorts the reclass file before running the reclassification.
The model now outputs an aggregate sum of the habitat quality.
Pollination
In this version, the pollination model accepts an additional parameter which indicates the proportion of a crop’s yield attributed to wild pollinators.
Tutorial: Batch Processing on Windows¶
Introduction¶
These are the steps that will need to be taken in order to use the batch scripting framework for InVEST models available in the natcap.invest python package.
Note
The natcap.invest python package is currently only supported in Python 2.7. Other versions of python may be supported at a later date.
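The version requirement in the note above can be checked programmatically, so a batch script fails fast with a clear message instead of a confusing import error. A minimal helper sketch (not part of natcap.invest):

```python
import sys

def is_supported_python(version_info=None):
    """Return True when the interpreter is a Python 2.7 release,
    currently the only series natcap.invest supports."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) == (2, 7)

if not is_supported_python():
    sys.stderr.write('natcap.invest requires Python 2.7\n')
```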
Setting up your Python environment¶
Install Python 2.7.11 or later.
Python can be downloaded from here. When installing, be sure to allow python.exe to be added to the path in the installation options.
Put pip on the PATH.
The pip utility for installing python packages is already included with Python 2.7.9 and later. Be sure to add C:\Python27\Scripts to the Windows PATH environment variable so that pip can be called from the command line without needing to use its full path. After this is done (and you’ve opened a new command-line window), you will be able to use pip at the command line to install packages like so:
> pip install <packagename>
Install packages needed to run InVEST.
Most (maybe even all) of these packages can be downloaded as precompiled wheels from Christoph Gohlke’s build page. Others should be able to be installed via pip install <packagename>. The required packages are:
gdal>=1.11.2,<2.0
h5py>=2.3.0
matplotlib
natcap.versioner>=0.4.2
numpy>=1.11.0
pyamg>=2.2.1
pygeoprocessing>=0.3.0a17
rtree>=0.8.2
scipy>=0.14.0
shapely
setuptools>=8.0
Install the InVEST python package.
4a. Download a release of the natcap.invest python package.
4b. Install the downloaded python package:
- win32.whl files are prebuilt binary distributions and can be installed via pip. See the pip docs for installing a package from a wheel.
- win32-py2.7.exe files are also prebuilt binary distributions, but cannot be installed by pip. Instead, double-click the downloaded file to launch an installer.
- .zip and .tar.gz files are source archives. See Installing from Source for details.
Creating Sample Python Scripts¶
Launch InVEST Model
Once an InVEST model is selected for scripting, launch that model from the Windows Start menu. The example in this guide follows the NDR model.
Fill in InVEST Model Input Parameters
Once the user interface loads, populate the inputs as they will likely be used in the Python script. For testing purposes the default InVEST data are appropriate. However, if a user wishes to write a batch script for several InVEST runs, it would be reasonable to populate the user interface with the data for the first run.
Generate a sample Python Script from the User Interface
Open the Development menu at the top of the user interface and select “Save to python script...” and save the file to a known location.
Execute the script in the InVEST Python Environment
Launch Windows PowerShell from the Start menu (type “powershell” in the search box), then invoke the Python interpreter on the InVEST Python script from that shell. In this example the Python interpreter is installed in C:\Python27\python.exe and the script was saved in C:\Users\rpsharp\Desktop\ndr.py, so the command to invoke the interpreter is:
> C:\Python27\python.exe C:\Users\rpsharp\Desktop\ndr.py
Output Results
As the model executes, status information will be printed to the console. Once complete, model results can be found in the workspace folder selected during the initial configuration.
Modifying a Python Script¶
InVEST Python scripts consist of two sections:
- The argument dictionary that represents the model’s user interface input boxes and parameters.
- The call to the InVEST model itself.
For reference, consider the following script generated by the Nutrient model with default data inputs:
"""
This is a saved model run from natcap.invest.ndr.ndr.
Generated: Mon 16 May 2016 03:52:59 PM
InVEST version: 3.3.0
"""
import natcap.invest.ndr.ndr
args = {
    u'k_param': u'2',
    u'runoff_proxy_uri': u'C:\\InVEST_3.3.0_x86\\Base_Data\\Freshwater\\precip',
    u'subsurface_critical_length_n': u'150',
    u'subsurface_critical_length_p': u'150',
    u'subsurface_eff_n': u'0.8',
    u'subsurface_eff_p': u'0.8',
    u'threshold_flow_accumulation': u'1000',
    u'biophysical_table_uri': u'C:\\InVEST_3.3.0_x86\\WP_Nutrient_Retention\\Input\\water_biophysical_table.csv',
    u'calc_n': True,
    u'calc_p': True,
    u'suffix': '',
    u'dem_uri': u'C:\\InVEST_3.3.0_x86\\Base_Data\\Freshwater\\dem',
    u'lulc_uri': u'C:\\InVEST_3.3.0_x86\\Base_Data\\Freshwater\\landuse_90',
    u'watersheds_uri': u'C:\\InVEST_3.3.0_x86\\Base_Data\\Freshwater\\watersheds.shp',
    u'workspace_dir': u'C:\\InVEST_3.3.0_x86\\ndr_workspace',
}
if __name__ == '__main__':
    natcap.invest.ndr.ndr.execute(args)
Elements to note:
- Parameter Python Dictionary: Key elements include the ‘args’ dictionary. Note the similarities between key values such as ‘workspace_dir’ and the equivalent “Workspace” input parameter in the user interface. Every key in the ‘args’ dictionary has a corresponding reference in the user interface.
In the example below we’ll modify the script to execute the nutrient model for a parameter study of ‘threshold_flow_accumulation’.
- Execution of the InVEST model: The InVEST API invokes models with a consistent syntax where the module name that contains the InVEST model is listed first, followed by a function called ‘execute’ that takes a single parameter called ‘args’. This parameter is the dictionary of input parameters discussed above. In this example, the line natcap.invest.ndr.ndr.execute(args) executes the nutrient model end-to-end. If the user wishes to make batch calls to InVEST, this line will likely be placed inside a loop.
Example: Threshold Flow Accumulation Parameter Study¶
This example executes the InVEST NDR model for 11 values of threshold flow accumulation, stepping from 500 to 1000 pixels in increments of 50. To modify the script above, replace the execution call with the following loop:
# Loop through the values 500, 550, 600, ..., 1000
for threshold_flow_accumulation in range(500, 1001, 50):
    # Set the accumulation threshold to the current value in the loop.
    args['threshold_flow_accumulation'] = threshold_flow_accumulation
    # Set the suffix to accum### for the current threshold.
    args['suffix'] = 'accum' + str(threshold_flow_accumulation)
    natcap.invest.ndr.ndr.execute(args)
This loop executes the InVEST nutrient model 11 times for accumulation values 500, 550, 600, ..., 1000 and adds a suffix to the output files so results can be distinguished.
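To sanity-check the loop bounds before launching a long batch run, the thresholds can be listed first; note that range(500, 1001, 50) includes both endpoints, yielding eleven values:

```python
# Preview the accumulation thresholds the loop will iterate over.
thresholds = list(range(500, 1001, 50))
print(thresholds)
print('%d model runs' % len(thresholds))
```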
Example: Invoke NDR Model on a directory of Land Cover Maps¶
In this case we invoke the InVEST nutrient model on a directory of land cover data located at C:\User\Rich\Desktop\landcover_data. As in the previous example, replace the last line in the UI-generated Python script with:
import os

landcover_dir = r'C:\User\Rich\Desktop\landcover_data'
# Loop over all the filenames in the landcover dir.
for landcover_file in os.listdir(landcover_dir):
    # Point the landuse uri parameter at the directory + filename.
    args['lulc_uri'] = os.path.join(landcover_dir, landcover_file)
    # Make a useful suffix so we can differentiate the results.
    args['suffix'] = 'landmap' + os.path.splitext(landcover_file)[0]
    # Call the nutrient model.
    natcap.invest.ndr.ndr.execute(args)
This loop covers all the files located in C:\User\Rich\Desktop\landcover_data, updates the relevant lulc_uri key in the args dictionary for each of those files during execution, and builds a useful suffix so output files can be distinguished from each other.
Example: Saving model log messages to a file¶
There are many cases where you may want or need to capture all of the log messages generated by the model. When we run models through the InVEST user interface application, the UI captures all of this logging and saves it to a logfile. We can replicate this behavior through the python logging package, by adding the following code just after the import statements in the example script.
import logging
import pygeoprocessing
# Write all NDR log messages to logfile.txt
MODEL_LOGGER = natcap.invest.ndr.ndr.LOGGER
handler = logging.FileHandler('logfile.txt')
MODEL_LOGGER.addHandler(handler)
# log pygeoprocessing messages to the same logfile
PYGEO_LOGGER = pygeoprocessing.geoprocessing.LOGGER
PYGEO_LOGGER.addHandler(handler)
This will capture all logging generated by the ndr model and by pygeoprocessing, writing all messages to logfile.txt. While this is a common use case, the logging package provides functionality for many more complex logging features. For more advanced use of the python logging module, refer to the Python project’s Logging Cookbook.
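Beyond attaching a plain handler as above, a formatter adds a timestamp and severity to each line of logfile.txt. This sketch uses only the standard logging module; the logger name 'natcap.invest.ndr.ndr' matches the module path used earlier, but any logger works the same way:

```python
import logging

# A file handler whose formatter stamps every record with time and level,
# producing lines like "2016-05-16 15:52:59,123 INFO starting model run".
handler = logging.FileHandler('logfile.txt')
handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('natcap.invest.ndr.ndr')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info('starting model run')
```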
Summary¶
The InVEST scripting framework was designed to assist InVEST users in automating batch runs or adding custom functionality to the existing InVEST software suite. Support questions can be directed to the NatCap support forums at http://forums.naturalcapitalproject.org.
API Reference¶
InVEST Model Entry Points¶
All InVEST models share a consistent python API:
- The model has a function called execute that takes a single python dict ("args") as its argument.
- This arguments dict contains an entry, 'workspace_dir', which points to the folder on disk where all files created by the model should be saved.
Calling a model requires importing the model’s execute function and then calling the model with the correct parameters. For example, if you were to call the Carbon Storage and Sequestration model, your script might include
import natcap.invest.carbon.carbon_combined
args = {
    'workspace_dir': 'path/to/workspace',
    # Other arguments, as needed for Carbon.
}

natcap.invest.carbon.carbon_combined.execute(args)
For examples of scripts that could be created around a model run, or multiple successive model runs, see Creating Sample Python Scripts.
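When scripting multiple successive runs, it can also help to wrap each execute call so that one failed run does not abort the whole batch. A sketch of such a wrapper (run_batch and the scenario dicts are illustrative, not part of the InVEST API):

```python
def run_batch(execute, scenarios):
    """Call `execute` once per args dict in `scenarios`.

    Returns a dict mapping each scenario's 'suffix' to None on
    success, or to the exception raised by that run.
    """
    results = {}
    for args in scenarios:
        key = args.get('suffix', 'run%d' % len(results))
        try:
            execute(args)
            results[key] = None
        except Exception as error:
            # Record the failure and continue with the next scenario.
            results[key] = error
    return results
```

For real runs, pass a model's execute function, e.g. run_batch(natcap.invest.carbon.carbon_combined.execute, scenarios).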
Available Models and Tools:
- Annual Water Yield: Reservoir Hydropower Production
- Carbon Storage and Sequestration
- Coastal Blue Carbon
- Coastal Blue Carbon Preprocessor
- Coastal Vulnerability
- Crop Production
- Delineateit: Watershed Delineation
- Finfish Aquaculture
- Fisheries
- Fisheries: Habitat Scenario Tool
- Forest Carbon Edge Effect
- GLOBIO
- Habitat Quality
- Habitat Risk Assessment
- Habitat Risk Assessment Preprocessor
- Habitat Suitability
- Managed Timber Production
- Marine Water Quality
- Nutrient Delivery Ratio
- Overlap Analysis
- Overlap Analysis: Management Zones
- Pollinator Abundance: Crop Pollination
- Recreation
- RouteDEM: D-Infinity Routing
- Scenario Generator: Proximity-Based
- Scenario Generator: Rule-Based
- Scenic Quality
- Seasonal Water Yield
- Sediment Delivery Ratio
- Wave Energy
- Wind Energy
Annual Water Yield: Reservoir Hydropower Production¶
natcap.invest.hydropower.hydropower_water_yield.execute(args)¶
Annual Water Yield: Reservoir Hydropower Production.
Executes the hydropower/water_yield model
Parameters: - args['workspace_dir'] (string) – a uri to the directory into which output and other temporary files will be written during calculation. (required)
- args['lulc_uri'] (string) – a uri to a land use/land cover raster whose LULC indexes correspond to indexes in the biophysical table input. Used for determining soil retention and other biophysical properties of the landscape. (required)
- args['depth_to_root_rest_layer_uri'] (string) – a uri to an input raster describing the depth of “good” soil before reaching this restrictive layer (required)
- args['precipitation_uri'] (string) – a uri to an input raster describing the average annual precipitation value for each cell (mm) (required)
- args['pawc_uri'] (string) – a uri to an input raster describing the plant available water content value for each cell. Plant Available Water Content fraction (PAWC) is the fraction of water that can be stored in the soil profile that is available for plants’ use. PAWC is a fraction from 0 to 1 (required)
- args['eto_uri'] (string) – a uri to an input raster describing the annual average evapotranspiration value for each cell. Potential evapotranspiration is the potential loss of water from soil by both evaporation from the soil and transpiration by healthy Alfalfa (or grass) if sufficient water is available (mm) (required)
- args['watersheds_uri'] (string) – a uri to an input shapefile of the watersheds of interest as polygons. (required)
- args['sub_watersheds_uri'] (string) – a uri to an input shapefile of the subwatersheds of interest that are contained in the args['watersheds_uri'] shape provided as input. (optional)
- args['biophysical_table_uri'] (string) – a uri to an input CSV table of land use/land cover classes, containing data on biophysical coefficients such as root_depth (mm) and Kc, which are required. A column with header LULC_veg is also required, with a value of 1 indicating a vegetated land cover type and 0 indicating non-vegetation, wetland, or water. NOTE: these data are attributes of each LULC class rather than attributes of individual cells in the raster map (required)
- args['seasonality_constant'] (float) – floating point value between 1 and 10 corresponding to the seasonal distribution of precipitation (required)
- args['results_suffix'] (string) – a string that will be concatenated onto the end of file names (optional)
- args['demand_table_uri'] (string) – a uri to an input CSV table of LULC classes, showing consumptive water use for each landuse / land-cover type (cubic meters per year) (required for water scarcity)
- args['valuation_table_uri'] (string) – a uri to an input CSV table of hydropower stations with the following fields (required for valuation): (‘ws_id’, ‘time_span’, ‘discount’, ‘efficiency’, ‘fraction’, ‘cost’, ‘height’, ‘kw_price’)
Returns: None
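Putting the required parameters together, an args dict for this model might look like the following; all paths are placeholders, and only the keys come from the parameter list above:

```python
# Placeholder paths; every key below is documented in the parameter
# list above. Optional keys (sub-watersheds, demand, valuation) omitted.
args = {
    'workspace_dir': 'path/to/workspace',
    'lulc_uri': 'path/to/lulc_raster',
    'depth_to_root_rest_layer_uri': 'path/to/root_restricting_depth',
    'precipitation_uri': 'path/to/precipitation',
    'pawc_uri': 'path/to/pawc',
    'eto_uri': 'path/to/eto',
    'watersheds_uri': 'path/to/watersheds.shp',
    'biophysical_table_uri': 'path/to/biophysical_table.csv',
    'seasonality_constant': 5.0,  # between 1 and 10
}
# natcap.invest.hydropower.hydropower_water_yield.execute(args)
```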
Carbon Storage and Sequestration¶
natcap.invest.carbon.carbon_combined.execute(args)¶
Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory into which output and other temporary files will be written during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide, in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
Coastal Blue Carbon¶
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.execute(args)¶
Coastal Blue Carbon.
Parameters: - workspace_dir (str) – location into which all intermediate and output files should be placed.
- results_suffix (str) – a string to append to output filenames.
- lulc_lookup_uri (str) – filepath to a CSV table used to convert the lulc code to a name. Also used to determine if a given lulc type is a coastal blue carbon habitat.
- lulc_transition_matrix_uri (str) – generated by the preprocessor. This file must be edited before it can be used by the main model. The left-most column represents the source lulc class, and the top row represents the destination lulc class.
- carbon_pool_initial_uri (str) – the provided CSV table contains information related to the initial conditions of the carbon stock within each of the three pools of a habitat. Biomass includes carbon stored above and below ground. All non-coastal blue carbon habitat lulc classes are assumed to contain no carbon. The values for ‘biomass’, ‘soil’, and ‘litter’ should be given in terms of Megatonnes CO2 e/ ha.
- carbon_pool_transient_uri (str) – the provided CSV table contains information related to the transition of carbon into and out of coastal blue carbon pools. All non-coastal blue carbon habitat lulc classes are assumed to neither sequester nor emit carbon as a result of change. The ‘yearly_accumulation’ values should be given in terms of Megatonnes of CO2 e/ha-yr. The ‘half-life’ values must be given in terms of years. The ‘disturbance’ values must be given as a decimal (e.g. 0.5 for 50%) of stock disturbed when a transition occurs away from a lulc class.
- lulc_baseline_map_uri (str) – a GDAL-supported raster representing the baseline landscape/seascape.
- lulc_transition_maps_list (list) – a list of GDAL-supported rasters representing the landscape/seascape at particular points in time. Provided in chronological order.
- lulc_transition_years_list (list) – a list of years that respectively correspond to transition years of the rasters. Provided in chronological order.
- analysis_year (int) – optional. Indicates how many timesteps to run the transient analysis beyond the last transition year. Must come chronologically after the last transition year if provided. Otherwise, the final timestep of the model will be set to the last transition year.
- do_economic_analysis (bool) – boolean value indicating whether model should run economic analysis.
- do_price_table (bool) – boolean value indicating whether a price table is included in the arguments and to be used or a price and interest rate is provided and to be used instead.
- price (float) – the price per Megatonne CO2 e at the base year.
- interest_rate (float) – the interest rate on the price per Megatonne CO2e, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
- price_table_uri (str) – if args['do_price_table'] is set to True, the provided CSV table is used in place of the initial price and interest rate inputs. The table contains the price per Megatonne CO2e sequestered for a given year, for all years from the original snapshot to the analysis year, if provided.
- discount_rate (float) – the discount rate on future valuations of sequestered carbon, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
Example Args:
args = {
    'workspace_dir': 'path/to/workspace/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lulc_lookup_uri',
    'lulc_transition_matrix_uri': 'path/to/lulc_transition_uri',
    'carbon_pool_initial_uri': 'path/to/carbon_pool_initial_uri',
    'carbon_pool_transient_uri': 'path/to/carbon_pool_transient_uri',
    'lulc_baseline_map_uri': 'path/to/baseline_map.tif',
    'lulc_transition_maps_list': [raster1_uri, raster2_uri, ...],
    'lulc_transition_years_list': [2000, 2005, ...],
    'analysis_year': 2100,
    'do_economic_analysis': '<boolean>',
    'do_price_table': '<boolean>',
    'price': '<float>',
    'interest_rate': '<float>',
    'price_table_uri': 'path/to/price_table',
    'discount_rate': '<float>'
}
Coastal Blue Carbon Preprocessor¶
natcap.invest.coastal_blue_carbon.preprocessor.execute(args)¶
Coastal Blue Carbon Preprocessor.
The preprocessor accepts a list of rasters and checks for cell-transitions across the rasters. The preprocessor outputs a CSV file representing a matrix of land cover transitions, each cell prefilled with a string indicating whether carbon accumulates or is disturbed as a result of the transition, if a transition occurs.
Parameters: - workspace_dir (string) – directory path to workspace
- results_suffix (string) – append to outputs directory name if provided
- lulc_lookup_uri (string) – filepath of lulc lookup table
- lulc_snapshot_list (list) – a list of filepaths to lulc rasters
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lookup.csv',
    'lulc_snapshot_list': ['path/to/raster1', 'path/to/raster2', ...]
}
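The per-cell check the preprocessor performs can be illustrated in plain Python: given two aligned lulc snapshots, collect every (source, destination) code pair that occurs. This is only a sketch of the idea; the real model reads GDAL rasters rather than nested lists, and cell_transitions is an illustrative name, not part of the API:

```python
def cell_transitions(snapshot_a, snapshot_b):
    """Return the set of (source, destination) lulc code pairs found
    between two equal-shaped snapshots, given as nested lists."""
    return set(
        (src, dst)
        for row_a, row_b in zip(snapshot_a, snapshot_b)
        for src, dst in zip(row_a, row_b))
```

Each pair in the result corresponds to one filled cell of the transition matrix CSV the preprocessor writes out.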
Coastal Vulnerability¶
natcap.invest.coastal_vulnerability.coastal_vulnerability.execute(args)¶
Coastal Vulnerability.
Parameters: - workspace_dir (string) – The path to the workspace directory on disk (required)
- aoi_uri (string) – Path to an OGR vector on disk representing the area of interest. (required)
- landmass_uri (string) – Path to an OGR vector on disk representing the global landmass. (required)
- bathymetry_uri (string) – Path to a GDAL raster on disk representing the bathymetry. Must overlap with the Area of Interest if provided. (optional)
- bathymetry_constant (int) – An int between 1 and 5 (inclusive). (optional)
- relief_uri (string) – Path to a GDAL raster on disk representing the elevation within the land polygon provided. (optional)
- relief_constant (int) – An int between 1 and 5 (inclusive). (optional)
- elevation_averaging_radius (int) – a positive int. The radius around which to compute the average elevation for relief. Must be in meters. (required)
- mean_sea_level_datum (int) – a positive int. This input is the elevation of the Mean Sea Level (MSL) datum relative to the datum of the bathymetry layer. The model transforms all depths to the MSL datum by subtracting the value provided by the user from the bathymetry. This input can be used to run the model for a future sea-level rise scenario. Must be in meters. (required)
- cell_size (int) – Cell size in meters. The higher the value, the faster the computation, but the coarser the output rasters produced by the model. (required)
- depth_threshold (int) – Depth in meters (integer) cutoff to determine if fetch rays project over deep areas. (optional)
- exposure_proportion (float) – Minimum proportion of rays that must project over exposed and/or deep areas for a shore segment to be classified as exposed. (required)
- geomorphology_uri (string) – An OGR-supported polygon vector file that has a field called “RANK” with values between 1 and 5 in the attribute table. (optional)
- geomorphology_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether.
- habitats_directory_uri (string) – Directory containing OGR-supported polygon vectors associated with natural habitats. The name of these shapefiles should be suffixed with the ID that is specified in the natural habitats CSV file provided along with the habitats (optional)
- habitats_csv_uri (string) – A CSV file listing the attributes for each habitat. For more information, see the ‘Habitat Data Layer’ section in the model’s documentation. (required if args['habitat_directory_uri'] is provided)
- habitat_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether. (optional)
- area_computed (string) – Determines whether the output data covers the entire coast or sheltered segments only. Either 'sheltered' or 'both'. (required)
- suffix (string) – A string that will be added to the end of the output file. (optional)
- climatic_forcing_uri (string) – An OGR-supported vector containing both wind and wave information across the region of interest. (optional)
- climatic_forcing_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether. (optional)
- continental_shelf_uri (string) – An OGR-supported polygon vector delineating edges of the continental shelf. Default is global continental shelf shapefile. If omitted, the user can specify depth contour. See entry below. (optional)
- depth_contour (int) – Used to delineate shallow and deep areas. Continental limit is at about 150 meters. (optional)
- sea_level_rise_uri (string) – An OGR-supported point or polygon vector file with features that have “Trend” fields in the attributes table. (optional)
- sea_level_rise_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether. (optional)
- structures_uri (string) – An OGR-supported vector file containing rigid structures, used to identify the portions of the coast that are armored. (optional)
- structures_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether. (optional)
- population_uri (string) – A GDAL-supported raster file representing the population. (required)
- urban_center_threshold (int) – Minimum population required to consider shore segment a population center. (required)
- additional_layer_uri (string) – An OGR-supported vector file representing sea level rise, which will be used in the computation of coastal vulnerability and coastal vulnerability without habitat. (optional)
- additional_layer_constant (int) – Integer value between 1 and 5. If the layer associated with this field is omitted, replace all shore points for this layer with a constant rank value in the computation of the coastal vulnerability index. If both the file and value for the layer are omitted, the layer is skipped altogether. (optional)
- rays_per_sector (int) – Number of rays used to subsample the fetch distance within each of the 16 sectors. (required)
- max_fetch (int) – Maximum fetch distance computed by the model (>=60,000m). (optional)
- spread_radius (int) – Integer multiple of ‘cell size’. The coast from geomorphology layer could be of a better resolution than the global landmass, so the shores do not necessarily overlap. To make them coincide, the shore from the geomorphology layer is widened by 1 or more pixels. The value should be a multiple of ‘cell size’ that indicates how many pixels the coast from the geomorphology layer is widened. The widening happens on each side of the coast (n pixels landward, and n pixels seaward). (required)
- population_radius (int) – Radius length in meters used to count the number of people living close to the coast. (optional)
Note
If neither args['bathymetry_uri'] nor args['bathymetry_constant'] is provided, bathymetry is ignored altogether.
If neither args['relief_uri'] nor args['relief_constant'] is provided, relief is ignored altogether.
If neither args['geomorphology_uri'] nor args['geomorphology_constant'] is provided, geomorphology is ignored altogether.
If neither args['climatic_forcing_uri'] nor args['climatic_forcing_constant'] is provided, climatic forcing is ignored altogether.
If neither args['sea_level_rise_uri'] nor args['sea_level_rise_constant'] is provided, sea level rise is ignored altogether.
If neither args['structures_uri'] nor args['structures_constant'] is provided, structures is ignored altogether.
If neither args['additional_layer_uri'] nor args['additional_layer_constant'] is provided, the additional layer option is ignored altogether.
Example args:
args = {
    u'additional_layer_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'aoi_uri': u'CoastalProtection/Input/AOI_BarkClay.shp',
    u'area_computed': u'both',
    u'bathymetry_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'cell_size': 1000,
    u'climatic_forcing_uri': u'CoastalProtection/Input/WaveWatchIII.shp',
    u'continental_shelf_uri': u'CoastalProtection/Input/continentalShelf.shp',
    u'depth_contour': 150,
    u'depth_threshold': 0,
    u'elevation_averaging_radius': 5000,
    u'exposure_proportion': 0.8,
    u'geomorphology_uri': u'CoastalProtection/Input/Geomorphology_BarkClay.shp',
    u'habitats_csv_uri': u'CoastalProtection/Input/NaturalHabitat_WCVI.csv',
    u'habitats_directory_uri': u'CoastalProtection/Input/NaturalHabitat',
    u'landmass_uri': u'Base_Data/Marine/Land/global_polygon.shp',
    u'max_fetch': 12000,
    u'mean_sea_level_datum': 0,
    u'population_radius': 1000,
    u'population_uri': u'Base_Data/Marine/Population/global_pop/w001001.adf',
    u'rays_per_sector': 1,
    u'relief_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'sea_level_rise_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'spread_radius': 250,
    u'structures_uri': u'CoastalProtection/Input/Structures_BarkClay.shp',
    u'urban_center_threshold': 5000,
    u'workspace_dir': u'coastal_vulnerability_workspace',
}
Returns: None
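The exposure_proportion logic described in the parameter list can be illustrated with a small sketch (the ray counts below are invented for illustration; the model itself derives fetch rays from the bathymetry and landmass inputs):

```python
# A shore segment is classified "exposed" when the fraction of fetch
# rays that project over exposed and/or deep water meets or exceeds
# the exposure_proportion threshold.
def classify_segment(rays_over_open_water, total_rays, exposure_proportion):
    fraction = rays_over_open_water / total_rays
    return "exposed" if fraction >= exposure_proportion else "sheltered"

print(classify_segment(14, 16, 0.8))  # 0.875 >= 0.8 -> "exposed"
print(classify_segment(10, 16, 0.8))  # 0.625 <  0.8 -> "sheltered"
```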
Crop Production¶
- natcap.invest.crop_production.crop_production.execute(args)¶
Crop Production.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['lookup_table'] (str) – filepath to a CSV table used to convert the crop code provided in the Crop Map to the crop name that can be used for searching through inputs and formatting outputs.
- args['aoi_raster'] (str) – a GDAL-supported raster representing a crop management scenario.
- args['dataset_dir'] (str) – the provided folder should contain a set of folders and data specified in the ‘Running the Model’ section of the model’s User Guide.
- args['yield_function'] (str) – the method used to compute crop yield. Must be one of: ‘observed’, ‘percentile’, or ‘regression’.
- args['percentile_column'] (str) – for percentile yield function, the table column name must be provided so that the program can fetch the correct yield values for each climate bin.
- args['fertilizer_dir'] (str) – path to folder that contains a set of GDAL-supported rasters representing the amount of Nitrogen (N), Phosphorous (P2O5), and Potash (K2O) applied to each area of land (kg/ha).
- args['irrigation_raster'] (str) – filepath to a GDAL-supported raster representing whether irrigation occurs or not. A zero value indicates that no irrigation occurs. A one value indicates that irrigation occurs. If any other values are provided, irrigation is assumed to occur within that cell area.
- args['compute_nutritional_contents'] (boolean) – if true, calculates nutrition from crop production and creates associated outputs.
- args['nutrient_table'] (str) – filepath to a CSV table containing information about the nutrient contents of each crop.
- args['compute_financial_analysis'] (boolean) – if true, calculates economic returns from crop production and creates associated outputs.
- args['economics_table'] (str) – filepath to a CSV table containing information related to market price of a given crop and the costs involved with producing that crop.
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': 'scenario_name',
    'lookup_table': 'path/to/lookup_table',
    'aoi_raster': 'path/to/aoi_raster',
    'dataset_dir': 'path/to/dataset_dir/',
    'yield_function': 'regression',
    'percentile_column': 'yield_95th',
    'fertilizer_dir': 'path/to/fertilizer_rasters_dir/',
    'irrigation_raster': 'path/to/is_irrigated_raster',
    'compute_nutritional_contents': True,
    'nutrient_table': 'path/to/nutrition_table',
    'compute_financial_analysis': True,
    'economics_table': 'path/to/economics_table',
}
Delineateit: Watershed Delineation¶
- natcap.invest.routing.delineateit.execute(args)¶
Delineateit: Watershed Delineation.
This ‘model’ provides an InVEST-based wrapper around the pygeoprocessing routing API for watershed delineation.
Upon successful completion, the following files are written to the output workspace:
snapped_outlets.shp - an ESRI shapefile with the points snapped to a nearby stream.
watersheds.shp - an ESRI shapefile of watersheds determined by the d-infinity routing algorithm.
stream.tif - a GeoTiff representing detected streams based on the provided flow_threshold parameter. Values of 1 are streams, values of 0 are not.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- results_suffix (string) – This text will be appended to the end of output files to help separate multiple runs. (optional)
- dem_uri (string) – A GDAL-supported raster file with an elevation for each cell. Make sure the DEM is corrected by filling in sinks, and if necessary burning hydrographic features into the elevation model (recommended when unusual streams are observed.) See the ‘Working with the DEM’ section of the InVEST User’s Guide for more information. (required)
- outlet_shapefile_uri (string) – This is a vector of points representing points that the watersheds should be built around. (required)
- flow_threshold (int) – The number of upstream cells that must flow into a cell before it is considered part of a stream, such that retention stops and the remaining export is exported to the stream. Used to define streams from the DEM. (required)
- snap_distance (int) – Pixel Distance to Snap Outlet Points (required)
Returns: None
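The role of flow_threshold can be illustrated with a toy flow-accumulation grid (the model derives the real grid from the DEM via the pygeoprocessing routing API; the numbers below are invented):

```python
# Cells whose upstream-cell count meets the threshold become stream (1),
# the rest become 0 -- matching the stream.tif output described above.
def streams_from_accumulation(flow_accum, flow_threshold):
    return [[1 if cell >= flow_threshold else 0 for cell in row]
            for row in flow_accum]

flow_accum = [[5, 120, 3],
              [80, 1500, 40]]  # number of upstream cells per pixel
print(streams_from_accumulation(flow_accum, 100))  # [[0, 1, 0], [0, 1, 0]]
```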
Finfish Aquaculture¶
- natcap.invest.finfish_aquaculture.finfish_aquaculture.execute(args)¶
Finfish Aquaculture.
This function will take care of preparing files passed into the finfish aquaculture model. It will handle all files/inputs associated with biophysical and valuation calculations and manipulations. It will create objects to be passed to the aquaculture_core.py module. It may write log, warning, or error messages to stdout.
Parameters: - workspace_dir (string) – The directory in which to place all result files.
- ff_farm_loc (string) – URI that points to a shape file of fishery locations
- farm_ID (string) – column heading used to describe individual farms. Used to link GIS location data to later inputs.
- g_param_a (float) – Growth parameter alpha, used in modeling fish growth, should be an int or float.
- g_param_b (float) – Growth parameter beta, used in modeling fish growth, should be an int or float.
- g_param_tau (float) – Growth parameter tau, used in modeling fish growth, should be an int or float
- use_uncertainty (boolean) –
- g_param_a_sd (float) – (description)
- g_param_b_sd (float) – (description)
- num_monte_carlo_runs (int) –
- water_temp_tbl (string) – URI to a CSV table where daily water temperature values are stored from one year
- farm_op_tbl (string) – URI to CSV table of static variables for calculations
- outplant_buffer (int) – This value will allow the outplanting start day to be flexible plus or minus the number of days specified here.
- do_valuation (boolean) – Boolean that indicates whether or not valuation should be performed on the aquaculture model
- p_per_kg (float) – Market price per kilogram of processed fish
- frac_p (float) – Fraction of market price that accounts for costs rather than profit
- discount (float) – Daily market discount rate
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'ff_farm_loc': 'path/to/shapefile',
    'farm_ID': 'FarmID',
    'g_param_a': 0.038,
    'g_param_b': 0.6667,
    'g_param_tau': 0.08,
    'use_uncertainty': True,
    'g_param_a_sd': 0.005,
    'g_param_b_sd': 0.05,
    'num_monte_carlo_runs': 1000,
    'water_temp_tbl': 'path/to/water_temp_tbl',
    'farm_op_tbl': 'path/to/farm_op_tbl',
    'outplant_buffer': 3,
    'do_valuation': True,
    'p_per_kg': 2.25,
    'frac_p': 0.3,
    'discount': 0.000192,
}
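To give a feel for how the growth parameters alpha, beta, and tau interact with daily water temperature, here is a plausible sketch of a daily growth update. The exact growth equation is specified in the InVEST User's Guide; treat the form below (increment scaling with weight**b and e**(temp*tau)) and the starting weight as illustrative assumptions only:

```python
import math

# Illustrative daily growth step combining g_param_a, g_param_b and
# g_param_tau with a daily water temperature. This is a sketch of the
# general shape of such a model, not the model's authoritative equation.
def grow(weight, temp_c, a=0.038, b=0.6667, tau=0.08):
    """One day of growth: increment scales with weight**b and e**(temp*tau)."""
    return weight + a * weight ** b * math.exp(temp_c * tau)

w = 1.0  # starting weight in kg (invented for illustration)
for temp in [8.0, 8.5, 9.0]:  # three days of water temperatures
    w = grow(w, temp)
print(round(w, 3))
```

The water_temp_tbl input supplies one such temperature per day for a full year, and the model iterates a growth update of this kind over the outplant cycle.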
Fisheries¶
- natcap.invest.fisheries.fisheries.execute(args, create_outputs=True)¶
Fisheries.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['aoi_uri'] (str) – location of shapefile which will be used as subregions for calculation. Each region must contain a ‘Name’ attribute (case-sensitive) matching the given name in the population parameters csv file.
- args['timesteps'] (int) – represents the number of time steps that the user desires the model to run.
- args['population_type'] (str) – specifies whether the model is age-specific or stage-specific. Options will be either “Age Specific” or “Stage Specific” and will change which equation is used in modeling growth.
- args['sexsp'] (str) – specifies whether or not the age and stage classes are distinguished by sex.
- args['harvest_units'] (str) – specifies how the user wants to get the harvest data. Options are either “Individuals” or “Weight”, and will change the harvest equation used in core. (Required if args[‘val_cont’] is True)
- args['do_batch'] (bool) – specifies whether program will perform a single model run or a batch (set) of model runs.
- args['population_csv_uri'] (str) – location of the population parameters csv. This will contain all age and stage specific parameters. (Required if args[‘do_batch’] is False)
- args['population_csv_dir'] (str) – location of the directory that contains the Population Parameters CSV files for batch processing (Required if args[‘do_batch’] is True)
- args['spawn_units'] (str) – (description)
- args['total_init_recruits'] (float) – represents the initial number of recruits that will be used in calculation of population on a per area basis.
- args['recruitment_type'] (str) – Name corresponding to one of the built-in recruitment functions {‘Beverton-Holt’, ‘Ricker’, ‘Fecundity’, ‘Fixed’}, or ‘Other’, meaning that the user is passing in their own recruitment function as an anonymous python function via the optional dictionary argument ‘recruitment_func’.
- args['recruitment_func'] (function) – Required if args[‘recruitment_type’] is set to ‘Other’. See below for instructions on how to create a user-defined recruitment function.
- args['alpha'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['beta'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['total_recur_recruits'] (float) – must exist within args for Fixed Recruitment. Parameter that will be used in calculation of recruitment.
- args['migr_cont'] (bool) – if True, model uses migration
- args['migration_dir'] (str) – if this parameter exists, it means migration is desired. This is the location of the parameters folder containing files for migration. There should be one file for every age class which migrates. (Required if args[‘migr_cont’] is True)
- args['val_cont'] (bool) – if True, model computes valuation
- args['frac_post_process'] (float) – represents the fraction of the species remaining after processing of the whole carcass is complete. This will exist only if valuation is desired for the particular species. (Required if args[‘val_cont’] is True)
- args['unit_price'] (float) – represents the price for a single unit of harvest. Exists only if valuation is desired. (Required if args[‘val_cont’] is True)
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': 'scenario_name',
    'aoi_uri': 'path/to/aoi_uri',
    'total_timesteps': 100,
    'population_type': 'Stage-Based',
    'sexsp': 'Yes',
    'harvest_units': 'Individuals',
    'do_batch': False,
    'population_csv_uri': 'path/to/csv_uri',
    'population_csv_dir': '',
    'spawn_units': 'Weight',
    'total_init_recruits': 100000.0,
    'recruitment_type': 'Ricker',
    'alpha': 32.4,
    'beta': 54.2,
    'total_recur_recruits': 92.1,
    'migr_cont': True,
    'migration_dir': 'path/to/mig_dir/',
    'val_cont': True,
    'frac_post_process': 0.5,
    'unit_price': 5.0,
}
Creating a User-Defined Recruitment Function
An optional argument has been created in the Fisheries Model to allow users proficient in Python to pass their own recruitment function into the program via the args dictionary.
Using the Beverton-Holt recruitment function as an example, here’s how a user might create and pass in their own recruitment function:
import natcap.invest
import numpy as np

# define input data
Matu = np.array([...])      # the Maturity vector in the Population Parameters File
Weight = np.array([...])    # the Weight vector in the Population Parameters File
LarvDisp = np.array([...])  # the LarvalDispersal vector in the Population Parameters File
alpha = 2.0  # scalar value
beta = 10.0  # scalar value
sexsp = 2    # 1 = not sex-specific, 2 = sex-specific

# create recruitment function
def spawners(N_prev):
    return (N_prev * Matu * Weight).sum()

def rec_func_BH(N_prev):
    N_0 = (LarvDisp * ((alpha * spawners(N_prev)) /
           (beta + spawners(N_prev))) / sexsp)
    return (N_0, spawners(N_prev))

# fill out args dictionary
args = {}
# ... define other arguments ...
args['recruitment_type'] = 'Other'      # lets program know to use user-defined function
args['recruitment_func'] = rec_func_BH  # pass recruitment function as 'anonymous' Python function

# run model
natcap.invest.fisheries.fisheries.execute(args)
Conditions that a new recruitment function must meet to run properly:
- The function must accept as an argument: a single numpy three-dimensional array (N_prev) representing the state of the population at the previous time step. N_prev has three dimensions: the indices of the first dimension correspond to the region (must be in same order as provided in the Population Parameters File), the indices of the second dimension represent the sex if it is specific (i.e. two indices representing female, then male if the model is ‘sex-specific’, else just a single zero index representing the female and male populations aggregated together), and the indices of the third dimension represent age/stage in ascending order.
- The function must return: a tuple of two values. The first value (N_0) is a single numpy one-dimensional array representing the youngest age of the population for the next time step. The indices of the array correspond to the regions of the population (outputted in same order as provided). If the model is sex-specific, it is currently assumed that males and females are produced in equal number and that the returned array has already been divided by 2 in the recruitment function. The second value (spawners) is the number or weight of the spawners created by the population from the previous time step, provided as a scalar float value (non-negative).
Example of How Recruitment Function Operates within Fisheries Model:
# input data
N_prev_xsa = [[[region0-female-age0, region0-female-age1],
               [region0-male-age0, region0-male-age1]],
              [[region1-female-age0, region1-female-age1],
               [region1-male-age0, region1-male-age1]]]

# execute function
N_0_x, spawners = rec_func(N_prev_xsa)

# output data - where N_0 contains information about the youngest
# age/stage of the population for the next time step:
N_0_x = [region0-age0, region1-age0]  # if sex-specific, rec_func should divide by two before returning
type(spawners) is float
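A small sanity check that a user-supplied recruitment function returns the tuple shape the model requires can help catch mistakes before a long model run. This helper is hypothetical (not part of natcap.invest), and uses plain Python lists as stand-ins for the numpy arrays:

```python
# Check that a recruitment function's return value matches the contract
# described above: a 2-tuple of (N_0 with one entry per region, a
# non-negative scalar float spawner count).
def check_recruitment_output(result, n_regions):
    n_0, spawners = result  # must unpack as a 2-tuple
    assert len(n_0) == n_regions, "N_0 must have one entry per region"
    assert isinstance(spawners, float) and spawners >= 0.0, \
        "spawners must be a non-negative float"
    return True

# a trivial stand-in recruitment function for a two-region model
def rec_func_stub(n_prev_xsa):
    return ([10.0, 12.0], 250.0)

print(check_recruitment_output(rec_func_stub(None), n_regions=2))  # True
```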
Fisheries: Habitat Scenario Tool¶
- natcap.invest.fisheries.fisheries_hst.execute(args)¶
Fisheries: Habitat Scenario Tool.
The Fisheries Habitat Scenario Tool generates a new Population Parameters CSV File with modified survival attributes across classes and regions based on habitat area changes and class-level dependencies on those habitats.
Parameters: - args['workspace_dir'] (str) – location into which the resultant modified Population Parameters CSV file should be placed.
- args['sexsp'] (str) – specifies whether or not the age and stage classes are distinguished by sex. Options: ‘Yes’ or ‘No’
- args['population_csv_uri'] (str) – location of the population parameters csv file. This file contains all age and stage specific parameters.
- args['habitat_chg_csv_uri'] (str) – location of the habitat change parameters csv file. This file contains habitat area change information.
- args['habitat_dep_csv_uri'] (str) – location of the habitat dependency parameters csv file. This file contains habitat-class dependency information.
- args['gamma'] (float) – describes the relationship between a change in habitat area and a change in survival of life stages dependent on that habitat
Returns: None
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'sexsp': 'Yes',
    'population_csv_uri': 'path/to/csv',
    'habitat_chg_csv_uri': 'path/to/csv',
    'habitat_dep_csv_uri': 'path/to/csv',
    'gamma': 0.5,
}
Note:
- Modified Population Parameters CSV File saved to ‘workspace_dir/output/’
# Parse, Verify Inputs
vars_dict = io.fetch_args(args)

# Convert Data
vars_dict = convert_survival_matrix(vars_dict)

# Generate Modified Population Parameters CSV File
io.save_population_csv(vars_dict)
- def convert_survival_matrix(vars_dict):
Creates a new survival matrix based on the information provided by the user related to habitat area changes and class-level dependencies on those habitats.
Args: vars_dict (dictionary): see fisheries_preprocessor_io.fetch_args for example
Returns: vars_dict (dictionary): modified vars_dict with a new Survival matrix, accessible using the key ‘Surv_nat_xsa_mod’, with element values that exist between [0, 1]
Example Returns:
ret = {
    # Other Variables...
    'Surv_nat_xsa_mod': np.ndarray([...]),
}
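The gamma relationship described above can be illustrated with a toy adjustment. This is an invented stand-in, not the model's exact equation (convert_survival_matrix in the source holds the authoritative formula): gamma links a relative habitat-area change to a survival change for classes that depend on that habitat, with results kept in [0, 1]:

```python
# Toy illustration: survival of a habitat-dependent class scales with
# the relative habitat change, weighted by gamma and the class's
# dependency on that habitat, then clamped to the valid [0, 1] range.
def adjust_survival(survival, habitat_change_frac, dependency, gamma):
    adjusted = survival * (1.0 + gamma * habitat_change_frac * dependency)
    return min(max(adjusted, 0.0), 1.0)

# 20% habitat loss, fully dependent class, gamma = 0.5
print(adjust_survival(0.8, -0.2, 1.0, 0.5))  # 0.8 * 0.9 = 0.72
```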
Forest Carbon Edge Effect¶
- natcap.invest.forest_carbon_edge_effect.execute(args)¶
Forest Carbon Edge Effect.
The InVEST Carbon Edge Model calculates carbon, accounting for edge effects, in tropical forest pixels.
Parameters: - args['workspace_dir'] (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['n_nearest_model_points'] (int) – number of nearest neighbor model points to search for
- args['aoi_uri'] (string) – (optional) if present, a path to a shapefile that will be used to aggregate carbon stock results at the end of the run.
- args['biophysical_table_uri'] (string) – a path to a CSV table that has at least the fields ‘lucode’ and ‘c_above’. If args['compute_forest_edge_effects'] == True, the table must also contain an ‘is_tropical_forest’ field. If args['pools_to_calculate'] == 'all', this table must contain the fields ‘c_below’, ‘c_dead’, and ‘c_soil’.
- lucode: an integer that corresponds to landcover codes in the raster args['lulc_uri']
- is_tropical_forest: either 0 or 1 indicating whether the landcover type is forest (1) or not (0). If 1, the value in c_above is ignored and instead calculated from the edge regression model.
- c_above: floating point number indicating tons of above ground carbon per hectare for that landcover type
- {'c_below', 'c_dead', 'c_soil'}: three other optional carbon pools that will statically map landcover types to the carbon densities in the table.
Example:
lucode,is_tropical_forest,c_above,c_soil,c_dead,c_below
0,0,32.8,5,5.2,2.1
1,1,n/a,2.5,0.0,0.0
2,1,n/a,1.8,1.0,0.0
16,0,28.1,4.3,0.0,2.0
Note that “n/a” values in c_above are acceptable since that field is ignored when is_tropical_forest==1.
- args['lulc_uri'] (string) – path to an integer landcover code raster
- args['pools_to_calculate'] (string) – one of “all” or “above_ground”. If “all” model expects ‘c_above’, ‘c_below’, ‘c_dead’, ‘c_soil’ in header of biophysical_table and will make a translated carbon map for each based off the landcover map. If “above_ground”, this is only done with ‘c_above’.
- args['compute_forest_edge_effects'] (boolean) – if True, requires biophysical table to have ‘is_tropical_forest’ forest field, and any landcover codes that have a 1 in this column calculate carbon stocks using the Chaplin-Kramer et. al method and ignore ‘c_above’.
- args['tropical_forest_edge_carbon_model_shape_uri'] (string) – path to a shapefile that defines the regions for the local carbon edge models. Has at least the fields ‘method’, ‘theta1’, ‘theta2’, ‘theta3’, where ‘method’ is an int between 1 and 3 describing the biomass regression model, and the thetas are floating point numbers whose meanings depend on the ‘method’ parameter. Specifically,
- method 1 (asymptotic model): biomass = theta1 - theta2 * exp(-theta3 * edge_dist_km)
- method 2 (logarithmic model): biomass = theta1 + theta2 * numpy.log(edge_dist_km)  # theta3 is ignored for this method
- method 3 (linear regression): biomass = theta1 + theta2 * edge_dist_km
- args['biomass_to_carbon_conversion_factor'] (string/float) – Number by which to multiply forest biomass to convert to carbon in the edge effect calculation.
Returns: None
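The three regression forms from the parameter description above, written out as one function (the formulas are taken directly from the docstring; the example theta values are invented):

```python
import math

# Evaluate the per-region biomass regression given a method code (1-3)
# and its theta parameters, as listed in the shapefile description.
def edge_biomass(method, theta1, theta2, theta3, edge_dist_km):
    if method == 1:    # asymptotic model
        return theta1 - theta2 * math.exp(-theta3 * edge_dist_km)
    elif method == 2:  # logarithmic model (theta3 is ignored)
        return theta1 + theta2 * math.log(edge_dist_km)
    elif method == 3:  # linear regression
        return theta1 + theta2 * edge_dist_km
    raise ValueError("method must be 1, 2, or 3")

print(edge_biomass(3, 100.0, 2.5, 0.0, 4.0))  # linear: 100 + 2.5 * 4 = 110.0
```

The model multiplies the resulting biomass by args['biomass_to_carbon_conversion_factor'] to obtain carbon.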
GLOBIO¶
- natcap.invest.globio.execute(args)¶
GLOBIO.
The model operates in two modes. Mode (a) generates a landcover map based on a base landcover map and information about crop yields, infrastructure, and more. Mode (b) assumes the GLOBIO landcover map has already been generated. These modes are used below to describe input parameters.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['predefined_globio'] (boolean) – if True then “mode (b)” else “mode (a)”
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['lulc_uri'] (string) – used in “mode (a)” path to a base landcover map with integer codes
- args['lulc_to_globio_table_uri'] (string) –
used in “mode (a)” path to table that translates the land-cover args[‘lulc_uri’] to intermediate GLOBIO classes, from which they will be further differentiated using the additional data in the model. Contains at least the following fields:
- ‘lucode’: Land use and land cover class code of the dataset used. LULC codes match the ‘values’ column in the LULC raster of mode (b) and must be numeric and unique.
- ‘globio_lucode’: The LULC code corresponding to the GLOBIO class to which it should be converted, using intermediate codes described in the example below.
- args['infrastructure_dir'] (string) – used in “mode (a) and (b)” a path to a folder containing maps of either gdal compatible rasters or OGR compatible shapefiles. These data will be used in the calculation of the infrastructure contribution to MSA.
- args['pasture_uri'] (string) – used in “mode (a)” path to pasture raster
- args['potential_vegetation_uri'] (string) – used in “mode (a)” path to potential vegetation raster
- args['pasture_threshold'] (float) – used in “mode (a)”
- args['intensification_fraction'] (float) – used in “mode (a)”; a value between 0 and 1 denoting proportion of total agriculture that should be classified as ‘high input’
- args['primary_threshold'] (float) – used in “mode (a)”
- args['msa_parameters_uri'] (string) – path to MSA classification parameters
- args['aoi_uri'] (string) – (optional) if it exists then final MSA raster is summarized by AOI
- args['globio_lulc_uri'] (string) – used in “mode (b)” path to predefined GLOBIO raster.
Returns: None
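Unlike the other models in this reference, the GLOBIO docstring gives no example args dictionary. A hypothetical "mode (b)" (predefined GLOBIO landcover) invocation might look like the following, with placeholder paths in the style of the other models' examples:

```python
# Hypothetical mode (b) args: predefined_globio=True means the GLOBIO
# landcover map is supplied directly, so the mode (a)-only inputs
# (lulc_uri, pasture_uri, thresholds, etc.) are not needed.
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'predefined_globio': True,  # selects mode (b)
    'results_suffix': '',
    'infrastructure_dir': 'path/to/infrastructure_dir/',
    'msa_parameters_uri': 'path/to/msa_parameters.csv',
    'aoi_uri': 'path/to/aoi.shp',  # optional; summarizes MSA by AOI
    'globio_lulc_uri': 'path/to/globio_lulc.tif',
}
# natcap.invest.globio.execute(args)
print(sorted(args))
```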
Habitat Quality¶
- natcap.invest.habitat_quality.habitat_quality.execute(args)¶
Habitat Quality.
Opens the files necessary to run the habitat_quality model.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation (required)
- landuse_cur_uri (string) – a uri to an input land use/land cover raster (required)
- landuse_fut_uri (string) – a uri to an input land use/land cover raster (optional)
- landuse_bas_uri (string) – a uri to an input land use/land cover raster (optional, but required for rarity calculations)
- threat_folder (string) – a uri to the directory that will contain all threat rasters (required)
- threats_uri (string) – a uri to an input CSV containing data of all the considered threats. Each row is a degradation source and each column a different attribute of the source with the following names: ‘THREAT’,’MAX_DIST’,’WEIGHT’ (required).
- access_uri (string) – a uri to an input polygon shapefile containing data on the relative protection against threats (optional)
- sensitivity_uri (string) – a uri to an input CSV file of LULC types, whether they are considered habitat, and their sensitivity to each threat (required)
- half_saturation_constant (float) – a python float that determines the spread and central tendency of habitat quality scores (required)
- suffix (string) – a python string that will be inserted into all raster uri paths just before the file extension.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'landuse_cur_uri': 'path/to/landuse_cur_raster',
    'landuse_fut_uri': 'path/to/landuse_fut_raster',
    'landuse_bas_uri': 'path/to/landuse_bas_raster',
    'threat_raster_folder': 'path/to/threat_rasters/',
    'threats_uri': 'path/to/threats_csv',
    'access_uri': 'path/to/access_shapefile',
    'sensitivity_uri': 'path/to/sensitivity_csv',
    'half_saturation_constant': 0.5,
    'suffix': '_results',
}
Returns: None
Habitat Risk Assessment¶
-
natcap.invest.habitat_risk_assessment.hra.
execute
(args)¶ Habitat Risk Assessment.
This function will prepare files passed from the UI to be sent on to the hra_core module.
All inputs are required.
Parameters: - workspace_dir (string) – The location of the directory into which intermediate and output files should be placed.
- csv_uri (string) – The location of the directory containing the CSV files of habitat, stressor, and overlap ratings. Will also contain a .txt JSON file that has directory locations (potentially) for habitats, species, stressors, and criteria.
- grid_size (int) – Represents the desired pixel dimensions of both intermediate and output rasters.
- risk_eq (string) – A string identifying the equation that should be used in calculating risk scores for each H-S overlap cell. This will be either ‘Euclidean’ or ‘Multiplicative’.
- decay_eq (string) – A string identifying the equation that should be used in calculating the decay of stressor buffer influence. This can be ‘None’, ‘Linear’, or ‘Exponential’.
- max_rating (int) – An int representing the highest potential value that should be represented in rating, data quality, or weight in the CSV table.
- max_stress (int) – This is the highest score that is used to rate a criterion within this model run. These values would be placed within the Rating column of the habitat, species, and stressor CSVs.
- aoi_tables (string) – A shapefile containing one or more planning regions for a given model. This will be used to get the average risk value over a larger area. Each potential region MUST contain the attribute “name” as a way of identifying each individual shape.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'csv_uri': 'path/to/csv',
    'grid_size': 200,
    'risk_eq': 'Euclidean',
    'decay_eq': 'None',
    'max_rating': 3,
    'max_stress': 4,
    'aoi_tables': 'path/to/shapefile',
}
Returns: None
Habitat Risk Assessment Preprocessor¶
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
execute
(args)¶ Habitat Risk Assessment Preprocessor.
Reads in multiple habitat/stressor directories, in addition to named criteria, and produces an appropriately formatted CSV file.
Parameters: - args['workspace_dir'] (string) – The directory to dump the output CSV files to. (required)
- args['habitats_dir'] (string) – A directory of shapefiles that are habitats. This is not required, and may not exist if there is a species layer directory. (optional)
- args['species_dir'] (string) – Directory which holds all species shapefiles, but may or may not exist if there is a habitats layer directory. (optional)
- args['stressors_dir'] (string) – A directory of ArcGIS shapefiles that are stressors. (required)
- args['exposure_crits'] (list) – list containing string names of exposure criteria (hab-stress) which should be applied to the exposure score. (optional)
- args['sensitivity_crits'] (list) – List containing string names of sensitivity (habitat-stressor overlap specific) criteria which should be applied to the consequence score. (optional)
- args['resilience_crits'] (list) – List containing string names of resilience (habitat or species-specific) criteria which should be applied to the consequence score. (optional)
- args['criteria_dir'] (string) – Directory which holds the criteria shapefiles. May not exist if the user does not desire criteria shapefiles. This needs to be in a VERY specific format, which shall be described in the user’s guide. (optional)
Returns: None
This function creates a series of CSVs within
args['workspace_dir']
. There will be one CSV for every habitat/species. These files will contain information relevant to each habitat or species, including all criteria. The criteria will be broken up into those which apply only to the habitat, and those which apply to the overlap of that habitat and each stressor. A JSON file containing variables that need to be passed on to hra non-core when that gets run is also created. It will live inside the preprocessor folder which will be created in
args['workspace_dir']
. It will contain habitats_dir, species_dir, stressors_dir, and criteria_dir.
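As a sketch, an args dictionary for the preprocessor might be assembled as below; every path and criteria name is a hypothetical placeholder, not a value prescribed by the model.

```python
# Hypothetical args for hra_preprocessor.execute(); all paths and
# criteria names are placeholders to be replaced with real data.
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'habitats_dir': 'path/to/habitat_shapefiles',
    'stressors_dir': 'path/to/stressor_shapefiles',
    'exposure_crits': ['management effectiveness', 'intensity rating'],
    'sensitivity_crits': ['temporal overlap', 'frequency of disturbance'],
    'resilience_crits': ['recruitment rate', 'natural mortality'],
}
# from natcap.invest.habitat_risk_assessment import hra_preprocessor
# hra_preprocessor.execute(args)  # writes one CSV per habitat/species
```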
Habitat Suitability¶
-
natcap.invest.habitat_suitability.
execute
(args)¶ Habitat Suitability.
Calculate habitat suitability indexes given biophysical parameters.
The objective of a habitat suitability index (HSI) is to help users identify areas within their AOI that would be most suitable for habitat restoration. The output is a gridded map of the user’s AOI in which each grid cell is assigned a suitability rank between 0 (not suitable) and 1 (most suitable). The suitability rank is generally calculated as the weighted geometric mean of several individual input criteria, which have also been ranked by suitability from 0-1. Habitat types (e.g. marsh, mangrove, coral, etc.) are treated separately, and each habitat type will have a unique set of relevant input criteria and a resultant habitat suitability map.
Parameters: - args['workspace_dir'] (string) – directory path to workspace directory for output files.
- args['results_suffix'] (string) – (optional) string to append to any output file names.
- args['aoi_path'] (string) – file path to an area of interest shapefile.
- args['exclusion_path_list'] (list) – (optional) a list of file paths to shapefiles which define areas which the HSI should be masked out in a final output.
- args['output_cell_size'] (float) – (optional) size of output cells. If not present, the output size will snap to the smallest cell size in the HSI range rasters.
- args['habitat_threshold'] (float) – a value to threshold the habitat score values to 0 and 1.
- args['hsi_ranges'] (dict) –
a dictionary that describes the habitat biophysical base rasters as well as the ranges for optimal and tolerable values. Each biophysical value has a unique key in the dictionary that is used to name the mapping of biophysical to local HSI value. Each value is dictionary with keys:
- ‘raster_path’: path to disk for biophysical raster.
- ‘range’: a 4-tuple in non-decreasing order describing the “tolerable” to “optimal” ranges for those biophysical values. The endpoints non-inclusively define where the suitability score is 0.0, the two midpoints inclusively define the range where the suitability is 1.0, and the ranges above and below are linearly interpolated between 0.0 and 1.0.
Example:
{
    'depth': {
        'raster_path': r'C:/path/to/depth.tif',
        'range': (-50, -30, -10, -10),
    },
    'temperature': {
        'raster_path': r'C:/path/to/temperature.tif',
        'range': (5, 7, 12.5, 16),
    },
}
Returns: None
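The 'range' semantics can be illustrated with a small sketch. This helper is not part of the API; it is just a direct reading of the 4-tuple rules (0.0 at or beyond the endpoints, 1.0 between the midpoints, linear interpolation in between).

```python
def suitability(value, range_tuple):
    # range_tuple = (r0, r1, r2, r3), non-decreasing.
    # Score is 1.0 on [r1, r2], 0.0 at or beyond r0 and r3,
    # and linearly interpolated on (r0, r1) and (r2, r3).
    r0, r1, r2, r3 = range_tuple
    if r1 <= value <= r2:
        return 1.0
    if value <= r0 or value >= r3:
        return 0.0
    if value < r1:
        return (value - r0) / float(r1 - r0)
    return (r3 - value) / float(r3 - r2)

# With the 'depth' range (-50, -30, -10, -10), a depth of -40 falls
# halfway up the tolerable-to-optimal ramp.
print(suitability(-40, (-50, -30, -10, -10)))  # 0.5
```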
Managed Timber Production¶
-
natcap.invest.timber.timber.
execute
(args)¶ Managed Timber Production.
Invoke the timber model given uri inputs specified by the user guide.
Parameters: - args['workspace_dir'] (string) – The file location where the outputs will be written (Required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['timber_shape_uri'] (string) – The shapefile describing timber parcels with fields as described in the user guide (Required)
- args['attr_table_uri'] (string) – The CSV attribute table location with fields that describe polygons in timber_shape_uri (Required)
- args['market_disc_rate'] (float) – The market discount rate (Required)
Returns: None
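A sketch of a complete args dictionary for this model; the paths and discount rate below are placeholders.

```python
# Placeholder args for timber.execute(); replace paths with real data.
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'results_suffix': '',
    'timber_shape_uri': 'path/to/timber_parcels.shp',
    'attr_table_uri': 'path/to/attr_table.csv',
    'market_disc_rate': 7.0,  # placeholder value
}
# from natcap.invest.timber import timber
# timber.execute(args)
```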
Marine Water Quality¶
-
natcap.invest.marine_water_quality.marine_water_quality_biophysical.
execute
(args)¶ Marine Water Quality.
Main entry point for the InVEST 3.0 marine water quality biophysical model.
Parameters: - args['workspace_dir'] (string) – Directory to place outputs
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['aoi_poly_uri'] (string) – OGR polygon Datasource indicating region of interest to run the model. Will define the grid.
- args['land_poly_uri'] (string) – OGR polygon DataSource indicating areas where land is.
- args['pixel_size'] (float) – float indicating pixel size in meters of output grid.
- args['layer_depth'] (float) – float indicating the depth of the grid cells in meters.
- args['source_points_uri'] (string) – OGR point Datasource indicating point sources of pollution.
- args['source_point_data_uri'] (string) – csv file indicating the biophysical properties of the point sources.
- args['kps'] (float) – float indicating decay rate of pollutant (kg/day)
- args['tide_e_points_uri'] (string) – OGR point Datasource with spatial information about the E parameter
- args['adv_uv_points_uri'] (string) – optional OGR point Datasource with spatial advection u and v vectors.
Returns: None
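A sketch of the full args dictionary; every path and numeric value here is illustrative only.

```python
# Placeholder args for the marine water quality biophysical model.
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'aoi_poly_uri': 'path/to/aoi.shp',
    'land_poly_uri': 'path/to/land.shp',
    'pixel_size': 100.0,   # meters; placeholder
    'layer_depth': 1.0,    # meters; placeholder
    'source_points_uri': 'path/to/source_points.shp',
    'source_point_data_uri': 'path/to/source_point_data.csv',
    'kps': 0.001,          # pollutant decay rate; placeholder
    'tide_e_points_uri': 'path/to/tide_e_points.shp',
    'adv_uv_points_uri': 'path/to/adv_uv_points.shp',  # optional
}
# from natcap.invest.marine_water_quality import (
#     marine_water_quality_biophysical)
# marine_water_quality_biophysical.execute(args)
```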
Nutrient Delivery Ratio¶
-
natcap.invest.ndr.ndr.
execute
(args)¶ Nutrient Delivery Ratio.
Parameters: - args['workspace_dir'] (string) – path to current workspace
- args['dem_uri'] (string) – path to digital elevation map raster
- args['lulc_uri'] (string) – a path to landcover map raster
- args['runoff_proxy_uri'] (string) – a path to a runoff proxy raster
- args['watersheds_uri'] (string) – path to the watershed shapefile
- args['biophysical_table_uri'] (string) –
path to csv table on disk containing nutrient retention values.
For each nutrient type [t] in args[‘calc_[t]’] that is true, must contain the following headers:
‘load_[t]’, ‘eff_[t]’, ‘crit_len_[t]’
If args[‘calc_n’] is True, must also contain the header ‘proportion_subsurface_n’ field.
- args['calc_p'] (boolean) – if True, phosphorous is modeled, additionally if True then biophysical table must have p fields in them
- args['calc_n'] (boolean) – if True nitrogen will be modeled, additionally biophysical table must have n fields in them.
- args['results_suffix'] (string) – (optional) a text field to append to all output files
- args['threshold_flow_accumulation'] – a number representing the flow accumulation threshold, in upstream pixels, used to classify streams.
- args['_prepare'] – (optional) The preprocessed set of data created by the ndr._prepare call. This argument can be used in cases where the call to this function is scripted and can save a significant amount of DEM processing runtime.
Returns: None
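The biophysical-table header requirements depend on the calc_* flags. This hypothetical helper (not part of the API) spells out that dependency:

```python
def required_headers(calc_p, calc_n):
    # For each enabled nutrient [t], the table needs load_[t], eff_[t]
    # and crit_len_[t]; nitrogen additionally needs
    # proportion_subsurface_n.
    headers = []
    for nutrient, enabled in (('p', calc_p), ('n', calc_n)):
        if enabled:
            headers += ['load_' + nutrient, 'eff_' + nutrient,
                        'crit_len_' + nutrient]
    if calc_n:
        headers.append('proportion_subsurface_n')
    return headers

print(required_headers(True, False))  # ['load_p', 'eff_p', 'crit_len_p']
```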
Overlap Analysis¶
-
natcap.invest.overlap_analysis.overlap_analysis.
execute
(args)¶ Overlap Analysis.
This function will take care of preparing files passed into the overlap analysis model. It will handle all files/inputs associated with calculations and manipulations. It may write log, warning, or error messages to stdout.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_uri'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['grid_size'] (int) – This is an int specifying how large the gridded squares over the shapefile should be. (required)
- args['overlap_data_dir_uri'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
- args['do_inter'] (bool) – Boolean that indicates whether or not inter-activity weighting is desired. This will decide if the overlap table will be created. (required)
- args['do_intra'] (bool) – Boolean which indicates whether or not intra-activity weighting is desired. This will pull attributes from the shapefiles passed in via ‘zone_layer_uri’. (required)
- args['do_hubs'] (bool) – Boolean which indicates if human use hubs are desired. (required)
- args['overlap_layer_tbl'] (string) – URI to a CSV file that holds relational data and identifier data for all layers being passed in within the overlap analysis directory. (optional)
- args['intra_name'] (string) – string which corresponds to a field within the layers being passed in within overlap analysis directory. This is the intra-activity importance for each activity. (optional)
- args['hubs_uri'] (string) – The location of the shapefile containing points for human use hub calculations. (optional)
- args['decay_amt'] (float) – A double representing the decay rate of value from the human use hubs. (optional)
Returns: None
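A minimal sketch of the required inputs, with all weighting options disabled; paths and the grid size are placeholders.

```python
# Placeholder args for overlap_analysis.execute().
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'zone_layer_uri': 'path/to/zones.shp',
    'grid_size': 1000,  # placeholder grid-square size
    'overlap_data_dir_uri': 'path/to/activity_shapefiles',
    'do_inter': False,  # underscore spelling, matching do_intra/do_hubs
    'do_intra': False,
    'do_hubs': False,
}
# from natcap.invest.overlap_analysis import overlap_analysis
# overlap_analysis.execute(args)
```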
Overlap Analysis: Management Zones¶
-
natcap.invest.overlap_analysis.overlap_analysis_mz.
execute
(args)¶ Overlap Analysis: Management Zones.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_loc'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['overlap_data_dir_loc'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
Returns: None
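The management-zones variant takes only three inputs; a placeholder sketch:

```python
# Placeholder args for overlap_analysis_mz.execute().
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'zone_layer_loc': 'path/to/zones.shp',
    'overlap_data_dir_loc': 'path/to/activity_shapefiles',
}
# from natcap.invest.overlap_analysis import overlap_analysis_mz
# overlap_analysis_mz.execute(args)
```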
Pollinator Abundance: Crop Pollination¶
-
natcap.invest.pollination.pollination.
execute
(args)¶ Pollinator Abundance: Crop Pollination.
Execute the pollination model from the topmost, user-accessible level.
Parameters: - workspace_dir (string) – a URI to the workspace folder. Not required to exist on disk. Additional folders will be created inside of this folder. If there are any file name collisions, this model will overwrite those files.
- landuse_cur_uri (string) – a URI to a GDAL raster on disk for the current land use scenario.
- landuse_attributes_uri (string) – a URI to a CSV on disk. See the model’s documentation for details on the structure of this table.
- landuse_fut_uri (string) – (Optional) a URI to a GDAL dataset on disk. If this args dictionary entry is provided, this model will process both the current and future scenarios.
- do_valuation (boolean) – Indicates whether the model should include valuation. This applies to all scenarios.
- half_saturation (float) – a number between 0 and 1 indicating the half-saturation constant. See the pollination documentation for more information.
- wild_pollination_proportion (float) – a number between 0 and 1 indicating the proportion of all pollinators that are wild. See the pollination documentation for more information.
- guilds_uri (string) – a URI to a CSV on disk. See the model’s documentation for details on the structure of this table.
- ag_classes (string) – (Optional) a space-separated list of land cover classes that are to be considered as agricultural. If this input is not provided, all land cover classes are considered to be agricultural.
- farms_shapefile (string) – (Optional) shapefile containing points representing data collection points on the landscape.
- results_suffix (string) – inserted into the URI of each file created by this model, right before the file extension.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'landuse_cur_uri': 'path/to/raster',
    'landuse_attributes_uri': 'path/to/csv',
    'landuse_fut_uri': 'path/to/raster',
    'do_valuation': True,
    'half_saturation': 0.5,
    'wild_pollination_proportion': 1.0,
    'guilds_uri': 'path/to/csv',
    'ag_classes': '91 92 93',
    'farms_shapefile': 'path/to/shapefile',
    'results_suffix': '_results',
}
The following args dictionary entries are optional, and will affect the behavior of the model if provided:
- landuse_fut_uri
- ag_classes
- results_suffix
- farms_shapefile
If args[‘do_valuation’] is set to True, the following args dictionary entries are also required:
- half_saturation
- wild_pollination_proportion
This function has no return value, though it does save a number of rasters to disk. See the user’s guide for details.
Recreation¶
-
natcap.invest.recreation.recmodel_client.
execute
(args)¶ Recreation.
Execute recreation client model on remote server.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['aoi_path'] (string) – path to AOI vector
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['start_year'] (string) – start year in form YYYY. This year is the inclusive lower bound to consider points in the PUD and regression.
- args['end_year'] (string) – end year in form YYYY. This year is the inclusive upper bound to consider points in the PUD and regression.
- args['grid_aoi'] (boolean) – if true the polygon vector in args[‘aoi_path’] should be gridded into a new vector and the recreation model should be executed on that
- args['grid_type'] (string) – optional, but must exist if args[‘grid_aoi’] is True. Is one of ‘hexagon’ or ‘square’ and indicates the style of gridding.
- args['cell_size'] (string/float) – optional, but must exist if args[‘grid_aoi’] is True. Indicates the cell size of square pixels and the width of the horizontal axis for the hexagonal cells.
- args['compute_regression'] (boolean) – if True, then process the predictor table and scenario table (if present).
- args['predictor_table_path'] (string) –
required if args[‘compute_regression’] is True. Path to a table that describes the regression predictors, their IDs and types. Must contain the fields ‘id’, ‘path’, and ‘type’ where:
- ‘id’: is a <=10 character length ID that is used to uniquely describe the predictor. It will be added to the output result shapefile attribute table which is an ESRI Shapefile, thus limited to 10 characters.
- ‘path’: an absolute or relative (to this table) path to the predictor dataset, either a vector or raster type.
- ‘type’: one of the following,
- ‘raster_mean’: mean of values in the raster under the response polygon
- ‘raster_sum’: sum of values in the raster under the response polygon
- ‘point_count’: count of the points contained in the response polygon
- ‘point_nearest_distance’: distance to the nearest point from the response polygon
- ‘line_intersect_length’: length of lines that intersect with the response polygon in projected units of AOI
- ‘polygon_area’: area of the polygon contained within response polygon in projected units of AOI
- args['scenario_predictor_table_path'] (string) – (optional) if present runs the scenario mode of the recreation model with the datasets described in the table on this path. Field headers are identical to args[‘predictor_table_path’] and ids in the table are required to be identical to the predictor list.
- args['results_suffix'] (string) – optional, if exists is appended to any output file paths.
Returns: None
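A hedged sketch of an args dictionary for a gridded run; the hostname and port below are fictional placeholders, and the real server details come from the User’s Guide.

```python
# Placeholder args for recmodel_client.execute(); the hostname is not
# a real server.
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'aoi_path': 'path/to/aoi.shp',
    'hostname': 'recreation.example.org',
    'port': 54322,  # placeholder
    'start_year': '2005',
    'end_year': '2014',
    'grid_aoi': True,
    'grid_type': 'hexagon',
    'cell_size': 5000.0,
    'compute_regression': False,
}
# The years form an inclusive range, so a quick sanity check before
# contacting the server:
assert int(args['start_year']) <= int(args['end_year'])
# from natcap.invest.recreation import recmodel_client
# recmodel_client.execute(args)
```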
RouteDEM: D-Infinity Routing¶
-
natcap.invest.routing.routedem.
execute
(args)¶ RouteDEM: D-Infinity Routing.
This model exposes the pygeoprocessing d-infinity routing functionality in the InVEST model API.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- dem_uri (string) – A GDAL-supported raster file containing a base Digital Elevation Model to execute the routing functionality across. (required)
- pit_filled_filename (string) – The filename of the output raster with pits filled in. It will go in the project workspace. (required)
- flow_direction_filename (string) – The filename of the flow direction raster. It will go in the project workspace. (required)
- flow_accumulation_filename (string) – The filename of the flow accumulation raster. It will go in the project workspace. (required)
- threshold_flow_accumulation (int) – The number of upstream cells that must flow into a cell before it’s classified as a stream. (required)
- multiple_stream_thresholds (bool) – Set to True to calculate multiple maps. If enabled, set the stream threshold to the lowest amount, then set upper and step size thresholds. (optional)
- threshold_flow_accumulation_upper (int) – The number of upstream pixels that must flow into a cell before it’s classified as a stream. (required)
- threshold_flow_accumulation_stepsize (int) – The number of cells to step up from lower to upper threshold range. (required)
- calculate_slope (bool) – Set to True to output a slope raster. (optional)
- slope_filename (string) – The filename of the output slope raster. This will go in the project workspace. (required)
- calculate_downstream_distance (bool) – Select to calculate a distance stream raster, based on the upper threshold limit. (optional)
- downstream_distance_filename (string) – The filename of the output raster. It will go in the project workspace. (required)
Returns: None
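A sketch of the args dictionary; output filenames are basenames written into the workspace, and all values are placeholders.

```python
# Placeholder args for routedem.execute().
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'dem_uri': 'path/to/dem.tif',
    'pit_filled_filename': 'pit_filled.tif',
    'flow_direction_filename': 'flow_direction.tif',
    'flow_accumulation_filename': 'flow_accumulation.tif',
    'threshold_flow_accumulation': 1000,  # placeholder
    'multiple_stream_thresholds': False,
    'calculate_slope': True,
    'slope_filename': 'slope.tif',
    'calculate_downstream_distance': False,
}
# from natcap.invest.routing import routedem
# routedem.execute(args)
```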
Scenario Generator: Proximity-Based¶
-
natcap.invest.scenario_gen_proximity.
execute
(args)¶ Scenario Generator: Proximity-Based.
Main entry point for proximity based scenario generator model.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['base_lulc_uri'] (string) – path to the base landcover map
- args['replacment_lucode'] (string or int) – code to replace when converting pixels
- args['area_to_convert'] (string or float) – max area (Ha) to convert
- args['focal_landcover_codes'] (string) – a space separated string of landcover codes that are used to determine the proximity when referring to “towards” or “away” from the base landcover codes
- args['convertible_landcover_codes'] (string) – a space separated string of landcover codes that can be converted in the generation phase found in args[‘base_lulc_uri’].
- args['n_fragmentation_steps'] (string) – an int as a string indicating the number of steps to take for the fragmentation conversion
- args['aoi_uri'] (string) – (optional) path to a shapefile that indicates an area of interest. If present, the expansion scenario operates only under that AOI and the output raster is clipped to that shape.
- args['convert_farthest_from_edge'] (boolean) – if True will run the conversion simulation starting from the farthest pixel from the edge and work inwards. Workspace will contain output files named ‘toward_base{suffix}.{tif,csv}’.
- args['convert_nearest_to_edge'] (boolean) – if True will run the conversion simulation starting from the nearest pixel on the edge and work inwards. Workspace will contain output files named ‘toward_base{suffix}.{tif,csv}’.
Returns: None.
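A sketch of the args dictionary; note that, per the parameter list, several numeric inputs are passed as strings. All values are placeholders.

```python
# Placeholder args for scenario_gen_proximity.execute().
args = {
    'workspace_dir': 'path/to/workspace_dir',
    'results_suffix': '',
    'base_lulc_uri': 'path/to/base_lulc.tif',
    'replacment_lucode': '12',    # (sic) key spelling from the API
    'area_to_convert': '1000.0',  # hectares; placeholder
    'focal_landcover_codes': '1 2 3',
    'convertible_landcover_codes': '4 5',
    'n_fragmentation_steps': '1',
    'convert_farthest_from_edge': True,
    'convert_nearest_to_edge': False,
}
# from natcap.invest import scenario_gen_proximity
# scenario_gen_proximity.execute(args)
```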
Scenario Generator: Rule-Based¶
-
natcap.invest.scenario_generator.scenario_generator.
execute
(args)¶ Scenario Generator: Rule-Based.
Model entry-point.
Parameters: - workspace_dir (str) – path to workspace directory
- suffix (str) – string to append to output files
- landcover (str) – path to land-cover raster
- transition (str) – path to land-cover attributes table
- calculate_priorities (bool) – whether to calculate priorities
- priorities_csv_uri (str) – path to priority csv table
- calculate_proximity (bool) – whether to calculate proximity
- proximity_weight (float) – weight given to proximity
- calculate_transition (bool) – whether to specify transitions
- calculate_factors (bool) – whether to use suitability factors
- suitability_folder (str) – path to suitability folder
- suitability (str) – path to suitability factors table
- weight (float) – suitability factor weight
- factor_inclusion (int) – the rasterization method – all touched or center points
- factors_field_container (bool) – whether to use suitability factor inputs
- calculate_constraints (bool) – whether to use constraint inputs
- constraints (str) – filepath to constraints shapefile layer
- constraints_field (str) – shapefile field containing constraints field
- override_layer (bool) – whether to use override layer
- override (str) – path to override shapefile
- override_field (str) – shapefile field containing override value
- override_inclusion (int) – the rasterization method
Example Args:
args = {
    'workspace_dir': 'path/to/dir',
    'suffix': '',
    'landcover': 'path/to/raster',
    'transition': 'path/to/csv',
    'calculate_priorities': True,
    'priorities_csv_uri': 'path/to/csv',
    'calculate_proximity': True,
    'calculate_transition': True,
    'calculate_factors': True,
    'suitability_folder': 'path/to/dir',
    'suitability': 'path/to/csv',
    'weight': 0.5,
    'factor_inclusion': 0,
    'factors_field_container': True,
    'calculate_constraints': True,
    'constraints': 'path/to/shapefile',
    'constraints_field': '',
    'override_layer': True,
    'override': 'path/to/shapefile',
    'override_field': '',
    'override_inclusion': 0,
}
Added Afterwards:
d = {
    'proximity_weight': 0.3,
    'distance_field': '',
    'transition_id': 'ID',
    'percent_field': 'Percent Change',
    'area_field': 'Area Change',
    'priority_field': 'Priority',
    'proximity_field': 'Proximity',
    'suitability_id': '',
    'suitability_layer': '',
    'suitability_field': '',
}
Scenic Quality¶
-
natcap.invest.scenic_quality.scenic_quality.
execute
(args)¶ Scenic Quality.
Warning
The Scenic Quality model is under active development and is currently unstable.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- aoi_uri (string) – An OGR-supported vector file. This AOI instructs the model where to clip the input data and the extent of analysis. Users will create a polygon feature layer that defines their area of interest. The AOI must intersect the Digital Elevation Model (DEM). (required)
- cell_size (float) – Length (in meters) of each side of the (square) cell. (optional)
- structure_uri (string) – An OGR-supported vector file. The user must specify a point feature layer that indicates locations of objects that contribute to negative scenic quality, such as aquaculture netpens or wave energy facilities. In order for the viewshed analysis to run correctly, the projection of this input must be consistent with the projection of the DEM input. (required)
- dem_uri (string) – A GDAL-supported raster file. An elevation raster layer is required to conduct viewshed analysis. Elevation data allows the model to determine areas within the AOI’s land-seascape where point features contributing to negative scenic quality are visible. (required)
- refraction (float) – The earth curvature correction option corrects for the curvature of the earth and refraction of visible light in air. Changes in air density curve the light downward causing an observer to see further and the earth to appear less curved. While the magnitude of this effect varies with atmospheric conditions, a standard rule of thumb is that refraction of visible light reduces the apparent curvature of the earth by one-seventh. By default, this model corrects for the curvature of the earth and sets the refractivity coefficient to 0.13. (required)
- pop_uri (string) – A GDAL-supported raster file. A population raster layer is required to determine population within the AOI’s land-seascape where point features contributing to negative scenic quality are visible and not visible. (optional)
- overlap_uri (string) – An OGR-supported vector file. The user has the option of providing a polygon feature layer where they would like to determine the impact of objects on visual quality. This input must be a polygon and projected in meters. The model will use this layer to determine what percent of the total area of each polygon feature can see at least one of the point features impacting scenic quality. (optional)
- valuation_function (string) – Either ‘polynomial’ or ‘logarithmic’. This field indicates the functional form f(x) the model will use to value the visual impact for each viewpoint. For distances less than 1 km (x<1), the model uses a linear form g(x) where the line passes through f(1) (i.e. g(1) == f(1)) and extends to zero with the same slope as f(1) (i.e. g’(x) == f’(1)). (optional)
- a_coefficient (float) – First coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- b_coefficient (float) – Second coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- c_coefficient (float) – Third coefficient for the polynomial’s quadratic term. (required)
- d_coefficient (float) – Fourth coefficient for the polynomial’s cubic exponent. (required)
- max_valuation_radius (float) – Radius beyond which the valuation is set to zero. The valuation function ‘f’ cannot be negative at the radius ‘r’ (f(r)>=0). (required)
Returns: None
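To make the valuation options concrete, here is a hedged sketch of the two functional forms, inferred from the coefficient descriptions above (a–d for the cubic polynomial; a and b for the logarithmic form). Treat these as illustrative, not the model's exact implementation.

```python
import math

def polynomial_value(x, a, b, c, d):
    # Assumed form: f(x) = a + b*x + c*x**2 + d*x**3, per the
    # coefficient roles described above.
    return a + b * x + c * x ** 2 + d * x ** 3

def logarithmic_value(x, a, b):
    # Assumed logarithmic form: f(x) = a + b*ln(x).
    return a + b * math.log(x)

print(polynomial_value(2.0, 1.0, 0.0, 0.5, 0.0))  # 3.0
```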
Seasonal Water Yield¶
-
natcap.invest.seasonal_water_yield.seasonal_water_yield.
execute
(args)¶ Seasonal Water Yield.
This function invokes the InVEST seasonal water yield model described in “Spatial attribution of baseflow generation at the parcel level for ecosystem-service valuation”, Guswa et al. (under review in “Water Resources Research”)
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['threshold_flow_accumulation'] (number) – used when classifying stream pixels from the DEM by thresholding the number of upstream cells that must flow into a cell before it’s considered part of a stream.
- args['et0_dir'] (string) – required if args[‘user_defined_local_recharge’] is False. Path to a directory that contains rasters of monthly reference evapotranspiration; units in mm.
- args['precip_dir'] (string) – required if args[‘user_defined_local_recharge’] is False. A path to a directory that contains rasters of monthly precipitation; units in mm.
- args['dem_raster_path'] (string) – a path to a digital elevation raster
- args['lulc_raster_path'] (string) – a path to a land cover raster used to classify biophysical properties of pixels.
- args['soil_group_path'] (string) –
required if args[‘user_defined_local_recharge’] is False. A path to a raster indicating SCS soil groups where integer values are mapped to soil types:
1: A 2: B 3: C 4: D
- args['aoi_path'] (string) – path to a vector that indicates the area over which the model should be run, as well as the area in which to aggregate over when calculating the output Qb.
- args['biophysical_table_path'] (string) – path to a CSV table that maps landcover codes paired with soil group types to curve numbers as well as Kc values. Headers must include ‘lucode’, ‘CN_A’, ‘CN_B’, ‘CN_C’, ‘CN_D’, ‘Kc_1’, ‘Kc_2’, ‘Kc_3’, ‘Kc_4’, ‘Kc_5’, ‘Kc_6’, ‘Kc_7’, ‘Kc_8’, ‘Kc_9’, ‘Kc_10’, ‘Kc_11’, ‘Kc_12’.
- args['rain_events_table_path'] (string) – Not required if args[‘user_defined_local_recharge’] is True or args[‘user_defined_climate_zones’] is True. Path to a CSV table that has headers ‘month’ (1-12) and ‘events’ (int >= 0) that indicates the number of rain events per month
- args['alpha_m'] (float or string) – required if args[‘monthly_alpha’] is false. Is the proportion of upslope annual available local recharge that is available in month m.
- args['beta_i'] (float or string) – is the fraction of the upgradient subsidy that is available for downgradient evapotranspiration.
- args['gamma'] (float or string) – is the fraction of pixel local recharge that is available to downgradient pixels.
- args['user_defined_local_recharge'] (boolean) – if True, indicates user will provide pre-defined local recharge raster layer
- args['l_path'] (string) – required if args[‘user_defined_local_recharge’] is True. If provided pixels indicate the amount of local recharge; units in mm.
- args['user_defined_climate_zones'] (boolean) – if True, user provides a climate zone rain events table and a climate zone raster map in lieu of a global rain events table.
- args['climate_zone_table_path'] (string) – required if args[‘user_defined_climate_zones’] is True. Contains monthly precipitation events per climate zone. Fields must be: “cz_id”, “jan”, “feb”, “mar”, “apr”, “may”, “jun”, “jul”, “aug”, “sep”, “oct”, “nov”, “dec”.
- args['climate_zone_raster_path'] (string) – required if args[‘user_defined_climate_zones’] is True, pixel values correspond to the “cz_id” values defined in args[‘climate_zone_table_path’]
- args['monthly_alpha'] (boolean) – if True, use the monthly alpha values in args[‘monthly_alpha_path’] instead of the single args[‘alpha_m’] value.
- args['monthly_alpha_path'] (string) – path to a table of monthly alpha values; required if args[‘monthly_alpha’] is True.
Returns: None
Sediment Delivery Ratio¶
- natcap.invest.sdr.execute(args)¶
Sediment Delivery Ratio.
This function calculates the sediment export and retention of a landscape using the sediment delivery ratio model described in the InVEST user’s guide.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output file names
- args['dem_path'] (string) – path to a digital elevation raster
- args['erosivity_path'] (string) – path to rainfall erosivity index raster
- args['erodibility_path'] (string) – a path to soil erodibility raster
- args['lulc_path'] (string) – path to land use/land cover raster
- args['watersheds_path'] (string) – path to vector of the watersheds
- args['biophysical_table_path'] (string) – path to a CSV file with biophysical information for each land use class; must contain the fields ‘usle_c’ and ‘usle_p’.
- args['threshold_flow_accumulation'] (number) – number of upstream pixels on the DEM required before a pixel is classified as a stream.
- args['k_param'] (number) – k calibration parameter
- args['sdr_max'] (number) – maximum value of the SDR
- args['ic_0_param'] (number) – ic_0 calibration parameter
- args['drainage_path'] (string) – (optional) path to drainage raster that is used to add additional drainage areas to the internally calculated stream layer
Returns: None.
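The parameters above can be assembled into an args dict before invoking the model. A minimal sketch follows; every path and numeric value is hypothetical, and the final call is commented out because it requires natcap.invest and real input data:

```python
# Hypothetical SDR args dict; paths and calibration values are placeholders.
args = {
    'workspace_dir': 'sdr_workspace',                   # outputs written here
    'results_suffix': 'scenario_a',                     # optional
    'dem_path': 'data/dem.tif',
    'erosivity_path': 'data/erosivity.tif',
    'erodibility_path': 'data/erodibility.tif',
    'lulc_path': 'data/lulc.tif',
    'watersheds_path': 'data/watersheds.shp',
    'biophysical_table_path': 'data/biophysical.csv',   # needs 'usle_c', 'usle_p'
    'threshold_flow_accumulation': 1000,
    'k_param': 2.0,
    'sdr_max': 0.8,
    'ic_0_param': 0.5,
}

# Running the model (requires the natcap.invest package):
# import natcap.invest.sdr
# natcap.invest.sdr.execute(args)
```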
Wave Energy¶
- natcap.invest.wave_energy.wave_energy.execute(args)¶
Wave Energy.
Executes both the biophysical and valuation parts of the wave energy model (WEM). Files will be written on disk to the intermediate and output directories. The outputs computed for biophysical and valuation include: wave energy capacity raster, wave power raster, net present value raster, percentile rasters for the previous three, and a point shapefile of the wave points with attributes.
Parameters: - workspace_dir (string) – Where the intermediate and output folder/files will be saved. (required)
- wave_base_data_uri (string) – Directory location of wave base data including WW3 data and analysis area shapefile. (required)
- analysis_area_uri (string) – A string identifying the analysis area of interest. Used to determine wave data shapefile, wave data text file, and analysis area boundary shape. (required)
- aoi_uri (string) – A polygon shapefile outlining a more detailed area within the analysis area. This shapefile should be projected with linear units being in meters. (required to run Valuation model)
- machine_perf_uri (string) – The path of a CSV file that holds the machine performance table. (required)
- machine_param_uri (string) – The path of a CSV file that holds the machine parameter table. (required)
- dem_uri (string) – The path of the Global Digital Elevation Model (DEM). (required)
- suffix (string) – A python string of characters to append to each output filename (optional)
- valuation_container (boolean) – Indicates whether the model includes valuation
- land_gridPts_uri (string) – A CSV file path containing the Landing and Power Grid Connection Points table. (required for Valuation)
- machine_econ_uri (string) – A CSV file path for the machine economic parameters table. (required for Valuation)
- number_of_machines (int) – An integer specifying the number of machines for a wave farm site. (required for Valuation)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wave_base_data_uri': 'path/to/base_data_dir',
    'analysis_area_uri': 'West Coast of North America and Hawaii',
    'aoi_uri': 'path/to/shapefile',
    'machine_perf_uri': 'path/to/csv',
    'machine_param_uri': 'path/to/csv',
    'dem_uri': 'path/to/raster',
    'suffix': '_results',
    'valuation_container': True,
    'land_gridPts_uri': 'path/to/csv',
    'machine_econ_uri': 'path/to/csv',
    'number_of_machines': 28,
}
Wind Energy¶
- natcap.invest.wind_energy.wind_energy.execute(args)¶
Wind Energy.
This module handles the execution of the wind energy model given the following dictionary:
Parameters: - workspace_dir (string) – a python string which is the uri path to where the outputs will be saved (required)
- wind_data_uri (string) – path to a CSV file with the following header: [‘LONG’,’LATI’,’LAM’, ‘K’, ‘REF’]. Each following row is a location with at least the Longitude, Latitude, Scale (‘LAM’), Shape (‘K’), and reference height (‘REF’) at which the data was collected (required)
- aoi_uri (string) – a uri to an OGR datasource that is of type polygon and projected in linear units of meters. The polygon specifies the area of interest for the wind data points. If limiting the wind farm bins by distance, then the aoi should also cover a portion of the land polygon that is of interest (optional for biophysical and no distance masking, required for biophysical and distance masking, required for valuation)
- bathymetry_uri (string) – a uri to a GDAL dataset that has the depth values of the area of interest (required)
- land_polygon_uri (string) – a uri to an OGR datasource of type polygon that provides a coastline for determining distances from wind farm bins. Enabled by the AOI; required when masking by distance or running valuation
- global_wind_parameters_uri (string) – a uri to a CSV file that holds the global parameter values for both the biophysical and valuation modules (required)
- suffix (string) – a String to append to the end of the output files (optional)
- turbine_parameters_uri (string) – a uri to a CSV file that holds the turbines biophysical parameters as well as valuation parameters (required)
- number_of_turbines (int) – an integer value for the number of machines for the wind farm (required for valuation)
- min_depth (float) – a float value for the minimum depth for offshore wind farm installation (meters) (required)
- max_depth (float) – a float value for the maximum depth for offshore wind farm installation (meters) (required)
- min_distance (float) – a float value for the minimum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- max_distance (float) – a float value for the maximum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- valuation_container (boolean) – Indicates whether model includes valuation
- foundation_cost (float) – a float representing how much the foundation will cost for the specific type of turbine (required for valuation)
- discount_rate (float) – a float value for the discount rate (required for valuation)
- grid_points_uri (string) – a uri to a CSV file that specifies the landing and grid point locations (optional)
- avg_grid_distance (float) – a float for the average distance in kilometers from a grid connection point to a land connection point (required for valuation if grid connection points are not provided)
- price_table (boolean) – a bool indicating whether to use the wind energy price table or not (required)
- wind_schedule (string) – a URI to a CSV file for the yearly prices of wind energy for the lifespan of the farm (required if ‘price_table’ is true)
- wind_price (float) – a float for the wind energy price at year 0 (required if price_table is false)
- rate_change (float) – a float as a percent for the annual rate of change in the price of wind energy. (required if price_table is false)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wind_data_uri': 'path/to/file',
    'aoi_uri': 'path/to/shapefile',
    'bathymetry_uri': 'path/to/raster',
    'land_polygon_uri': 'path/to/shapefile',
    'global_wind_parameters_uri': 'path/to/csv',
    'suffix': '_results',
    'turbine_parameters_uri': 'path/to/csv',
    'number_of_turbines': 10,
    'min_depth': 3,
    'max_depth': 60,
    'min_distance': 0,
    'max_distance': 200000,
    'valuation_container': True,
    'foundation_cost': 3.4,
    'discount_rate': 7.0,
    'grid_points_uri': 'path/to/csv',
    'avg_grid_distance': 4,
    'price_table': True,
    'wind_schedule': 'path/to/csv',
    'wind_price': 0.4,
    'rate_change': 0.0,
}
Returns: None
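The min_depth/max_depth and min_distance/max_distance inputs bound where offshore turbines may be sited. A hedged sketch of that filtering logic, over plain dicts rather than the model's rasters (the sign convention of depth positive-down and the site representation are assumptions, not the model's internals):

```python
# Hedged sketch of depth/distance feasibility filtering, illustrating how
# min_depth, max_depth, min_distance and max_distance constrain siting.
def feasible_sites(sites, min_depth, max_depth, min_distance, max_distance):
    """Keep sites whose depth (m, positive down) and distance to shore (m)
    fall inside the configured bounds."""
    return [
        s for s in sites
        if min_depth <= s['depth'] <= max_depth
        and min_distance <= s['shore_distance'] <= max_distance
    ]

sites = [
    {'depth': 30, 'shore_distance': 5000},    # feasible
    {'depth': 80, 'shore_distance': 5000},    # too deep for max_depth=60
    {'depth': 30, 'shore_distance': 300000},  # beyond max_distance=200000
]
print(feasible_sites(sites, 3, 60, 0, 200000))
```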
API Reference¶
Note
For the function documentation of available models, see InVEST Model Entry Points.
natcap¶
natcap package¶
Subpackages¶
InVEST Carbon biophysical module at the “uri” level
- exception natcap.invest.carbon.carbon_biophysical.MapCarbonPoolError¶
Bases: exceptions.Exception
A custom error for catching lulc codes from a raster that do not match the carbon pools data file.
- natcap.invest.carbon.carbon_biophysical.execute(args)¶
- natcap.invest.carbon.carbon_biophysical.execute_30(**args)¶
This function invokes the carbon model given file URIs as inputs. It handles the files and opens/creates the appropriate objects to pass to the core carbon biophysical processing function. It may write log, warning, or error messages to stdout.
args - a python dictionary with the following possible entries:
- args[‘workspace_dir’] - a uri to the directory that will write output and other temporary files during calculation. (required)
- args[‘suffix’] - a string to append to any output file name (optional)
- args[‘lulc_cur_uri’] - a uri to a GDAL raster dataset (required)
- args[‘carbon_pools_uri’] - a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- args[‘carbon_pools_uncertain_uri’] - as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- args[‘do_uncertainty’] - a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- args[‘confidence_threshold’] - a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- args[‘lulc_fut_uri’] - a uri to a GDAL raster dataset (optional if calculating sequestration)
- args[‘lulc_cur_year’] - an integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’ or ‘hwp_fut_shape_uri’ key)
- args[‘lulc_fut_year’] - an integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- args[‘lulc_redd_uri’] - a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional)
- args[‘hwp_cur_shape_uri’] - current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- args[‘hwp_fut_shape_uri’] - future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
Returns a dict with the names of all output files.
Integrated carbon model with biophysical and valuation components.
- natcap.invest.carbon.carbon_combined.execute(args)¶
Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
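The valuation inputs V, r, c, yr_cur and yr_fut combine into a discounted value of sequestration. The sketch below restates the discounting formula from the InVEST user's guide as this editor recalls it; verify against the guide before relying on it, as it is not taken from the library's source:

```python
# Hedged sketch of carbon sequestration valuation: sequestration is spread
# evenly over the period and each year's share is discounted by the market
# rate r and the carbon price rate of change c (both percentages).
def sequestration_value(sequest_tons, V, r, c, yr_cur, yr_fut):
    """Present value of carbon sequestered evenly over [yr_cur, yr_fut)."""
    T = yr_fut - yr_cur
    annual = sequest_tons / T
    discount = sum(
        1.0 / (((1 + r / 100.0) ** t) * ((1 + c / 100.0) ** t))
        for t in range(T)
    )
    return V * annual * discount

# With no discounting or price change, the value reduces to V * tons:
print(round(sequestration_value(100.0, 43.0, 0, 0, 2014, 2025), 6))  # 4300.0
```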
- natcap.invest.carbon.carbon_combined.execute_30(**args)¶
Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
Useful functions for the carbon biophysical and valuation models.
- natcap.invest.carbon.carbon_utils.make_suffix(model_args)¶
Return the suffix from the args (prepending ‘_’ if necessary).
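The docstring implies a small normalization of the suffix argument. A standalone sketch of that documented behavior (this mimics the description, it is not natcap's actual source):

```python
# Sketch of the suffix convention: return the suffix from the args dict,
# prepending '_' unless it already starts with one or is empty.
def make_suffix(model_args):
    suffix = model_args.get('suffix', '')
    if suffix and not suffix.startswith('_'):
        suffix = '_' + suffix
    return suffix

print(make_suffix({'suffix': 'results'}))   # '_results'
print(make_suffix({'suffix': '_results'}))  # '_results'
print(make_suffix({}))                      # ''
```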
- natcap.invest.carbon.carbon_utils.setup_dirs(workspace_dir, *dirnames)¶
Create the requested directories, and return the pathnames.
- natcap.invest.carbon.carbon_utils.sum_pixel_values_from_uri(uri)¶
Return the sum of the values of all pixels in the given file.
InVEST valuation interface module. Informally known as the URI level.
- natcap.invest.carbon.carbon_valuation.execute(args)¶
- natcap.invest.carbon.carbon_valuation.execute_30(**args)¶
This function calculates carbon sequestration valuation.
args - a python dictionary with the following entries:
- args[‘workspace_dir’] - a uri to the directory that will write output and other temporary files during calculation. (required)
- args[‘suffix’] - a string to append to any output file name (optional)
- args[‘sequest_uri’] - a uri to a GDAL raster dataset describing the amount of carbon sequestered (baseline scenario, if this is REDD)
- args[‘sequest_redd_uri’] (optional) - uri to the raster dataset for sequestration under the REDD policy scenario
- args[‘conf_uri’] (optional) - uri to the raster dataset indicating confident pixels for sequestration or emission
- args[‘conf_redd_uri’] (optional) - as above, but for the REDD scenario
- args[‘carbon_price_units’] - a string indicating whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- args[‘V’] - value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- args[‘r’] - the market discount rate in terms of a percentage
- args[‘c’] - the annual rate of change in the price of carbon
- args[‘yr_cur’] - the year at which the sequestration measurement started
- args[‘yr_fut’] - the year at which the sequestration measurement ended
Returns a dict with output file URIs.
Coastal Blue Carbon Model.
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.execute(args)¶
Coastal Blue Carbon.
Parameters: - workspace_dir (str) – location into which all intermediate and output files should be placed.
- results_suffix (str) – a string to append to output filenames.
- lulc_lookup_uri (str) – filepath to a CSV table used to convert the lulc code to a name. Also used to determine if a given lulc type is a coastal blue carbon habitat.
- lulc_transition_matrix_uri (str) – generated by the preprocessor. This file must be edited before it can be used by the main model. The left-most column represents the source lulc class, and the top row represents the destination lulc class.
- carbon_pool_initial_uri (str) – the provided CSV table contains information related to the initial conditions of the carbon stock within each of the three pools of a habitat. Biomass includes carbon stored above and below ground. All non-coastal blue carbon habitat lulc classes are assumed to contain no carbon. The values for ‘biomass’, ‘soil’, and ‘litter’ should be given in terms of Megatonnes CO2 e/ ha.
- carbon_pool_transient_uri (str) – the provided CSV table contains information related to the transition of carbon into and out of coastal blue carbon pools. All non-coastal blue carbon habitat lulc classes are assumed to neither sequester nor emit carbon as a result of change. The ‘yearly_accumulation’ values should be given in terms of Megatonnes of CO2 e/ha-yr. The ‘half-life’ values must be given in terms of years. The ‘disturbance’ values must be given as a decimal (e.g. 0.5 for 50%) of stock disturbed given a transition occurs away from a lulc-class.
- lulc_baseline_map_uri (str) – a GDAL-supported raster representing the baseline landscape/seascape.
- lulc_transition_maps_list (list) – a list of GDAL-supported rasters representing the landscape/seascape at particular points in time. Provided in chronological order.
- lulc_transition_years_list (list) – a list of years that respectively correspond to transition years of the rasters. Provided in chronological order.
- analysis_year (int) – optional. Indicates how many timesteps to run the transient analysis beyond the last transition year. Must come chronologically after the last transition year if provided. Otherwise, the final timestep of the model will be set to the last transition year.
- do_economic_analysis (bool) – boolean value indicating whether model should run economic analysis.
- do_price_table (bool) – boolean value indicating whether a price table is included in the arguments and to be used or a price and interest rate is provided and to be used instead.
- price (float) – the price per Megatonne CO2 e at the base year.
- interest_rate (float) – the interest rate on the price per Megatonne CO2e, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
- price_table_uri (str) – if args[‘do_price_table’] is set to True the provided CSV table is used in place of the initial price and interest rate inputs. The table contains the price per Megatonne CO2e sequestered for a given year, for all years from the original snapshot to the analysis year, if provided.
- discount_rate (float) – the discount rate on future valuations of sequestered carbon, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
Example Args:
args = {
    'workspace_dir': 'path/to/workspace/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lulc_lookup_uri',
    'lulc_transition_matrix_uri': 'path/to/lulc_transition_uri',
    'carbon_pool_initial_uri': 'path/to/carbon_pool_initial_uri',
    'carbon_pool_transient_uri': 'path/to/carbon_pool_transient_uri',
    'lulc_baseline_map_uri': 'path/to/baseline_map.tif',
    'lulc_transition_maps_list': [raster1_uri, raster2_uri, ...],
    'lulc_transition_years_list': [2000, 2005, ...],
    'analysis_year': 2100,
    'do_economic_analysis': '<boolean>',
    'do_price_table': '<boolean>',
    'price': '<float>',
    'interest_rate': '<float>',
    'price_table_uri': 'path/to/price_table',
    'discount_rate': '<float>'
}
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.get_inputs(args)¶
Get Inputs.
Parameters: - workspace_dir (str) – workspace directory
- results_suffix (str) – optional suffix appended to results
- lulc_lookup_uri (str) – lulc lookup table filepath
- lulc_transition_matrix_uri (str) – lulc transition table filepath
- carbon_pool_initial_uri (str) – initial conditions table filepath
- carbon_pool_transient_uri (str) – transient conditions table filepath
- lulc_baseline_map_uri (str) – baseline map filepath
- lulc_transition_maps_list (list) – ordered list of transition map filepaths
- lulc_transition_years_list (list) – ordered list of transition years
- analysis_year (int) – optional final year to extend the analysis beyond the last transition year
- do_economic_analysis (bool) – whether to run economic component of the analysis
- do_price_table (bool) – whether to use the price table for the economic component of the analysis
- price (float) – the price of net sequestered carbon
- interest_rate (float) – the interest rate on the price of carbon
- price_table_uri (str) – price table filepath
- discount_rate (float) – the discount rate on future valuations of carbon
Returns: d – data dictionary.
Return type: dict
Example Returns:
d = {
    'do_economic_analysis': <bool>,
    'lulc_to_Sb': <dict>,
    'lulc_to_Ss': <dict>,
    'lulc_to_L': <dict>,
    'lulc_to_Yb': <dict>,
    'lulc_to_Ys': <dict>,
    'lulc_to_Hb': <dict>,
    'lulc_to_Hs': <dict>,
    'lulc_trans_to_Db': <dict>,
    'lulc_trans_to_Ds': <dict>,
    'C_r_rasters': <list>,
    'transition_years': <list>,
    'snapshot_years': <list>,
    'timesteps': <int>,
    'transitions': <list>,
    'price_t': <list>,
    'File_Registry': <dict>,
}
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.get_num_blocks(raster_uri)¶
Get the number of blocks in a raster file.
Parameters: raster_uri (str) – filepath to raster Returns: num_blocks – number of blocks in raster Return type: int
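A block count falls out of the raster and block dimensions by ceiling division along each axis. A standalone sketch of that arithmetic (the 256x256 block size is an assumption for illustration, not the library's setting):

```python
import math

# Sketch: a raster of n_cols x n_rows pixels read in block_x x block_y
# tiles contains ceil(n_cols/block_x) * ceil(n_rows/block_y) blocks.
def num_blocks(n_cols, n_rows, block_x=256, block_y=256):
    return math.ceil(n_cols / block_x) * math.ceil(n_rows / block_y)

print(num_blocks(1000, 500))  # 4 block-columns * 2 block-rows = 8
```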
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.is_transition_year(snapshot_years, transitions, timestep)¶
Check whether given timestep is a transition year.
Parameters: - snapshot_years (list) – list of snapshot years.
- transitions (int) – number of transitions.
- timestep (int) – current timestep.
Returns: is_transition_year – whether the year corresponding to the timestep is a transition year.
Return type: bool
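A timestep plausibly maps to the year snapshot_years[0] + timestep, making it a transition year when that year appears among the transition snapshots. The indexing convention below is an assumption about the documented behavior, not the library's actual source:

```python
# Hedged sketch of the transition-year check: convert the timestep to a
# calendar year relative to the first snapshot, then test membership in
# the first `transitions` snapshots after the baseline.
def is_transition_year(snapshot_years, transitions, timestep):
    year = snapshot_years[0] + timestep
    return year in snapshot_years[1:transitions + 1]

print(is_transition_year([2000, 2005, 2010], 2, 5))  # True  (year 2005)
print(is_transition_year([2000, 2005, 2010], 2, 7))  # False (year 2007)
```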
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.read_from_raster(input_raster, offset_block)¶
Read numpy array from raster block.
Parameters: - input_raster (str) – filepath to input raster
- offset_block (dict) – dictionary of offset information
Returns: array – a blocked array of the input raster
Return type: np.array
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.reclass(array, d, out_dtype=None, nodata_mask=None)¶
Reclassify values in array.
If a nodata value is not provided, the function will return an array with NaN values in its place to mark cells that could not be reclassed.
Parameters: - array (np.array) – input data
- d (dict) – reclassification map
- out_dtype (np.dtype) – a numpy datatype for the reclass_array
- nodata_mask (number) – for floats, a nodata value that is set to np.nan if provided to make reclass_array nodata values consistent
Returns: reclass_array – reclassified array
Return type: np.array
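The reclassification described above can be sketched standalone: map each cell through the dict, with unmapped cells coming back as NaN, as the docstring describes for the missing-nodata case. This mirrors the documented behavior and omits the nodata_mask handling for brevity; it is not natcap's actual source:

```python
import numpy as np

# Sketch of reclass: look up each cell value in the map d; cells with no
# entry are marked NaN, matching the documented fallback when no nodata
# value is provided.
def reclass(array, d, out_dtype=None):
    out = np.full(array.shape, np.nan, dtype=out_dtype or np.float64)
    for idx, val in np.ndenumerate(array):
        if val in d:
            out[idx] = d[val]
    return out

a = np.array([[1, 2], [3, 9]])          # 9 has no entry in the map
result = reclass(a, {1: 10.0, 2: 20.0, 3: 30.0})
print(result)  # NaN marks the unmapped cell
```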
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.reclass_transition(a_prev, a_next, trans_dict, out_dtype=None, nodata_mask=None)¶
Reclass arrays based on element-wise combinations between two arrays.
Parameters: - a_prev (np.array) – previous lulc array
- a_next (np.array) – next lulc array
- trans_dict (dict) – reclassification map
- out_dtype (np.dtype) – a numpy datatype for the reclass_array
- nodata_mask (number) – for floats, a nodata value that is set to np.nan if provided to make reclass_array nodata values consistent
Returns: reclass_array – reclassified array
Return type: np.array
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.s_to_timestep(snapshot_years, snapshot_idx)¶
Convert snapshot index position to timestep.
Parameters: - snapshot_years (list) – list of snapshot years.
- snapshot_idx (int) – index of snapshot
Returns: snapshot_timestep – timestep of the snapshot
Return type: int
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.timestep_to_transition_idx(snapshot_years, transitions, timestep)¶
Convert timestep to transition index.
Parameters: - snapshot_years (list) – a list of years corresponding to the provided rasters
- transitions (int) – the number of transitions in the scenario
- timestep (int) – the current timestep
Returns: transition_idx – the current transition
Return type: int
- natcap.invest.coastal_blue_carbon.coastal_blue_carbon.write_to_raster(output_raster, array, xoff, yoff)¶
Write numpy array to raster block.
Parameters: - output_raster (str) – filepath to output raster
- array (np.array) – block to save to raster
- xoff (int) – offset index for x-dimension
- yoff (int) – offset index for y-dimension
Coastal Blue Carbon Preprocessor.
- natcap.invest.coastal_blue_carbon.preprocessor.execute(args)¶
Coastal Blue Carbon Preprocessor.
The preprocessor accepts a list of rasters and checks for cell-transitions across the rasters. The preprocessor outputs a CSV file representing a matrix of land cover transitions, each cell prefilled with a string indicating whether carbon accumulates or is disturbed as a result of the transition, if a transition occurs.
Parameters: - workspace_dir (string) – directory path to workspace
- results_suffix (string) – append to outputs directory name if provided
- lulc_lookup_uri (string) – filepath of lulc lookup table
- lulc_snapshot_list (list) – a list of filepaths to lulc rasters
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lookup.csv',
    'lulc_snapshot_list': ['path/to/raster1', 'path/to/raster2', ...]
}
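The cell-transition scan the preprocessor performs can be sketched over plain lists (real inputs are rasters, and the habitat flags come from the lulc lookup table; the label strings and the is_habitat mapping here are illustrative assumptions):

```python
# Sketch of the preprocessor's pairwise scan of two consecutive "rasters"
# (flat lists here): each observed (from, to) class pair is labeled as
# accumulation, disturbance, or no carbon change.
def transition_labels(prev_lulc, next_lulc, is_habitat):
    """Prefill a transition matrix cell for each observed class pair."""
    labels = {}
    for a, b in zip(prev_lulc, next_lulc):
        if is_habitat[b]:
            labels[(a, b)] = 'accum'    # transition into (or within) habitat
        elif is_habitat[a]:
            labels[(a, b)] = 'disturb'  # habitat lost
        else:
            labels[(a, b)] = 'NCC'      # no carbon change
    return labels

is_habitat = {1: True, 2: False}        # e.g. 1 = marsh, 2 = developed
labels = transition_labels([1, 1, 2], [1, 2, 2], is_habitat)
print(labels)
```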
- natcap.invest.coastal_blue_carbon.preprocessor.read_from_raster(input_raster, offset_block)¶
Read block from raster.
Parameters: - input_raster (str) – filepath to raster.
- offset_block (dict) – where the block is indexed.
Returns: a – the raster block.
Return type: np.array
Coastal Blue Carbon package.
- natcap.invest.coastal_vulnerability.coastal_vulnerability.execute(args)¶
Coastal Vulnerability.
Parameters: - workspace_dir (string) – The path to the workspace directory on disk (required)
- aoi_uri (string) – Path to an OGR vector on disk representing the area of interest. (required)
- landmass_uri (string) – Path to an OGR vector on disk representing the global landmass. (required)
- bathymetry_uri (string) – Path to a GDAL raster on disk representing the bathymetry. Must overlap with the Area of Interest if provided. (optional)
- bathymetry_constant (int) – An int between 1 and 5 (inclusive). (optional)
- relief_uri (string) – Path to a GDAL raster on disk representing the elevation within the land polygon provided. (optional)
- relief_constant (int) – An int between 1 and 5 (inclusive). (optional)
- elevation_averaging_radius (int) – a positive int. The radius around which to compute the average elevation for relief. Must be in meters. (required)
- mean_sea_level_datum (int) – a positive int. This input is the elevation of Mean Sea Level (MSL) datum relative to the datum of the bathymetry layer that they provide. The model transforms all depths to MSL datum by subtracting the value provided by the user from the bathymetry. This input can be used to run the model for a future sea-level rise scenario. Must be in meters. (required)
- cell_size (int) – Cell size in meters. The higher the value, the faster the computation, but the coarser the output rasters produced by the model. (required)
- depth_threshold (int) – Depth in meters (integer) cutoff to determine if fetch rays project over deep areas. (optional)
- exposure_proportion (float) – Minimum proportion of rays that project over exposed and/or deep areas needed to classify a shore segment as exposed. (required)
- geomorphology_uri (string) – An OGR-supported polygon vector file that has a field called “RANK” with values between 1 and 5 in the attribute table. (optional)
- geomorphology_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- habitats_directory_uri (string) – Directory containing OGR-supported polygon vectors associated with natural habitats. The name of these shapefiles should be suffixed with the ID that is specified in the natural habitats CSV file provided along with the habitats (optional)
- habitats_csv_uri (string) – A CSV file listing the attributes for each habitat. For more information, see the ‘Habitat Data Layer’ section in the model’s documentation. (required if args['habitat_directory_uri'] is provided)
- habitat_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- area_computed (string) – Determines whether the output data covers the whole coast or sheltered segments only. Either 'sheltered' or 'both'. (required)
- suffix (string) – A string that will be added to the end of the output file. (optional)
- climatic_forcing_uri (string) – An OGR-supported vector containing both wind and wave information across the region of interest. (optional)
- climatic_forcing_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- continental_shelf_uri (string) – An OGR-supported polygon vector delineating edges of the continental shelf. Default is global continental shelf shapefile. If omitted, the user can specify depth contour. See entry below. (optional)
- depth_contour (int) – Used to delineate shallow and deep areas. Continental limit is at about 150 meters. (optional)
- sea_level_rise_uri (string) – An OGR-supported point or polygon vector file whose features have a “Trend” field in the attribute table. (optional)
- sea_level_rise_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- structures_uri (string) – An OGR-supported vector file containing rigid structures, used to identify the portions of the coast that are armored. (optional)
- structures_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- population_uri (string) – A GDAL-supported raster file representing the population. (required)
- urban_center_threshold (int) – Minimum population required to consider shore segment a population center. (required)
- additional_layer_uri (string) – An OGR-supported vector file representing sea level rise; it will be used in the computation of coastal vulnerability and coastal vulnerability without habitat. (optional)
- additional_layer_constant (int) – Integer value between 1 and 5. If the file associated with this layer is omitted, this constant rank value replaces all shore points for the layer in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- rays_per_sector (int) – Number of rays used to subsample the fetch distance within each of the 16 sectors. (required)
- max_fetch (int) – Maximum fetch distance computed by the model (>=60,000m). (optional)
- spread_radius (int) – Integer multiple of ‘cell size’. The coast from geomorphology layer could be of a better resolution than the global landmass, so the shores do not necessarily overlap. To make them coincide, the shore from the geomorphology layer is widened by 1 or more pixels. The value should be a multiple of ‘cell size’ that indicates how many pixels the coast from the geomorphology layer is widened. The widening happens on each side of the coast (n pixels landward, and n pixels seaward). (required)
- population_radius (int) – Radius length in meters used to count the number of people living close to the coast. (optional)
Note
If neither args['bathymetry_uri'] nor args['bathymetry_constant'] is provided, bathymetry is ignored altogether.
If neither args['relief_uri'] nor args['relief_constant'] is provided, relief is ignored altogether.
If neither args['geomorphology_uri'] nor args['geomorphology_constant'] is provided, geomorphology is ignored altogether.
If neither args['climatic_forcing_uri'] nor args['climatic_forcing_constant'] is provided, climatic forcing is ignored altogether.
If neither args['sea_level_rise_uri'] nor args['sea_level_rise_constant'] is provided, sea level rise is ignored altogether.
If neither args['structures_uri'] nor args['structures_constant'] is provided, structures are ignored altogether.
If neither args['additional_layer_uri'] nor args['additional_layer_constant'] is provided, the additional layer option is ignored altogether.
Example args:
args = {
    u'additional_layer_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'aoi_uri': u'CoastalProtection/Input/AOI_BarkClay.shp',
    u'area_computed': u'both',
    u'bathymetry_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'cell_size': 1000,
    u'climatic_forcing_uri': u'CoastalProtection/Input/WaveWatchIII.shp',
    u'continental_shelf_uri': u'CoastalProtection/Input/continentalShelf.shp',
    u'depth_contour': 150,
    u'depth_threshold': 0,
    u'elevation_averaging_radius': 5000,
    u'exposure_proportion': 0.8,
    u'geomorphology_uri': u'CoastalProtection/Input/Geomorphology_BarkClay.shp',
    u'habitats_csv_uri': u'CoastalProtection/Input/NaturalHabitat_WCVI.csv',
    u'habitats_directory_uri': u'CoastalProtection/Input/NaturalHabitat',
    u'landmass_uri': u'Base_Data/Marine/Land/global_polygon.shp',
    u'max_fetch': 12000,
    u'mean_sea_level_datum': 0,
    u'population_radius': 1000,
    u'population_uri': u'Base_Data/Marine/Population/global_pop/w001001.adf',
    u'rays_per_sector': 1,
    u'relief_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'sea_level_rise_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'spread_radius': 250,
    u'structures_uri': u'CoastalProtection/Input/Structures_BarkClay.shp',
    u'urban_center_threshold': 5000,
    u'workspace_dir': u'coastal_vulnerability_workspace'
}
Returns: None
Coastal vulnerability model core functions
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_dataset_ranks
(input_uri, output_uri)¶ Adjust the rank of a dataset’s first band using ‘adjust_layer_ranks’.
- Inputs:
- input_uri: dataset uri where values are 1, 2, 3, 4, or 5
- output_uri: new dataset with values adjusted by ‘adjust_layer_ranks’.
Returns output_uri.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_layer_ranks
(layer)¶ Adjust the ranks of a layer in case there are fewer than 5 distinct values.
- Inputs:
- layer: a float or int numpy array as extracted by ReadAsArray
that encodes the layer ranks (valued 1, 2, 3, 4, or 5).
- Output:
adjusted_layer: a numpy array of the same dimensions as the input array, with rank values reassigned as follows:
- non-shore segments have a (no-data) value of zero (0)
- 1 value (all segments have the same value): all are set to a rank of 3
- 2 different values: lower values are set to 3, the rest to 4
- 3 values: 2, 3, and 4 by ascending level of vulnerability
- 4 values: 2, 3, 4, and 5 by ascending level of vulnerability
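The reassignment rules above can be sketched in plain Python. Note that adjust_ranks is a hypothetical helper working on a flat list of per-segment ranks; the real function operates on a 2D numpy array read via ReadAsArray:

```python
def adjust_ranks(values):
    """Reassign rank values per the rules above (sketch).

    `values` is a list of per-segment ranks (1-5); 0 marks non-shore
    (nodata) segments and is left unchanged.
    """
    distinct = sorted(set(v for v in values if v > 0))
    # Target ranks chosen by how many distinct values are present:
    #   1 value  -> all 3;  2 values -> 3, 4
    #   3 values -> 2, 3, 4;  4 values -> 2, 3, 4, 5
    targets = {1: [3], 2: [3, 4], 3: [2, 3, 4], 4: [2, 3, 4, 5]}
    if not distinct or len(distinct) == 5:
        return list(values)  # no data, or all five ranks present: keep as-is
    mapping = dict(zip(distinct, targets[len(distinct)]))
    return [mapping[v] if v > 0 else 0 for v in values]
```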
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_raster_to_aoi
(in_dataset_uri, aoi_datasource_uri, cell_size, out_dataset_uri)¶ Adjust in_dataset_uri to match aoi_datasource_uri’s extent, cell size, and projection.
- Inputs:
- in_dataset_uri: the uri of the dataset to adjust
- aoi_datasource_uri: uri to the aoi we want to use to adjust in_dataset_uri
- cell_size: output cell size in meters
- out_dataset_uri: uri to the adjusted dataset
- Returns:
- out_dataset_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_shapefile_to_aoi
(data_uri, aoi_uri, output_uri, empty_raster_allowed=False)¶ Adjust the shapefile’s data to the aoi, i.e. reproject & clip data points.
- Inputs:
- data_uri: uri to the shapefile to adjust
- aoi_uri: uri to a single polygon shapefile
- base_path: directory where the intermediate files will be saved
- output_uri: dataset that is clipped and/or reprojected to the aoi if necessary.
- empty_raster_allowed: boolean flag that, if False (default), causes the function to break if output_uri is empty; otherwise an empty raster is returned.
Returns: output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_csvs
(csv_list, out_uri)¶ Concatenate 3-row csv files created with tif2csv
- Inputs:
- csv_list: list of csv_uri strings
- Outputs:
- uri_output: the output uri of the concatenated csv
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_tifs_from_directory
(path='.', mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_tifs_from_list
(uri_list, path, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
assign_sheltered_segments
(exposure_raster_uri, raster_uri, output_raster_uri)¶ Propagate values from ‘sources’ across a surface defined by ‘mask’ in a breadth-first-search manner.
- Inputs:
- exposure_raster_uri: URI to the GDAL dataset that we want to process
- mask: a numpy array where 1s define the area across which we want to propagate the values defined in ‘sources’.
- sources: a tuple as returned by numpy.where(...) of coordinates of where to pick values in ‘raster_uri’ (a source). They are the values we want to propagate across the area defined by ‘mask’.
- output_raster_uri: URI to the GDAL dataset where we want to save the array once the values from the sources are propagated.
Returns: nothing.
The algorithm tries to spread the values pointed to by ‘sources’ to each of the 8 immediately adjacent pixels where mask == 1. Each source point is processed in sequence to ensure that values are propagated from the closest source point. If a connected component of 1s in ‘mask’ does not contain any source, its value remains unchanged in the output raster.
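The breadth-first propagation described above can be sketched in plain Python. propagate is a hypothetical helper on in-memory arrays; the real function reads and writes GDAL datasets:

```python
from collections import deque

def propagate(mask, sources):
    """Spread each source value across the connected 1s in `mask`.

    mask: 2D list of 0/1; sources: dict {(row, col): value}.
    All sources are seeded first, so each pixel receives the value of
    a nearby source; cells in components with no source stay None.
    """
    rows, cols = len(mask), len(mask[0])
    out = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c), value in sources.items():
        out[r][c] = value
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        # visit the 8 immediately adjacent pixels
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and mask[rr][cc] == 1 and out[rr][cc] is None):
                    out[rr][cc] = out[r][c]
                    queue.append((rr, cc))
    return out
```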
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
cast_ray_fast
(direction, d_max)¶ March from the origin towards a direction until either land or a maximum distance is met.
- Inputs:
- origin: algorithm’s starting point – has to be at sea
- direction: marching direction
- d_max: maximum distance to traverse
- raster: land mass raster
Returns the distance to the origin.
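The ray-marching can be sketched as follows. cast_ray is a hypothetical helper stepping over an in-memory land array; the real function operates on raster data and is optimized:

```python
def cast_ray(raster, origin, direction, d_max, cell_size=1.0):
    """March from `origin` along `direction` (a (d_row, d_col) unit step
    in grid space) until a land pixel (value 1) is hit or `d_max`
    meters are traversed. Returns the distance covered (sketch).
    """
    row, col = origin
    dr, dc = direction
    steps = int(d_max / cell_size)
    for i in range(1, steps + 1):
        r = int(round(row + i * dr))
        c = int(round(col + i * dc))
        if not (0 <= r < len(raster) and 0 <= c < len(raster[0])):
            break  # left the raster: treat as open water out to d_max
        if raster[r][c] == 1:  # hit land
            return i * cell_size
    return d_max
```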
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
clip_datasource
(aoi_ds, orig_ds, output_uri)¶ Clip an OGR Datasource of geometry type polygon by another OGR Datasource of geometry type polygon. The aoi_ds should be a shapefile with a layer that has only one polygon feature.
- aoi_ds: an OGR Datasource that is the clipping bounding box
- orig_ds: an OGR Datasource to clip
- output_uri: output uri path for the clipped datasource
Returns: a clipped OGR Datasource
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
combined_rank
(R_k)¶ Compute the combined habitats ranks as described in equation (3)
- Inputs:
- R_k: the list of ranks
- Output:
- R_hab as described in the user guide’s equation 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_additional_layer
(args)¶ Compute the additional layer in the same way as the sea level rise index.
- Inputs:
- args[‘additional_layer_uri’]: uri to the additional layer data.
- args[‘aoi_uri’]: uri to a datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer of the cell size in meters
- args[‘intermediate_directory’]: uri to the intermediate file directory
- Output:
- Return a dictionary of all the intermediate file URIs.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_FIELD_NAME.tif: raw value along the shore.
- FIELD_NAME.tif: index along the shore. If all the shore has the same value, assign the moderate index value 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_exposure
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_exposure_no_habitats
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_population
(args)¶ Compute population living along the shore within a given radius.
- Inputs:
- args[‘intermediate_directory’]: uri to a directory where intermediate files are stored
- args[‘subdirectory’]: string URI of an existing subdirectory
- args[‘prefix’]: string prefix appended to every file generated
- args[‘population_uri’]: uri to the population density dataset.
- args[‘population_radius’]: used to compute the population density.
- args[‘aoi_uri’]: uri to a polygon shapefile
- args[‘cell_size’]: size of a pixel in meters
- Outputs:
- Return a uri dictionary of all the files created to generate the population density along the coastline.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_continental_shelf_distance
(args)¶ Copy the continental shelf distance data to the outputs/ directory.
- Inputs:
- args[‘shore_shelf_distance’]: uri to the continental shelf distance
- args[‘prefix’]: string prefix appended to every file generated
- Outputs:
- data_uri: a dictionary containing the uri where the data is saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_erodible_shoreline
(args)¶ Compute the erodible shoreline as described in Greg’s notes. The erodible shoreline is the shoreline segments of rank 5.
- Inputs:
- args[‘geomorphology’]: the geomorphology data.
- args[‘prefix’]: prefix to be added to the new filename.
- args[‘aoi_uri’]: URI to the area of interest shapefile
- args[‘cell_size’]: size of a cell on the raster
- Outputs:
- data_uri: a dictionary containing the uri where the data is saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_erosion_exposure
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_fetch
(land_array, rays_per_sector, d_max, cell_size, shore_points, bathymetry, bathymetry_nodata, GT, shore_raster)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_fetch_uri
(landmass_raster_uri, rays_per_sector, d_max, cell_size, shore_uri, bathymetry_uri)¶ Given a land raster, return the fetch distance from a point in given directions.
- Inputs:
- land_raster: raster where land is encoded as 1s, sea as 0s, and cells outside the area of interest as anything different from 0s or 1s.
- directions: tuple of angles (in radians) from which the fetch will be computed for each pixel.
- d_max: maximum distance in meters over which to compute the fetch
- cell_size: size of a cell in meters
- shore_uri: URI to the raster where the shoreline is encoded as 1s, the rest as 0s.
- Returns: a tuple (distances, depths) where:
- distances is a dictionary of fetch data where the key is a shore point (tuple of integer coordinates), and the value is a 1 x sectors numpy array containing fetch distances (float) from that point for each sector. The first sector (0) points eastward.
- depths is the same dictionary keyed by shore point, holding the fetch depths in meters for each sector.
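The ray layout implied above (16 equiangular sectors, sector 0 pointing eastward, each subsampled by rays_per_sector) can be sketched as follows. sector_angles is a hypothetical helper, and the exact placement of rays within a sector is an assumption:

```python
import math

def sector_angles(sectors=16, rays_per_sector=1):
    """Angles (radians) of every fetch ray: each of the `sectors`
    equiangular sectors is subsampled by `rays_per_sector` rays,
    with sector 0 centered eastward (sketch).
    """
    width = 2 * math.pi / sectors
    angles = []
    for s in range(sectors):
        center = s * width
        for k in range(rays_per_sector):
            # spread rays symmetrically about the sector center
            offset = (k - (rays_per_sector - 1) / 2.0) * (width / rays_per_sector)
            angles.append((center + offset) % (2 * math.pi))
    return angles
```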
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_geomorphology
(args)¶ Translate geomorphology RANKS to shore pixels.
Create a raster identical to the shore pixel raster that has geomorphology RANK values. The values are gathered by finding the closest geomorphology feature to the center of the pixel cell.
Parameters: - args['geomorphology_uri'] (string) – a path on disk to a shapefile of the geomorphology ranking along the coastline.
- args['shore_raster_uri'] (string) – a path on disk to the shoreline dataset (land = 1, sea = 0).
- args['intermediate_directory'] (string) – a path to the directory where intermediate files are stored.
- args['subdirectory'] (string) – a path for a directory to store the specific geomorphology intermediate steps.
Returns: data_uri – a dictionary with the path to the geomorphology raster.
Return type: dict
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_habitat_role
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_natural_habitats_vulnerability
(args)¶ Compute the natural habitat rank as described in the user manual.
- Inputs:
- args[‘habitats_csv_uri’]: uri to a comma-separated text file containing the list of habitats.
- args[‘habitats_directory_uri’]: uri to the directory where to find the habitat shapefiles.
- args[‘aoi_uri’]: uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer cell size in meters
- args[‘intermediate_directory’]: uri to the directory where intermediate files are stored
- Output:
- data_uri: a dictionary of all the intermediate file URIs.
- Intermediate outputs:
- For each habitat (habitat name ‘ABCD’, with id ‘X’) shapefile:
- ABCD_X_raster.tif: rasterized shapefile data.
- ABCD_influence.tif: habitat area of influence. Convolution between the rasterized shape data and a circular kernel whose radius is the habitat’s area of influence, truncated to cell_size.
- ABCD_influence_on_shore.tif: habitat influence along the shore
- habitats_available_data.tif: combined habitat rank along the shore using equation 4.4 in the user guide.
- habitats_missing_data.tif: shore sections without habitat data.
- habitats.tif: shore ranking using habitat and default ranks.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_relief_rank
(args)¶ Compute the relief index as described in InVEST’s user guide.
- Inputs:
- args[‘relief_uri’]: uri to an elevation dataset.
- args[‘aoi_uri’]: uri to the datasource of the region of interest.
- args[‘landmass_uri’]: uri to the landmass datasource where land is 1 and sea is 0.
- args[‘spread_radius’]: if the coastline from the geomorphology doesn’t match the land polygon’s shoreline, we can increase the overlap by ‘spreading’ the data from the geomorphology over a wider area. The wider the spread, the more ranking data overlaps with the coast. The spread is a convolution between the geomorphology ranking data and a 2D gaussian kernel of area (2*spread_radius+1)^2. A radius of zero reduces the kernel to the scalar 1, which means no spread at all.
- args[‘shore_raster_uri’]: URI to the shore tiff dataset.
- args[‘cell_size’]: granularity of the rasterization.
- args[‘intermediate_directory’]: where intermediate files are stored
- Output:
- Return R_relief as described in the user manual.
- A raster file called relief.tif
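The ‘spreading’ kernel described for args[‘spread_radius’] can be sketched as follows. gaussian_kernel is a hypothetical helper; the sigma choice is an assumption, as the model’s actual parameterization is not documented here:

```python
import math

def gaussian_kernel(radius, sigma=None):
    """Normalized 2D Gaussian kernel spanning (2*radius+1)^2 cells,
    used to 'spread' shore data (sketch). radius=0 degenerates to the
    scalar 1, i.e. no spread at all.
    """
    if sigma is None:
        # assumed default: half the radius (guarded against zero)
        sigma = max(radius / 2.0, 1e-9)
    size = 2 * radius + 1
    kernel = [[math.exp(-((i - radius) ** 2 + (j - radius) ** 2)
                        / (2 * sigma ** 2))
               for j in range(size)]
              for i in range(size)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```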
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_sea_level_rise
(args)¶ Compute the sea level rise index as described in the user manual.
- Inputs:
- args[‘sea_level_rise’]: shapefile with the sea level rise data.
- args[‘aoi_uri’]: uri to a datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer of the cell size in meters
- args[‘intermediate_directory’]: uri to the intermediate file directory
- Output:
- Return a dictionary of all the intermediate file URIs.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_level_rise.tif: sea level rise along the shore.
- sea_level_rise.tif: sea level rise index along the shore. If all the shore has the same value, assign the moderate index value 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_segment_exposure
(args)¶ Compute exposed and sheltered shoreline segment map.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_structure_protection
(args)¶ Compute the structure influence on the shore to later include it in the computation of the layers’ final rankings, as specified in Greg’s additional notes (decrement ranks around structure edges).
- Inputs:
- args[‘aoi_uri’]: string uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: dataset uri of the coastline within the AOI
- args[‘structures_uri’]: string of the structure datasource uri
- args[‘cell_size’]: integer of the size of a pixel in meters
- args[‘intermediate_directory’]: string of the uri where intermediate files are stored
- args[‘prefix’]: string prefix appended to every intermediate file
- Outputs:
- data_uri: a dictionary of the file uris generated in the intermediate directory.
- data_uri[‘adjusted_structures’]: string of the dataset uri obtained from reprojecting args[‘structures_uri’] and burning it onto the aoi. Contains the structure information across the whole aoi.
- data_uri[‘shore_structures’]: string uri pointing to the structure information along the coast only.
- data_uri[‘structure_influence’]: string uri pointing to a datasource of the spatial influence of the structures.
- data_uri[‘structure_edge’]: string uri pointing to the datasource of the edges of the structures.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_surge_potential
(args)¶ Compute surge potential index as described in the user manual.
- Inputs:
- args[‘bathymetry’]: bathymetry DEM file.
- args[‘landmass’]: shapefile containing land coverage data (land = 1, sea = 0)
- args[‘aoi_uri’]: uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: uri to a shore raster where the shoreline is 1, and everything else is 0.
- args[‘cell_size’]: integer number for the cell size in meters
- args[‘intermediate_directory’]: uri to the directory where intermediate files are stored
- Output:
- Return R_surge as described in the user guide.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_level_rise.tif: sea level rise along the shore.
- sea_level_rise.tif: sea level rise index along the shore.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_wave_exposure
(args)¶ Compute the wave exposure for every shore segment
- Inputs:
- args[‘climatic_forcing_uri’]: uri to wave datasource
- args[‘aoi_uri’]: uri to area of interest datasource
- args[‘fetch_distances’]: a dictionary of (point, list) pairs where point is a tuple of integer (row, col) coordinates and list is a maximal fetch distance in meters for each fetch sector.
- args[‘fetch_depths’]: same dictionary as fetch_distances, but list is a maximal fetch depth in meters for each fetch sector.
- args[‘cell_size’]: cell size in meters (integer)
- args[‘H_threshold’]: threshold (double) for the H function (eq. 7)
- args[‘intermediate_directory’]: uri to the directory that contains the intermediate files
- Outputs:
- data_uri: dictionary of the uri of all the files created in the function execution
- Detail of files:
A file called wave.tif that contains the wave exposure index along the shore.
- For each equiangular fetch sector k:
- F_k.tif: per-sector fetch value (see eq. 6).
- H_k.tif: per-sector H value (see eq. 7)
- E_o_k.tif: per-sector average oceanic wave power (eq. 6)
- E_l_k.tif: per-sector average wind-generated wave power (eq.9)
- E_w_k.tif: per-sector wave power (eq.5)
- E_w.tif: combined wave power.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_wind_exposure
(args)¶ Compute the wind exposure for every shore segment as in equation 4.5
- Inputs:
- args[‘climatic_forcing_uri’]: uri to the wind information datasource
- args[‘aoi_uri’]: uri to the area of interest datasource
- args[‘fetch_distances’]: a dictionary of (point, list) pairs where point is a tuple of integer (row, col) coordinates and list is a maximal fetch distance in meters for each fetch sector.
- args[‘fetch_depths’]: same dictionary as fetch_distances, but list is a maximal fetch depth in meters for each fetch sector.
- args[‘cell_size’]: granularity of the rasterization.
- args[‘intermediate_directory’]: where intermediate files are stored
- args[‘prefix’]: string
- Outputs:
- data_uri: dictionary of the uri of all the files created in the function execution
- File description:
REI.tif: combined REI value of the wind exposure index for all sectors along the shore.
- For each equiangular fetch sector n:
- REI_n.tif: per-sector REI value (U_n * P_n * F_n).
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
convert_tif_to_csv
(tif_uri, csv_uri=None, mask=None)¶ Converts a single-band geo-tiff file to a csv text file.
- Inputs:
- tif_uri: the uri to the file to be converted
- csv_uri: uri to the output file. The file should not exist.
- Outputs:
- returns the output file uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
convert_tifs_to_csv
(tif_list, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
detect_shore
(land_sea_array, aoi_array, aoi_nodata)¶ Extract the boundary between land and sea from a raster.
- raster: numpy array with sea, land and nodata values.
returns a numpy array the same size as the input raster with the shore encoded as ones, and zeros everywhere else.
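A minimal in-memory sketch of shore detection, assuming a shore pixel is a land pixel (1) with at least one sea pixel (0) among its 8 neighbors; nodata handling is omitted:

```python
def shore_pixels(raster):
    """Mark land pixels that touch at least one sea pixel as shore.

    raster: 2D list with land = 1 and sea = 0.
    Returns a same-sized 2D list with shore = 1, everything else = 0.
    """
    rows, cols = len(raster), len(raster[0])
    shore = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if raster[r][c] != 1:
                continue  # only land pixels can be shore
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and raster[rr][cc] == 0):
                        shore[r][c] = 1
    return shore
```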
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
detect_shore_uri
(landmass_raster_uri, aoi_raster_uri, output_uri)¶ Extract the boundary between land and sea from a raster.
- raster: numpy array with sea, land and nodata values.
returns a numpy array the same size as the input raster with the shore encoded as ones, and zeros everywhere else.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
dict_to_point_shapefile
(dict_data, out_path, spat_ref, columns, row_order)¶ Create a point shapefile from a dictionary.
Parameters: - dict_data (dict) – a dictionary where keys point to a sub dictionary that has at least keys ‘x’, ‘y’. Each sub dictionary will be added as a point feature using ‘x’, ‘y’ as the geometry for the point. All other key, value pairs in the sub dictionary will be added as fields and values to the point feature.
- out_path (string) – a path on disk for the point shapefile.
- spat_ref (osr spatial reference) – an osr spatial reference to use when creating the layer.
- columns (list) – a list of strings representing the order the field names should be written, so that the attribute table reflects this order.
- row_order (list) – a list of tuples that match the keys of ‘dict_data’. This is so we can add the points in a specific order and hopefully populate the attribute table in that order.
Returns: Nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
disc_kernel
(r)¶ Create a (r+1)^2 disc-shaped array filled with 1s where d(i-r,j-r) <= r
Input: r, the kernel radius. r=0 is a single scalar of value 1.
- Output: a (r+1)x(r+1) array with:
- 1 if cell is closer than r units to the kernel center (r,r),
- 0 otherwise.
Distances are Euclidean.
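A minimal sketch of such a kernel in plain Python; here the full disc is generated, spanning (2r+1) x (2r+1) cells around the center (r, r), so the array shape in the actual implementation may differ:

```python
def disc_kernel(r):
    """Disc-shaped 1/0 kernel: 1 where the Euclidean distance to the
    center (r, r) is <= r, 0 otherwise (sketch). r=0 degenerates to
    the single scalar 1.
    """
    size = 2 * r + 1
    return [[1 if (i - r) ** 2 + (j - r) ** 2 <= r * r else 0
             for j in range(size)]
            for i in range(size)]
```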
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
enumerate_shapefile_fields
(shapefile_uri)¶ Enumerate all the fields in a shapefile.
- Inputs:
- -shapefile_uri: uri to the shapefile which fields have to be enumerated
Returns a nested list of the field names in the order they are stored in the layer, grouped per layer in the order the layers appear.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
execute
(args)¶ Entry point for coastal vulnerability core
args[‘’] - actual data structure the way I want them look like :RICH:DESCRIBE ALL THE ARGUMENTS IN ARGS
returns nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
fetch_vectors
(angles)¶ Convert the angles passed as arguments to raster vector directions.
- Input:
- -angles: list of angles in radians
- Outputs:
- -directions: vector directions numpy array of size (len(angles), 2)
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
find_attribute_field
(field_name, shapefile_uri)¶ Look for a field name in the shapefile attribute table. Search is case insensitive.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
get_field
(field_name, shapefile, case_sensitive=True)¶ Return the field in shapefile that corresponds to field_name, None otherwise.
- Inputs:
- field_name: string to look for.
- shapefile: where to look for the field.
- case_sensitive: indicates whether the case is relevant when
comparing field names
- Output:
- the field name in the shapefile that corresponds to field_name,
None otherwise.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
get_layer_and_index_from_field_name
(field_name, shapefile)¶ Given a field name, return its layer and field index.
- Inputs:
- field_name: string to look for.
- shapefile: where to look for the field.
- Output:
- A tuple (layer, field_index) if the field exists in ‘shapefile’, (None, None) otherwise.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
has_field
(field_name, shapefile, case_sensitive=True)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
is_point_datasource
(uri)¶ Returns True if the datasource is a point shapefile
- Inputs:
- -uri: uri to a datasource
- Outputs:
- -True if uri points to a point datasource, False otherwise
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
is_polygon_datasource
(uri)¶ Returns True if the datasource is a polygon shapefile
- Inputs:
- -uri: uri to a datasource
- Outputs:
- -True if uri points to a polygon datasource, False otherwise
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
nearest_vector_neighbor
(neighbors_path, point_path, inherit_field)¶ Inherit a field value from the closest shapefile feature.
Each point in ‘point_path’ will inherit field ‘inherit_field’ from the closest feature in ‘neighbors_path’. Uses an rtree to build up a spatial index of ‘neighbors_path’ bounding boxes to find nearest points.
Parameters: - neighbors_path (string) – a filepath on disk to a shapefile that has at least one field called ‘inherit_field’
- point_path (string) – a filepath on disk to a shapefile. A field ‘inherit_field’ will be added to the point features. The value of that field will come from the closest feature’s field in ‘neighbors_path’
- inherit_field (string) – the name of the field in ‘neighbors_path’ to pass along to ‘point_path’.
Returns: Nothing
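The nearest-neighbor inheritance can be illustrated with a brute-force sketch on bare coordinates. inherit_nearest is a hypothetical helper; the real function works on OGR features and accelerates the search with an rtree index:

```python
def inherit_nearest(points, neighbors):
    """For each (x, y) point, inherit the value of the nearest
    neighbor feature (brute-force sketch).

    points: list of (x, y) tuples.
    neighbors: list of ((x, y), value) pairs.
    Returns the inherited value for each point, in order.
    """
    def sq_dist(a, b):
        # squared Euclidean distance is enough for a nearest search
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    return [min(neighbors, key=lambda n: sq_dist(p, n[0]))[1]
            for p in points]
```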
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_dataset
(dataset_uri, aoi_uri, cell_size, output_uri)¶ Function that preprocesses an input dataset (clip, reproject, resample) so that it is ready to be used in the model.
- Inputs:
- dataset_uri: uri to the input dataset to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection.
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset.
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_inputs
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_point_datasource
(datasource_uri, aoi_uri, cell_size, output_uri, field_list, nodata=0.0)¶ Function that converts a point shapefile to a dataset by clipping, reprojecting, resampling, burning, and extrapolating burnt values.
- Inputs:
- datasource_uri: uri to the datasource to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection.
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset.
- field_name: name of the field in the attribute table to get the values from. If a number, use it as a constant. If Null, use 1.
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_polygon_datasource
(datasource_uri, aoi_uri, cell_size, output_uri, field_name=None, all_touched=False, nodata=0.0, empty_raster_allowed=False)¶ Function that converts a polygon shapefile to a dataset by clipping, reprojecting, resampling, burning, and extrapolating burnt values.
- Inputs:
- datasource_uri: uri to the datasource to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection.
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset.
- field_name: name of the field in the attribute table to get the values from. If a number, use it as a constant. If Null, use 1.
- all_touched: boolean flag used in gdal’s vectorize_rasters options flag
- nodata: float used as nodata in the output raster
- empty_raster_allowed: flag that allows the function to return an empty raster if set to True, or break if set to False. False is the default.
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
projections_match
(projection_list, silent_mode=True)¶ Check that two gdal datasets are projected identically. Functionality adapted from Doug’s biodiversity_biophysical.check_projections
- Inputs:
- projection_list: list of wkt projections to compare
- silent_mode: if True (default), don’t output anything; otherwise report if and why some projections are not the same.
- Output:
- False if the datasets are not projected identically.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rank_by_quantiles
(X, bin_count)¶ Tries to evenly distribute elements in X among ‘bin_count’ bins. If the boundary of a bin falls within a group of elements with the same value, all these elements will be included in that bin.
- Inputs:
- X: a 1D numpy array of the elements to bin
- bin_count: the number of bins
Returns the bin boundaries ready to be used by numpy.digitize
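The boundaries produced here are meant to feed numpy.digitize. A rough illustrative sketch (a plain quantile cut, assuming numpy; the library function additionally keeps groups of tied values together in one bin):

```python
import numpy as np

def quantile_bin_boundaries(X, bin_count):
    # Hypothetical helper (not the library code): boundaries at evenly
    # spaced quantiles so elements spread roughly evenly across bins.
    quantiles = np.linspace(0, 100, bin_count + 1)[1:-1]
    return np.percentile(np.asarray(X), quantiles)

X = np.array([1, 2, 2, 2, 3, 7, 8, 9])
bounds = quantile_bin_boundaries(X, 2)   # interior boundary at the median, 2.5
ranks = np.digitize(X, bounds)           # bin index 0 or 1 for each element
```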
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rank_shore
(X, bin_count)¶ Assign a rank based on natural breaks (Jenks natural breaks for now).
- Inputs:
- X: a numpy array with the elements to be ranked
- bins: the number of ranks (integer)
- Outputs:
- output: a numpy array with rankings in the interval
[0, bin_count-1] that correspond to the elements of X (rank of X[i] == outputs[i]).
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_from_shapefile_uri
(shapefile_uri, aoi_uri, cell_size, output_uri, field=None, all_touched=False, nodata=0.0, datatype=<Mock id='140294660256336'>)¶ Burn default or user-defined data from a shapefile on a raster.
- Inputs:
- shapefile: the dataset to be discretized
- aoi_uri: URI to an AOI shapefile
- cell_size: coarseness of the discretization (in meters)
- output_uri: uri where the raster will be saved
- field: optional field name (string) from which to extract the data
- all_touched: optional boolean that indicates if we use GDAL’s ALL_TOUCHED parameter when rasterizing.
- Output: A raster where:
If field is specified, the field data is used as burn value. If field is not specified, then:
- shapes on the first layer are encoded as 1s
- the rest is encoded as 0
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_to_point_vector
(raster_path, point_vector_path)¶ Create a point shapefile from raster pixels.
Creates a point feature from each non-nodata raster pixel, where the geometry for the point is the center of the pixel. A field ‘Value’ is added to each point feature with the value from the pixel. The created point shapefile will use a spatial reference taken from the raster’s projection.
Parameters: - raster_path (string) – a filepath on disk of the raster to convert into a point shapefile.
- point_vector_path (string) – a filepath on disk for where to save the shapefile. Must have a ‘.shp’ extension.
Returns: Nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_wkt
(raster)¶ Return the projection of a raster in the OpenGIS WKT format.
- Input:
- raster: raster file
- Output:
- a projection encoded as a WKT-compliant string.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
read_habitat_info
(habitats_csv_uri, habitats_directory_uri)¶ Extract the habitats information from the csv file and directory.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rowcol_to_xy
(rows, cols, raster)¶ non-uri version of rowcol_to_xy_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rowcol_to_xy_uri
(rows, cols, raster_uri)¶ converts row/col coordinates into x/y coordinates using raster_uri’s geotransform
- Inputs:
- rows: integer scalar or numpy array of row coordinates
- cols: integer scalar or numpy array of column coordinates
- raster_uri: uri from which the geotransform is extracted
Returns a tuple (X, Y) of scalars or numpy arrays of the projected coordinates
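The geotransform arithmetic behind this conversion can be sketched as follows (a standalone illustration of the standard GDAL affine mapping, not the library function itself):

```python
def rowcol_to_xy(rows, cols, geotransform):
    # GDAL geotransform layout: (origin_x, pixel_w, rot_x, origin_y, rot_y, pixel_h)
    ox, dx, rx, oy, ry, dy = geotransform
    x = ox + cols * dx + rows * rx
    y = oy + cols * ry + rows * dy
    return x, y

# north-up raster: origin (1000, 2000), 30 m square cells
x, y = rowcol_to_xy(2, 3, (1000.0, 30.0, 0.0, 2000.0, 0.0, -30.0))  # (1090.0, 1940.0)
```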
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_array_to_raster
(array, out_uri, base_uri, cell_size, no_data=None, default_nodata=0.0, gdal_type=<Mock id='140294660256208'>)¶ Save an array to a raster constructed from an AOI.
- Inputs:
- array: numpy array to be saved
- out_uri: output raster file URI.
- base_uri: URI to the AOI from which to construct the template raster
- cell_size: granularity of the rasterization in meters
- recompute_nodata: if True, recompute nodata to avoid interference with existing raster data
- no_data: value of nodata used in the function. If None, revert to default_nodata.
- default_nodata: nodata used if no_data is set to None.
- Output:
- save the array in a raster file constructed from the AOI of granularity specified by cell_size
- Return the array uri.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_depths
(fetch, aoi_uri, cell_size, base_path, prefix)¶ Create dictionary of raster filenames of fetch F(n) for each sector n.
- Inputs:
- wind_data: wind data points adjusted to the aoi
- aoi: used to create the rasters for each sector
- cell_size: raster granularity in meters
- base_path: base path where the generated raster will be saved
- Output:
- A dictionary where keys are sector angles in degrees and values are raster filenames where F(n) is defined on each cell
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_distances
(fetch, aoi_uri, cell_size, base_path, prefix='')¶ Create dictionary of raster filenames of fetch F(n) for each sector n.
- Inputs:
- wind_data: wind data points adjusted to the aoi
- aoi: used to create the rasters for each sector
- cell_size: raster granularity in meters
- base_path: base path where the generated raster will be saved
Output: A list of raster URIs corresponding to sectors of increasing angles where data points encode the sector’s fetch distance for that point
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_to_outputs
(args)¶ Function that copies the fetch information (depths and distances) to the outputs directory.
- Inputs:
- args[‘fetch_distance_uris’]: A dictionary of (‘string’: string) entries where the first string is the sector in degrees, and the second string is a uri pointing to the file that contains the fetch distances for this sector.
- args[‘fetch_depths_uris’]: A dictionary similar to the distance one, but the second string points to the file that contains fetch depths, not distances.
- args[‘prefix’]: String appended before the filenames. Currently used to follow Greg’s output labelling scheme.
- Outputs:
- data_uri that contains the uri of the new files in the outputs
directory, one for fetch distance and one for fetch depths for each fetch direction ‘n’, for a total of 2n.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_local_wave_exposure_to_subdirectory
(args)¶ Copy local wave exposure to the outputs/ directory.
- Inputs:
- args[‘E_l’]: uri to the local wave exposure data args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_oceanic_wave_exposure_to_subdirectory
(args)¶ Copy oceanic wave exposure to the outputs/ directory.
- Inputs:
- args[‘E_o’]: uri to the oceanic wave exposure data args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_structure_to_subdirectory
(args)¶ Save structure data to its intermediate subdirectory, under a custom prefix.
- Inputs:
- args[‘structure_edges’]: the data’s uri to save to /outputs
- args[‘prefix’]: prefix to add to the new filename. Currently used to mirror the labeling of outputs in Greg’s notes.
- Outputs:
- data_uri: a dictionary of the uri where the data has been saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_wind_generated_waves_to_subdirectory
(args)¶ Copy the wave height and wave period to the outputs/ directory.
- Inputs:
- args[‘wave_height’][sector]: uri to sector’s wave height data
- args[‘wave_period’][sector]: uri to sector’s wave period data
- args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
set_H_threshold
(threshold)¶ Return 0 if fetch is strictly below a threshold in km, 1 otherwise.
- Inputs:
- fetch: fetch distance in meters.
Returns: 1 if fetch >= threshold (in km), 0 if fetch < threshold. Note: conforms to equation 4.8 in the InVEST documentation.
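The threshold test reduces to a unit conversion plus a comparison. A sketch assuming the units read as documented (fetch in meters, threshold in kilometers), with a hypothetical helper name:

```python
def h_threshold(fetch_m, threshold_km):
    # 1 when the fetch reaches the threshold, 0 when strictly below it.
    return 1 if fetch_m >= threshold_km * 1000.0 else 0

h_threshold(60000.0, 50)  # 1: 60 km of fetch clears a 50 km threshold
h_threshold(4000.0, 50)   # 0: 4 km of fetch does not
```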
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
shapefile_wkt
(shapefile)¶ Return the projection of a shapefile in the OpenGIS WKT format.
- Input:
- shapefile: shapefile whose projection is returned
- Output:
- a projection encoded as a WKT-compliant string.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
xy_to_rowcol
(x, y, raster)¶ non-uri version of xy_to_rowcol_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
xy_to_rowcol_uri
(x, y, raster_uri)¶ Converts x/y coordinates into row/col coordinates; the inverse of rowcol_to_xy_uri.
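The inverse mapping can be sketched for the common north-up case (zero rotation terms; an assumption made here for illustration, not a constraint of the library function):

```python
def xy_to_rowcol(x, y, geotransform):
    # Invert the affine mapping, assuming no rotation (gt[2] == gt[4] == 0).
    ox, dx, _, oy, _, dy = geotransform
    col = int((x - ox) / dx)
    row = int((y - oy) / dy)
    return row, col

row, col = xy_to_rowcol(1090.0, 1940.0, (1000.0, 30.0, 0.0, 2000.0, 0.0, -30.0))  # (2, 3)
```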
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
aggregate_csvs
(csv_list, out_uri)¶ Concatenate 3-row csv files created with tif2csv
- Inputs:
- csv_list: list of csv_uri strings
- Outputs:
- uri_output: the output uri of the concatenated csv
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
aggregate_tifs_from_directory
(path='.', mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
convert_tif_to_csv
(tif_uri, csv_uri=None, mask=None)¶ Converts a single band geo-tiff file to a csv text file
- Inputs:
- tif_uri: the uri to the file to be converted
- csv_uri: uri to the output file. The file should not exist.
- Outputs:
- returns the output file uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
convert_tifs_to_csv
(tif_list, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
execute
(args)¶
The Crop Production module executes the Crop Production model.
-
natcap.invest.crop_production.crop_production.
calc_area_costs
(lookup_dict, economics_dict, aoi_raster)¶ Calculate area-related costs (e.g. labor, seed, machine, irrigation).
Parameters: - lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- economics_dict (dict) – economic information.
- aoi_raster (str) – path to aoi raster.
Returns: area_cost_dict – {<code>: <total of area-related costs>}
Return type: dict
-
natcap.invest.crop_production.crop_production.
calc_fertilizer_costs
(code, code_dict, aoi_raster, fertilizer_dict)¶ Calculate fertilizer application rate costs.
Parameters: - code (int) – crop code.
- code_dict (dict) – economic information of crop.
- aoi_raster (str) – path to aoi raster.
- fertilizer_dict (dict) – mapping of fertilizers to their respective raster paths.
Returns: fertilizer_costs – total cost of fertilizer application for
a given crop.
Return type: float
-
natcap.invest.crop_production.crop_production.
check_inputs
(args)¶ Check user provides inputs necessary for particular yield functions.
Parameters: args (dict) – user-provided arguments dictionary.
-
natcap.invest.crop_production.crop_production.
compute_financial_analysis
(yield_dict, economics_table, aoi_raster, lookup_dict, fertilizer_dict, financial_analysis_table)¶ Compute financial analysis.
Parameters: - yield_dict (collections.Counter) – mapping from crop code to total yield.
- economics_table (str) – path to table containing economic information for each crop.
- aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- fertilizer_dict (dict) – mapping of fertilizers to their respective raster paths.
- financial_analysis_table (str) – path to output table.
-
natcap.invest.crop_production.crop_production.
compute_nutritional_contents
(yield_dict, lookup_dict, nutrient_table, nutritional_contents_table)¶ Compute nutritional contents of crop yields.
Parameters: - yield_dict (collections.Counter) – mapping from crop code to total yield.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- nutrient_table (str) – path to table containing information about the nutrient contents of each crop.
- nutritional_contents_table (str) – path to output table.
-
natcap.invest.crop_production.crop_production.
compute_observed_yield
(aoi_raster, lookup_dict, observed_yield_dict, yield_raster)¶ Compute observed yield.
Parameters: - aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- observed_yield_dict (dict) – mapping of crops to observed yield rasters.
- yield_raster (str) – path to output directory.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
compute_percentile_yield
(aoi_raster, lookup_dict, climate_bin_dict, percentile_yield_dict, yield_raster, percentile_yield)¶ Compute yield using percentile method.
Parameters: - aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- climate_bin_dict (dict) – mapping of codes to climate bin rasters.
- percentile_yields_dict (dict) – mapping of crops to their respective information.
- yield_raster (str) – path to output raster.
- percentile_yield (str) – selected yield percentile.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
compute_regression_yield
(aoi_raster, lookup_dict, climate_bin_dict, regression_coefficient_dict, fertilizer_dict, irrigation_raster, yield_raster)¶ Compute regression yield.
Parameters: - aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- climate_bin_dict (dict) – mapping of codes to climate bin rasters.
- fertilizer_dir (str) – path to directory containing fertilizer rasters.
- regression_coefficients_dict (dict) – nested dictionary of regression coefficients for each crop code.
- irrigation_raster (str) – path to is_irrigated raster.
- yield_raster (str) – path to output raster.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
create_map
(d, sub_dict_key)¶ Shorten a nested dictionary into a one-to-one mapping.
Parameters: - d (dict) – nested dictionary.
- sub_dict_key (object) – key in sub-dictionary whose value becomes value in return dictionary.
Returns: one_to_one_dict – dictionary that is a one-to-one mapping.
Return type: dict
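The described one-to-one flattening is simple to illustrate (a sketch with made-up crop data, not the library code or model output):

```python
def create_map(d, sub_dict_key):
    # Keep each top-level key; replace its sub-dictionary with one value.
    return {k: sub[sub_dict_key] for k, sub in d.items()}

nested = {1: {'name': 'maize', 'price': 2.0},
          2: {'name': 'rice', 'price': 3.5}}
names = create_map(nested, 'name')  # {1: 'maize', 2: 'rice'}
```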
-
natcap.invest.crop_production.crop_production.
execute
(args)¶ Crop Production.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['lookup_table'] (str) – filepath to a CSV table used to convert the crop code provided in the Crop Map to the crop name that can be used for searching through inputs and formatting outputs.
- args['aoi_raster'] (str) – a GDAL-supported raster representing a crop management scenario.
- args['dataset_dir'] (str) – the provided folder should contain a set of folders and data specified in the ‘Running the Model’ section of the model’s User Guide.
- args['yield_function'] (str) – the method used to compute crop yield. Can be one of three: ‘observed’, ‘percentile’, and ‘regression’.
- args['percentile_column'] (str) – for percentile yield function, the table column name must be provided so that the program can fetch the correct yield values for each climate bin.
- args['fertilizer_dir'] (str) – path to folder that contains a set of GDAL-supported rasters representing the amount of Nitrogen (N), Phosphorous (P2O5), and Potash (K2O) applied to each area of land (kg/ha).
- args['irrigation_raster'] (str) – filepath to a GDAL-supported raster representing whether irrigation occurs or not. A zero value indicates that no irrigation occurs. A one value indicates that irrigation occurs. If any other values are provided, irrigation is assumed to occur within that cell area.
- args['compute_nutritional_contents'] (boolean) – if true, calculates nutrition from crop production and creates associated outputs.
- args['nutrient_table'] (str) – filepath to a CSV table containing information about the nutrient contents of each crop.
- args['compute_financial_analysis'] (boolean) – if true, calculates economic returns from crop production and creates associated outputs.
- args['economics_table'] (str) – filepath to a CSV table containing information related to market price of a given crop and the costs involved with producing that crop.
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': 'scenario_name',
    'lookup_table': 'path/to/lookup_table',
    'aoi_raster': 'path/to/aoi_raster',
    'dataset_dir': 'path/to/dataset_dir/',
    'yield_function': 'regression',
    'percentile_column': 'yield_95th',
    'fertilizer_dir': 'path/to/fertilizer_rasters_dir/',
    'irrigation_raster': 'path/to/is_irrigated_raster',
    'compute_nutritional_contents': True,
    'nutrient_table': 'path/to/nutrition_table',
    'compute_financial_analysis': True,
    'economics_table': 'path/to/economics_table',
}
-
natcap.invest.crop_production.crop_production.
get_fertilizer_rasters
(fertilizer_dir, cache_dir, aoi_raster)¶ Get fertilizer rasters.
Parameters: - fertilizer_dir (str) – path to directory containing fertilizer rasters.
- cache_dir (str) – path to cache directory.
- aoi_raster (str) – path to aoi raster.
Returns: fertilizer_dict – mapping of fertilizers to their respective
raster paths.
Return type: dict
-
natcap.invest.crop_production.crop_production.
get_files_in_dir
(path)¶ Fetch mapping of files in directory.
Each key in the mapping is the first part of the filename split by an underscore. Each value in the mapping is the filepath.
Parameters: path (str) – path to directory. Returns: files_dict – dict([(filename, filepath), ...]). Return type: dict
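A sketch of the described mapping (the helper name, filenames, and directory here are made up for illustration):

```python
import os
import tempfile

def files_by_prefix(path):
    # Key: part of the filename before the first underscore; value: full path.
    return {name.split('_')[0]: os.path.join(path, name)
            for name in os.listdir(path)}

d = tempfile.mkdtemp()
for name in ('maize_yield.tif', 'rice_yield.tif'):
    open(os.path.join(d, name), 'w').close()
mapping = files_by_prefix(d)
```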
-
natcap.invest.crop_production.crop_production.
get_global_dataset
(dataset_dir)¶ Get global dataset.
Parameters: dataset_dir (str) – path to spatial dataset. Returns: dataset_dict – tree-like structure of spatial dataset filenames and filepaths. Return type: dict
-
natcap.invest.crop_production.crop_production.
get_lookup_dict
(aoi_raster, lookup_table)¶ Get lookup information for AOI.
Parameters: - aoi_raster (str) – path to aoi raster.
- lookup_table (str) – path to lookup table.
Returns: lookup_dict – mapping of codes to lookup info for crops in aoi.
Return type: dict
-
natcap.invest.crop_production.crop_production.
get_percentile_yields
(percentile_tables, lookup_dict)¶ Get percentile yield information.
Parameters: - percentile_tables (dict) – mapping of crops to their respective table filepaths.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
Returns: percentile_yields_dict – mapping of crops to their respective
information.
Return type: dict
-
natcap.invest.crop_production.crop_production.
get_regression_coefficients
(regression_tables, lookup_dict)¶ Get regression coefficients.
Parameters: - regression_tables (dict) – mapping of codes to regression coefficient tables.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
Returns: regression_coefficients_dict – nested dictionary of regression
coefficients for each crop code.
Return type: dict
-
natcap.invest.crop_production.crop_production.
read_from_raster
(input_raster, offset_block)¶ Read numpy array from raster block.
Parameters: - input_raster (str) – filepath to input raster.
- offset_block (dict) – dictionary of offset information. Keys in the dictionary include ‘xoff’, ‘yoff’, ‘win_xsize’, and ‘win_ysize’.
Returns: array – a blocked array of the input raster.
Return type: np.array
-
natcap.invest.crop_production.crop_production.
reclass
(array, d, nodata=0.0)¶ Reclassify values in numpy ndarray.
Values in array that are not in d are reclassed to np.nan.
Parameters: - array (np.array) – input data.
- d (dict) – reclassification map.
- nodata (float) – reclass value for numbers not provided in the reclassification map.
Returns: reclass_array – reclassified array.
Return type: np.array
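A minimal sketch of the reclassification (using the documented nodata fallback; the library’s handling of np.nan may differ):

```python
import numpy as np

def reclass(array, d, nodata=0.0):
    # Map each value through d; values the map does not cover fall back
    # to the nodata value.
    out = np.full(array.shape, nodata, dtype=float)
    for old, new in d.items():
        out[array == old] = new
    return out

a = np.array([1, 2, 3, 9])
reclass(a, {1: 10.0, 2: 20.0, 3: 30.0})  # [10.0, 20.0, 30.0, 0.0]
```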
-
natcap.invest.crop_production.crop_production.
reproject_global_rasters
(global_dataset_dict, cache_dir, aoi_raster, lookup_dict)¶ Reproject global rasters.
Parameters: - global_dataset_dict (dict) – mapping of crops to their respective data filepaths.
- cache_dir (str) – path to directory in which to store reprojected rasters.
- aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
Returns: observed_yield_dict – mapping of crops to observed yield rasters.
Return type: dict
-
natcap.invest.crop_production.crop_production.
reproject_raster
(src_path, template_path, dst_path)¶ Reproject raster.
Block-size set to 256 x 256.
Parameters: - src_path (str) – path to source raster.
- template_path (str) – path to template raster.
- dst_path (str) – path to destination raster.
-
natcap.invest.crop_production.crop_production.
run_observed_yield
(global_dataset_dict, cache_dir, aoi_raster, lookup_dict, yield_raster)¶ Run observed yield model.
Parameters: - global_dataset_dict (dict) – mapping of crops to their respective data filepaths.
- cache_dir (str) – path to cache directory.
- aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- yield_raster (str) – path to output raster.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
run_percentile_yield
(climate_bin_maps, percentile_tables, cache_dir, aoi_raster, lookup_dict, yield_raster, percentile_yield)¶ Run percentile yield model.
Parameters: - climate_bin_dict (dict) – mapping of codes to climate bin rasters.
- percentile_tables (dict) – mapping of crops to their respective table filepaths.
- cache_dir (str) – path to cache directory.
- aoi_raster (str) – path to aoi raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- yield_raster (str) – path to output raster.
- percentile_yield (str) – selected yield percentile.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
run_regression_yield
(climate_bin_maps, regression_tables, cache_dir, aoi_raster, fertilizer_dict, irrigation_raster, lookup_dict, yield_raster)¶ Run regression yield model.
Parameters: - climate_bin_maps (dict) – mapping of crops to climate bin rasters.
- regression_tables (dict) – mapping of codes to regression coefficient tables.
- cache_dir (str) – path to cache directory.
- aoi_raster (str) – path to aoi raster.
- fertilizer_dict (dict) – mapping of fertilizers to their respective raster paths.
- irrigation_raster (str) – path to intermediate is_irrigated raster.
- lookup_dict (dict) – mapping of codes to lookup info for crops in aoi.
- yield_raster (str) – path to output raster.
Returns: yield_dict – mapping from crop code to total
yield.
Return type: collections.Counter
-
natcap.invest.crop_production.crop_production.
write_to_raster
(output_raster, array, xoff, yoff)¶ Write numpy array to raster block.
Parameters: - output_raster (str) – filepath to output raster.
- array (np.array) – block to save to raster.
- xoff (int) – offset index for x-dimension.
- yoff (int) – offset index for y-dimension.
DBF accessing helpers.
FIXME: more documentation needed
Examples
Create new table, setup structure, add records:
dbf = Dbf(filename, new=True)
dbf.addField(
    ("NAME", "C", 15),
    ("SURNAME", "C", 25),
    ("INITIALS", "C", 10),
    ("BIRTHDATE", "D"),
)
for (n, s, i, b) in (
    ("John", "Miller", "YC", (1980, 10, 11)),
    ("Andy", "Larkin", "", (1980, 4, 11)),
):
    rec = dbf.newRecord()
    rec["NAME"] = n
    rec["SURNAME"] = s
    rec["INITIALS"] = i
    rec["BIRTHDATE"] = b
    rec.store()
dbf.close()
Open an existing dbf, read some data:
dbf = Dbf(filename, True)
for rec in dbf:
    for fldName in dbf.fieldNames:
        print '%s: %s (%s)' % (fldName, rec[fldName], type(rec[fldName]))
dbf.close()
-
class
natcap.invest.dbfpy.dbf.
Dbf
(f, readOnly=False, new=False, ignoreErrors=False)¶ Bases:
object
DBF accessor.
- FIXME:
- docs and examples needed (don’t forget to tell about problems adding new fields on the fly)
- Implementation notes:
_new
field is used to indicate whether this is a new data table. addField can be used only for new tables! Once at least one record has been appended to the table, its structure cannot be changed.
-
HeaderClass
¶ alias of
DbfHeader
-
INVALID_VALUE
= <INVALID>¶
-
RecordClass
¶ alias of
DbfRecord
-
__getitem__
(index)¶ Return DbfRecord instance.
-
__len__
()¶ Return number of records.
-
__setitem__
(index, record)¶ Write DbfRecord instance to the stream.
-
addField
(*defs)¶ Add field definitions.
For more information see header.DbfHeader.addField.
-
append
(record)¶ Append
record
to the database.
-
changed
¶
-
close
()¶
-
closed
¶
-
fieldDefs
¶
-
fieldNames
¶
-
flush
()¶ Flush data to the associated stream.
-
header
¶
-
ignoreErrors
¶ Error processing mode for DBF field value conversion
if set, failing field value conversion will return
INVALID_VALUE
instead of raising conversion error.
-
indexOfFieldName
(name)¶ Index of field named
name
.
-
name
¶
-
newRecord
()¶ Return a new record, which belongs to this table.
-
recordCount
¶
-
stream
¶
.DBF creation helpers.
- Note: this is a legacy interface. New code should use Dbf class
- for table creation (see examples in dbf.py)
- TODO:
- handle Memo fields.
- check length of the fields according to http://www.clicketyclick.dk/databases/xbase/format/data_types.html
-
class
natcap.invest.dbfpy.dbfnew.
dbf_new
¶ Bases:
object
New .DBF creation helper.
Example Usage:
dbfn = dbf_new()
dbfn.add_field("name", 'C', 80)
dbfn.add_field("price", 'N', 10, 2)
dbfn.add_field("date", 'D', 8)
dbfn.write("tst.dbf")
Note
This module cannot handle Memo-fields, they are special.
-
FieldDefinitionClass
¶ alias of
_FieldDefinition
-
add_field
(name, typ, len, dec=0)¶ Add field definition.
Parameters: - name – field name (str object). field name must not contain ASCII NULs and its length shouldn’t exceed 10 characters.
- typ – type of the field. this must be a single character from the “CNLMDT” set meaning character, numeric, logical, memo, date and date/time respectively.
- len – length of the field. this argument is used only for the character and numeric fields. all other fields have fixed length. FIXME: use None as a default for this argument?
- dec – decimal precision. used only for the numeric fields.
-
fields
¶
-
write
(filename)¶ Create empty .DBF file using current structure.
-
DBF fields definitions.
- TODO:
- make memos work
-
natcap.invest.dbfpy.fields.
lookupFor
(typeCode)¶ Return field definition class for the given type code.
typeCode
must be a single character. That type should be previously registered. Use registerField to register a new field class.
Returns: Return value is a subclass of the DbfFieldDef.
-
class
natcap.invest.dbfpy.fields.
DbfCharacterFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the character field.
-
decodeValue
(value)¶ Return string object.
Return value is a
value
argument with stripped right spaces.
-
defaultValue
= ''¶
-
encodeValue
(value)¶ Return raw data string encoded from a
value
.
-
typeCode
= 'C'¶
-
-
class
natcap.invest.dbfpy.fields.
DbfFloatFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfNumericFieldDef
Definition of the float field - same as numeric.
-
typeCode
= 'F'¶
-
-
class
natcap.invest.dbfpy.fields.
DbfLogicalFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the logical field.
-
decodeValue
(value)¶ Return True, False or -1 decoded from
value
.
-
defaultValue
= -1¶
-
encodeValue
(value)¶ Return a character from the “TF?” set.
Returns: Return value is “T” if value is True, “?” if value is -1, “F” otherwise.
-
length
= 1¶
-
typeCode
= 'L'¶
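The “TF?” encoding above can be sketched in isolation (a hypothetical standalone helper, not the class method itself):

```python
def encode_logical(value):
    # True -> "T"; -1 (unknown) -> "?"; anything else (e.g. False) -> "F".
    if value is True:
        return 'T'
    if value == -1:
        return '?'
    return 'F'
```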
-
-
class
natcap.invest.dbfpy.fields.
DbfDateFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the date field.
-
decodeValue
(value)¶ Return a
datetime.date
instance decoded fromvalue
.
-
defaultValue
= datetime.date(2016, 6, 10)¶
-
encodeValue
(value)¶ Return a string-encoded value.
value
argument should be a value suitable for the utils.getDate call.Returns: Return value is a string in format “yyyymmdd”.
-
length
= 8¶
-
typeCode
= 'D'¶
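The “yyyymmdd” encoding is just a fixed-width date format; a standalone sketch (a hypothetical helper, not the class method):

```python
import datetime

def encode_date(value):
    # 8-byte DBF date field: "yyyymmdd".
    return value.strftime('%Y%m%d')

encode_date(datetime.date(1980, 10, 11))  # '19801011'
```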
-
-
class
natcap.invest.dbfpy.fields.
DbfMemoFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the memo field.
Note: memos aren’t currently completely supported.
-
decodeValue
(value)¶ Return int .dbt block number decoded from the string object.
-
defaultValue
= ' '¶
-
encodeValue
(value)¶ Return raw data string encoded from a
value
.Note: this is an internal method.
-
length
= 10¶
-
typeCode
= 'M'¶
-
-
class
natcap.invest.dbfpy.fields.
DbfNumericFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the numeric field.
-
decodeValue
(value)¶ Return a number decoded from
value
If decimals is zero, value will be decoded as an integer; otherwise as a float.
Returns: Return value is a int (long) or float instance.
-
defaultValue
= 0¶
-
encodeValue
(value)¶ Return string containing encoded
value
.
-
typeCode
= 'N'¶
-
-
class
natcap.invest.dbfpy.fields.
DbfCurrencyFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.dbfpy.fields.DbfFieldDef
Definition of the currency field.
-
decodeValue
(value)¶ Return float number decoded from
value
.
-
defaultValue
= 0.0¶
-
encodeValue
(value)¶ Return string containing encoded
value
.
-
length
= 8¶
-
typeCode
= 'Y'¶
-
-
class natcap.invest.dbfpy.fields.DbfIntegerFieldDef(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)
    Bases: natcap.invest.dbfpy.fields.DbfFieldDef

    Definition of the integer field.

    decodeValue(value)
        Return an integer number decoded from value.

    defaultValue = 0

    encodeValue(value)
        Return a string containing the encoded value.

    length = 4

    typeCode = 'I'
class natcap.invest.dbfpy.fields.DbfDateTimeFieldDef(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)
    Bases: natcap.invest.dbfpy.fields.DbfFieldDef

    Definition of the timestamp field.

    JDN_GDN_DIFF = 1721425

    decodeValue(value)
        Return a datetime.datetime instance.

    defaultValue = datetime.datetime(2016, 6, 10, 0, 19, 7, 966409)

    encodeValue(value)
        Return a string-encoded value.

    length = 8

    typeCode = 'T'
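The constant `JDN_GDN_DIFF = 1721425` is the offset between a Julian Day Number and Python's proleptic-Gregorian `date.toordinal()` value; xbase 'T' fields store a timestamp as a day number plus milliseconds since midnight. A sketch of that conversion (helper names are hypothetical, not the field class's methods):

```python
import datetime

JDN_GDN_DIFF = 1721425  # Julian Day Number minus date.toordinal()

def to_timestamp_parts(dt):
    # Pack a datetime as (Julian Day Number, milliseconds since midnight).
    jdn = dt.toordinal() + JDN_GDN_DIFF
    msecs = ((dt.hour * 60 + dt.minute) * 60 + dt.second) * 1000
    return jdn, msecs

def from_timestamp_parts(jdn, msecs):
    # Inverse: recover the datetime from the packed pair.
    day = datetime.datetime.fromordinal(jdn - JDN_GDN_DIFF)
    return day + datetime.timedelta(milliseconds=msecs)
```

For example, 2000-01-01 has ordinal 730120, so its Julian Day Number comes out as 2451545, the standard value.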
DBF header definition.

TODO:
    - handle encoding of the character fields (encoding information is
      stored in the DBF header)

class natcap.invest.dbfpy.header.DbfHeader(fields=None, headerLength=0, recordLength=0, recordCount=0, signature=3, lastUpdate=None, ignoreErrors=False)
    Bases: object

    Dbf header definition.

    For more information about the dbf header format visit
    http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT

    Examples

    Create an empty dbf header and add some field definitions:

        dbfh = DbfHeader()
        dbfh.addField(("name", "C", 10))
        dbfh.addField(("date", "D"))
        dbfh.addField(DbfNumericFieldDef("price", 5, 2))

    Create a dbf header with field definitions:

        dbfh = DbfHeader([
            ("name", "C", 10),
            ("date", "D"),
            DbfNumericFieldDef("price", 5, 2),
        ])
    __getitem__(item)
        Return a field definition by numeric index or name string.

    addField(*defs)
        Add field definitions to the header.

        Examples

            dbfh.addField(
                ("name", "C", 20),
                dbf.DbfCharacterFieldDef("surname", 20),
                dbf.DbfDateFieldDef("birthdate"),
                ("member", "L"),
            )
            dbfh.addField(("price", "N", 5, 2))
            dbfh.addField(dbf.DbfNumericFieldDef("origprice", 5, 2))

    changed

    day

    fields

    classmethod fromStream(stream)
        Return a header object read from the stream.

    classmethod fromString(string)
        Return a header instance built from the string object.

    headerLength

    ignoreErrors
        Error processing mode for DBF field value conversion. If set, a
        failing field value conversion will return INVALID_VALUE instead
        of raising a conversion error.

    lastUpdate

    month

    recordCount

    recordLength

    setCurrentDate()
        Update the self.lastUpdate field with the current date value.

    signature

    toString()
        Return a 32-character string containing the encoded header.

    write(stream)
        Encode and write the header to the stream.

    year
DBF record definition.

class natcap.invest.dbfpy.record.DbfRecord(dbf, index=None, deleted=False, data=None)
    Bases: object

    DBF record.

    Instances of this class shouldn't be created manually; use
    dbf.Dbf.newRecord instead.

    The class implements the mapping/sequence interface, so fields can be
    accessed via their names or indexes (names are the preferred way to
    access fields).

    Hint:
        Use the store method to save a modified record.

    Examples

    Add a new record to the database:

        db = Dbf(filename)
        rec = db.newRecord()
        rec["FIELD1"] = value1
        rec["FIELD2"] = value2
        rec.store()

    Or do the same, but modify an existing (in this case, the second)
    record:

        db = Dbf(filename)
        rec = db[2]
        rec["FIELD1"] = value1
        rec["FIELD2"] = value2
        rec.store()
    __getitem__(key)
        Return a value by field name or field index.

    __setitem__(key, value)
        Set a field value by integer index or string name.

    asDict()
        Return a dictionary of fields.
        Note: changing the dict's values won't change the real values
        stored in this object.

    asList()
        Return a flat list of fields.
        Note: changing the list's values won't change the real values
        stored in this object.

    dbf

    delete()
        Mark the record as deleted.

    deleted

    fieldData

    classmethod fromStream(dbf, index)
        Return a record read from the stream.

        Parameters:
            - dbf – the Dbf.Dbf instance the new record should belong to.
            - index – index of the record in the records' container. This
              argument can't be None in this call.

        The return value is an instance of the current class.

    classmethod fromString(dbf, string, index=None)
        Return a record read from the string object.

        Parameters:
            - dbf – the Dbf.Dbf instance the new record should belong to.
            - string – the string the new record should be created from.
            - index – index of the record in the container. If this
              argument is None, the record will be appended.

        The return value is an instance of the current class.

    index

    position

    classmethod rawFromStream(dbf, index)
        Return raw record contents read from the stream.

        Parameters:
            - dbf – the Dbf.Dbf instance containing the record.
            - index – index of the record in the records' container. This
              argument can't be None in this call.

        The return value is a string containing record data in DBF format.

    store()
        Store the current record in the DBF.
        If self.index is None, this record will be appended to the records
        of the DBF it belongs to; otherwise the existing record is
        replaced.

    toString()
        Return a string of packed record values.
String utilities.

TODO:
    - allow strings in the getDateTime routine;

class natcap.invest.dbfpy.utils.classproperty
    Bases: property

    Works in the same way as a property, but for classes.

natcap.invest.dbfpy.utils.getDate(date=None)
    Return a datetime.date instance.

    The type of the date argument can be one of the following:
        - None: use the current date value;
        - datetime.date: this value will be returned;
        - datetime.datetime: the result of date.date() will be returned;
        - string: assuming "%Y%m%d" or "%y%m%d" format;
        - number: assuming it's a timestamp (as returned, for example, by
          the time.time() call);
        - sequence: assuming a (year, month, day, ...) sequence.

    Additionally, if date has a callable ticks attribute, it will be
    called and the result will be treated as a timestamp value.
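The dispatch rules above can be sketched as a hypothetical reimplementation (not the `natcap.invest.dbfpy.utils` source; note that the datetime check must precede the date check, since `datetime.datetime` subclasses `datetime.date`):

```python
import datetime

def get_date(date=None):
    # Sketch of the getDate dispatch described above.
    if date is None:
        return datetime.date.today()
    if hasattr(date, "ticks") and callable(date.ticks):
        # A "ticks" attribute wins: treat its result as a timestamp.
        return datetime.date.fromtimestamp(date.ticks())
    if isinstance(date, datetime.datetime):
        return date.date()
    if isinstance(date, datetime.date):
        return date
    if isinstance(date, str):
        return datetime.datetime.strptime(date, "%Y%m%d").date()
    if isinstance(date, (int, float)):
        return datetime.date.fromtimestamp(date)
    # Fall through: assume a (year, month, day, ...) sequence.
    year, month, day = date[0], date[1], date[2]
    return datetime.date(year, month, day)
```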
natcap.invest.dbfpy.utils.getDateTime(value=None)
    Return a datetime.datetime instance.

    The type of the value argument can be one of the following:
        - None: use the current date value;
        - datetime.date: the result will be converted to a
          datetime.datetime instance using midnight;
        - datetime.datetime: value will be returned as is;
        - string: *CURRENTLY NOT SUPPORTED*;
        - number: assuming it's a timestamp (as returned, for example, by
          the time.time() call);
        - sequence: assuming a (year, month, day, ...) sequence.

    Additionally, if value has a callable ticks attribute, it will be
    called and the result will be treated as a timestamp value.

natcap.invest.dbfpy.utils.unzfill(str)
    Return a string without ASCII NULs.

    This function searches for the first NUL (ASCII 0) occurrence and
    truncates the string at that position.
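The truncate-at-first-NUL behaviour is worth seeing concretely, since DBF fixed-width slots are often NUL-padded. A minimal sketch (hypothetical reimplementation, not the library source):

```python
def unzfill(s):
    # Truncate at the first NUL; strings without a NUL come back unchanged.
    idx = s.find("\0")
    return s if idx < 0 else s[:idx]
```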
InVEST finfish aquaculture file handler for biophysical and valuation data.

natcap.invest.finfish_aquaculture.finfish_aquaculture.execute(args)
    Finfish Aquaculture.

    This function takes care of preparing files passed into the finfish
    aquaculture model. It handles all files/inputs associated with
    biophysical and valuation calculations and manipulations, and creates
    objects to be passed to the aquaculture_core.py module. It may write
    log, warning, or error messages to stdout.
Parameters: - workspace_dir (string) – The directory in which to place all result files.
- ff_farm_loc (string) – URI that points to a shape file of fishery locations
- farm_ID (string) – column heading used to describe individual farms. Used to link GIS location data to later inputs.
- g_param_a (float) – Growth parameter alpha, used in modeling fish growth, should be an int or float.
- g_param_b (float) – Growth parameter beta, used in modeling fish growth, should be an int or float.
- g_param_tau (float) – Growth parameter tau, used in modeling fish growth, should be an int or float
- use_uncertainty (boolean) –
- g_param_a_sd (float) – (description)
- g_param_b_sd (float) – (description)
- num_monte_carlo_runs (int) –
- water_temp_tbl (string) – URI to a CSV table where daily water temperature values are stored from one year
- farm_op_tbl (string) – URI to CSV table of static variables for calculations
- outplant_buffer (int) – This value will allow the outplanting start day to be flexible plus or minus the number of days specified here.
- do_valuation (boolean) – Boolean that indicates whether or not valuation should be performed on the aquaculture model
- p_per_kg (float) – Market price per kilogram of processed fish
- frac_p (float) – Fraction of market price that accounts for costs rather than profit
- discount (float) – Daily market discount rate
    Example Args Dictionary:

        {
            'workspace_dir': 'path/to/workspace_dir',
            'ff_farm_loc': 'path/to/shapefile',
            'farm_ID': 'FarmID',
            'g_param_a': 0.038,
            'g_param_b': 0.6667,
            'g_param_tau': 0.08,
            'use_uncertainty': True,
            'g_param_a_sd': 0.005,
            'g_param_b_sd': 0.05,
            'num_monte_carlo_runs': 1000,
            'water_temp_tbl': 'path/to/water_temp_tbl',
            'farm_op_tbl': 'path/to/farm_op_tbl',
            'outplant_buffer': 3,
            'do_valuation': True,
            'p_per_kg': 2.25,
            'frac_p': 0.3,
            'discount': 0.000192,
        }
natcap.invest.finfish_aquaculture.finfish_aquaculture.format_ops_table(op_path, farm_ID, ff_aqua_args)
    Takes the path to the operating parameters table, along with the
    keyword to look for to identify the farm number that goes with the
    parameters, and outputs a 2D dictionary that contains all parameters
    by farm and description. The outer key is the farm number, and the
    inner key is a string description of the parameter.

    Input:
        op_path: URI to a CSV table of static variables for calculations.
        farm_ID: the string to look for in order to identify the column in
            which the farm numbers are stored. That column's data will
            become the keys for the dictionary output.
        ff_aqua_args: dictionary of arguments being created in order to be
            passed to the aquaculture core function.

    Output:
        ff_aqua_args['farm_op_dict']: a dictionary built up to store the
            static parameters for the aquaculture model run. This is a 2D
            dictionary, where the outer key is the farm ID number and the
            inner keys are strings of parameter names.

    Returns nothing.
natcap.invest.finfish_aquaculture.finfish_aquaculture.format_temp_table(temp_path, ff_aqua_args)
    This function does much the same thing as format_ops_table: it takes
    information from a temperature table and formats it into a 2D
    dictionary as an output.

    Input:
        temp_path: URI to a CSV file containing temperature data for 365
            days for the farms on which we will look at growth cycles.
        ff_aqua_args: dictionary of arguments that we are building up in
            order to pass it to the aquaculture core module.

    Output:
        ff_aqua_args['water_temp_dict']: a 2D dictionary containing
            temperature data for 365 days. The outer keys are days of the
            year from 0 to 364 (we need to be able to check the day modulo
            365), which we manually shift down by 1, and the inner keys
            are farm ID numbers.

    Returns nothing.
Implementation of the aquaculture calculations and subsequent outputs.
This will pull from data passed in by finfish_aquaculture.

natcap.invest.finfish_aquaculture.finfish_aquaculture_core.calc_farm_cycles(outplant_buffer, a, b, tau, water_temp_dict, farm_op_dict, dur)
    Input:
        outplant_buffer: the number of days surrounding the outplant day
            during which the fish growth cycle can still be started.
        a: growth parameter alpha. Float used as a scaler in the fish
            growth equation.
        b: growth parameter beta. Float used as an exponential multiplier
            in the fish growth equation.
        tau: growth parameter tau, used in modeling fish growth.
        water_temp_dict: 2D dictionary which contains temperature values
            for farms. The outer keys are calendar days as strings, and
            the inner are farm numbers as strings.
        farm_op_dict: 2D dictionary which contains individual operating
            parameters for each farm. The outer key is the farm number as
            a string, and the inner is a string descriptor of each
            parameter.
        dur: float describing the length of the growth simulation, in
            years.

    Returns cycle_history, where:
        cycle_history: dictionary which maps farms to a history of growth
            for each cycle completed on that farm. These entries are
            formatted as follows:
            Farm -> list of tuples (day of outplanting, day of harvest,
            fish weight (grams)).
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.calc_hrv_weight(farm_op_dict, frac, mort, cycle_history)
    Input:
        farm_op_dict: 2D dictionary which contains individual operating
            parameters for each farm. The outer key is the farm number as
            a string, and the inner is a string descriptor of each
            parameter.
        frac: a float representing the fraction of the fish that remains
            after processing.
        mort: a float referring to the daily mortality rate of fishes on
            an aquaculture farm.
        cycle_history: Farm -> list of tuples (day of outplanting, day of
            harvest, fish weight (grams)).

    Returns a tuple (curr_cycle_totals, indiv_tpw_totals), where:
        curr_cycle_totals: dictionary which holds a mapping from every
            farm (as identified by farm_ID) to the total processed weight
            of that farm.
        indiv_tpw_totals: dictionary which holds a farm -> list mapping,
            where the list holds the individual total processed weight
            (TPW) for all cycles that the farm completed.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.compute_uncertainty_data(args, output_dir)
    Does uncertainty analysis via a Monte Carlo simulation.

    Returns a tuple with two 2D dicts:
        - a dict containing relative file paths to produced histograms;
        - a dict containing statistical results (mean and standard
          deviation).
    Each dict has farm IDs as outer keys and result types (e.g. 'value',
    'weight', and 'cycles') as inner keys.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.create_HTML_table(output_dir, args, cycle_history, sum_hrv_weight, hrv_weight, farms_npv, value_history, histogram_paths, uncertainty_stats)
    Inputs:
        output_dir: the directory in which we will be creating our .html
            file output.
        cycle_history: dictionary mapping farm ID -> list of tuples, each
            of which contains 3 things: (day of outplanting, day of
            harvest, harvest weight of a single fish in grams).
        sum_hrv_weight: dictionary which holds a mapping from farm ID ->
            total processed weight of each farm.
        hrv_weight: dictionary which holds a farm -> list mapping, where
            the list holds the individual TPW for all cycles that the
            farm completed.
        do_valuation: boolean variable that says whether or not valuation
            is desired.
        farms_npv: dictionary with a farm -> float mapping, where each
            float is the net processed value of the fish processed on
            that farm, in thousands of dollars.
        value_history: dictionary which holds a farm -> list mapping,
            where the list holds tuples containing (Net Revenue, Net
            Present Value) for each cycle completed by that farm.

    Output:
        HTML file: contains 3 tables that summarize inputs and outputs
        for the duration of the model run.
            - Input Table: farm operations provided data, including farm
              ID #, cycle number, weight of fish at start, weight of fish
              at harvest, number of fish in farm, start day for growing,
              and length of fallowing period.
            - Output Table 1: farm harvesting data, including a summary
              table for each harvest cycle of each farm. Shows farm ID,
              cycle number, days since outplanting date, harvested
              weight, net revenue, outplant day, and year.
            - Output Table 2: model outputs for each farm, including farm
              ID, net present value, number of completed harvest cycles,
              and total volume harvested.

    Returns nothing.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.do_monte_carlo_simulation(args)
    Performs a Monte Carlo simulation and returns the results.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.execute(args)
    Runs the biophysical and valuation parts of the finfish aquaculture
    model. This will output:
        1. a shapefile showing farm locations with the addition of the
           number of harvest cycles, the total processed weight at that
           farm, and, if valuation is true, the total discounted net
           revenue at each farm location;
        2. three HTML tables summarizing all model I/O: a summary of
           user-provided data, a summary of each harvest cycle, and a
           summary of the outputs per farm;
        3. a .txt file named according to the date and time the model is
           run, which lists the values used during that run.

    Data in args should include the following:

    --Biophysical Arguments--
    args: a python dictionary containing the following data:
    args['workspace_dir']: the directory in which to place all result
        files.
    args['ff_farm_file']: an open shapefile containing the locations of
        individual fisheries.
    args['farm_ID']: column heading used to describe individual farms.
        Used to link GIS location data to later inputs.
    args['g_param_a']: growth parameter alpha, used in modeling fish
        growth; should be an int or a float.
    args['g_param_b']: growth parameter beta, used in modeling fish
        growth; should be an int or a float.
    args['water_temp_dict']: a dictionary which links a specific date to
        the farm numbers and their temperature values on that day. (Note:
        here the outer keys are calendar days out of 365, starting with
        January 1 (day 0), and the inner 1, 2, and 3 are farm numbers.)

        Format: {'0': {'1': '8.447', '2': '8.447', '3': '8.947', ...},
                 '1': {'1': '8.406', '2': '8.406', '3': '8.906', ...},
                 . . . }

    args['farm_op_dict']: dictionary which links a specific farm ID # to
        another dictionary containing operating parameters mapped to
        their values for that particular farm. (Note: here the outer keys
        1 and 2 are farm IDs, not dates out of 365.)

        Format: {'1': {'Wt of Fish': '0.06', 'Tar Weight': '5.4', ...},
                 '2': {'Wt of Fish': '0.06', 'Tar Weight': '5.4', ...},
                 . . . }

    args['frac_post_process']: the fraction of edible fish left after
        processing is done to remove undesirable parts.
    args['mort_rate_daily']: mortality rate among fish in a year, divided
        by 365.
    args['duration']: duration of the simulation, in years.
    args['outplant_buffer']: this value allows the outplant start day to
        be flexible plus or minus the number of days specified here.

    --Valuation Arguments--
    args['do_valuation']: boolean indicating whether or not to run the
        valuation process.
    args['p_per_kg']: market price per kilogram of processed fish.
    args['frac_p']: fraction of market price that accounts for costs
        rather than profit.
    args['discount']: daily market discount rate.

    Returns nothing.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.make_histograms(farm, results, output_dir, total_num_runs)
    Makes a histogram for the given farm and data.

    Returns a dict mapping type (e.g. 'value', 'weight') to the relative
    file path for the respective histogram.
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.valuation(price_per_kg, frac_mrkt_price, discount, hrv_weight, cycle_history)
    This performs the valuation calculations and returns a tuple
    containing a dictionary with a farm -> float mapping, where each
    float is the net processed value of the fish processed on that farm
    in thousands of dollars, and a dictionary containing a farm -> list
    mapping, where each entry in the list is a tuple of (Net Revenue, Net
    Present Value) for every cycle on that farm.

    Inputs:
        price_per_kg: float representing the price per kilogram of
            finfish for valuation purposes.
        frac_mrkt_price: float that represents the fraction of market
            price that is attributable to costs.
        discount: float that is the daily market discount rate.
        cycle_history: Farm -> list of tuples (day of outplanting, day of
            harvest, fish weight (grams)).
        hrv_weight: Farm -> list of TPW for each cycle (kilograms).

    Returns a tuple (val_history, valuations):
        val_history: dictionary which holds a farm -> list mapping, where
            the list holds tuples containing (Net Revenue, Net Present
            Value) for each cycle completed by that farm.
        valuations: dictionary with a farm -> float mapping, where each
            float is the net processed value of the fish processed on
            that farm.
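The exact discounting code is not reproduced here, but the inputs above suggest the shape of a per-cycle valuation. The following is a hedged sketch under stated assumptions (net revenue is TPW times the profit share of the market price, discounted daily back over the days until harvest; the helper name and formula are illustrative, not the verbatim core implementation):

```python
def cycle_valuation(price_per_kg, frac_mrkt_price, discount, tpw_kg, harvest_day):
    # Net revenue: processed weight times the share of market price that
    # is profit (frac_mrkt_price is the cost share, so profit = 1 - frac).
    net_revenue = tpw_kg * price_per_kg * (1.0 - frac_mrkt_price)
    # NPV: discount that revenue daily back to day zero.
    npv = net_revenue / (1.0 + discount) ** harvest_day
    return net_revenue, npv
```

With the example args from earlier (price 2.25/kg, cost fraction 0.3, daily discount 0.000192), a 1000 kg harvest on day 365 yields a net revenue of 1575, discounted to a somewhat smaller NPV.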
The Fisheries module contains the high-level code for executing the
fisheries model.

natcap.invest.fisheries.fisheries.execute(args, create_outputs=True)
    Fisheries.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['aoi_uri'] (str) – location of a shapefile which will be used as subregions for calculation. Each region must contain a ‘Name’ attribute (case-sensitive) matching the given name in the population parameters csv file.
- args['timesteps'] (int) – represents the number of time steps that the user desires the model to run.
- args['population_type'] (str) – specifies whether the model is age-specific or stage-specific. Options will be either “Age Specific” or “Stage Specific” and will change which equation is used in modeling growth.
- args['sexsp'] (str) – specifies whether or not the age and stage classes are distinguished by sex.
- args['harvest_units'] (str) – specifies how the user wants to get the harvest data. Options are either “Individuals” or “Weight”, and will change the harvest equation used in core. (Required if args[‘val_cont’] is True)
- args['do_batch'] (bool) – specifies whether program will perform a single model run or a batch (set) of model runs.
- args['population_csv_uri'] (str) – location of the population parameters csv. This will contain all age and stage specific parameters. (Required if args[‘do_batch’] is False)
- args['population_csv_dir'] (str) – location of the directory that contains the Population Parameters CSV files for batch processing (Required if args[‘do_batch’] is True)
- args['spawn_units'] (str) – (description)
- args['total_init_recruits'] (float) – represents the initial number of recruits that will be used in calculation of population on a per area basis.
- args['recruitment_type'] (str) – Name corresponding to one of the built-in recruitment functions {‘Beverton-Holt’, ‘Ricker’, ‘Fecundity’, ‘Fixed’}, or ‘Other’, meaning that the user is passing in their own recruitment function as an anonymous python function via the optional dictionary argument ‘recruitment_func’.
- args['recruitment_func'] (function) – Required if args[‘recruitment_type’] is set to ‘Other’. See below for instructions on how to create a user-defined recruitment function.
- args['alpha'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['beta'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['total_recur_recruits'] (float) – must exist within args for Fixed Recruitment. Parameter that will be used in calculation of recruitment.
- args['migr_cont'] (bool) – if True, model uses migration
- args['migration_dir'] (str) – if this parameter exists, it means migration is desired. This is the location of the parameters folder containing files for migration. There should be one file for every age class which migrates. (Required if args[‘migr_cont’] is True)
- args['val_cont'] (bool) – if True, model computes valuation
- args['frac_post_process'] (float) – represents the fraction of the species remaining after processing of the whole carcass is complete. This will exist only if valuation is desired for the particular species. (Required if args[‘val_cont’] is True)
- args['unit_price'] (float) – represents the price for a single unit of harvest. Exists only if valuation is desired. (Required if args[‘val_cont’] is True)
Example Args:

    args = {
        'workspace_dir': 'path/to/workspace_dir/',
        'results_suffix': 'scenario_name',
        'aoi_uri': 'path/to/aoi_uri',
        'total_timesteps': 100,
        'population_type': 'Stage-Based',
        'sexsp': 'Yes',
        'harvest_units': 'Individuals',
        'do_batch': False,
        'population_csv_uri': 'path/to/csv_uri',
        'population_csv_dir': '',
        'spawn_units': 'Weight',
        'total_init_recruits': 100000.0,
        'recruitment_type': 'Ricker',
        'alpha': 32.4,
        'beta': 54.2,
        'total_recur_recruits': 92.1,
        'migr_cont': True,
        'migration_dir': 'path/to/mig_dir/',
        'val_cont': True,
        'frac_post_process': 0.5,
        'unit_price': 5.0,
    }
Creating a User-Defined Recruitment Function
An optional argument has been created in the Fisheries Model to allow users proficient in Python to pass their own recruitment function into the program via the args dictionary.
Using the Beverton-Holt recruitment function as an example, here’s how a user might create and pass in their own recruitment function:
    import natcap.invest
    import numpy as np

    # define input data
    Matu = np.array([...])      # the Maturity vector in the Population Parameters File
    Weight = np.array([...])    # the Weight vector in the Population Parameters File
    LarvDisp = np.array([...])  # the LarvalDispersal vector in the Population Parameters File
    alpha = 2.0  # scalar value
    beta = 10.0  # scalar value
    sexsp = 2    # 1 = not sex-specific, 2 = sex-specific

    # create recruitment function
    def spawners(N_prev):
        return (N_prev * Matu * Weight).sum()

    def rec_func_BH(N_prev):
        N_0 = (LarvDisp * ((alpha * spawners(N_prev) /
               (beta + spawners(N_prev)))) / sexsp)
        return (N_0, spawners(N_prev))

    # fill out args dictionary
    args = {}
    # ... define other arguments ...
    args['recruitment_type'] = 'Other'      # lets the program know to use the user-defined function
    args['recruitment_func'] = rec_func_BH  # pass recruitment function as an 'anonymous' python function

    # run model
    natcap.invest.fisheries.fisheries.execute(args)
Conditions that a new recruitment function must meet to run properly:
- The function must accept as an argument a single numpy three-dimensional array (N_prev) representing the state of the population at the previous time step. N_prev has three dimensions: the indices of the first dimension correspond to the region (which must be in the same order as provided in the Population Parameters File), the indices of the second dimension represent sex if the model is sex-specific (i.e. two indices representing female, then male, if the model is ‘sex-specific’; otherwise just a single zero index representing the female and male populations aggregated together), and the indices of the third dimension represent age/stage in ascending order.
- The function must return a tuple of two values. The first value (N_0) is a single numpy one-dimensional array representing the youngest age of the population for the next time step. The indices of the array correspond to the regions of the population (output in the same order as provided). If the model is sex-specific, it is currently assumed that males and females are produced in equal number and that the returned array has already been divided by 2 in the recruitment function. The second value (spawners) is the number or weight of the spawners created by the population at the previous time step, provided as a non-negative scalar float value.
Example of How Recruitment Function Operates within Fisheries Model:
    # input data
    N_prev_xsa = [[[region0-female-age0, region0-female-age1],
                   [region0-male-age0, region0-male-age1]],
                  [[region1-female-age0, region1-female-age1],
                   [region1-male-age0, region1-male-age1]]]

    # execute function
    N_0_x, spawners = rec_func(N_prev_xsa)

    # output data - where N_0 contains information about the youngest
    # age/stage of the population for the next time step:
    N_0_x = [region0-age0, region1-age0]  # if sex-specific, rec_func should divide by two before returning
    type(spawners) is float
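The Beverton-Holt example above can be exercised end-to-end with concrete numbers. The vector values below (maturity, weight, larval dispersal) are made up purely for illustration; the functions mirror the documented example with a single aggregated sex (sexsp = 1):

```python
import numpy as np

# Illustrative stand-ins for the Population Parameters vectors
Matu = np.array([0.0, 0.5, 1.0])      # maturity per age class
Weight = np.array([0.1, 0.5, 2.0])    # weight per age class
LarvDisp = np.array([0.6, 0.4])       # larval dispersal, one entry per region
alpha, beta, sexsp = 2.0, 10.0, 1

def spawners(N_prev):
    # Spawning biomass: mature fraction times weight, summed over all axes.
    return (N_prev * Matu * Weight).sum()

def rec_func_BH(N_prev):
    # Beverton-Holt recruitment, split across regions by larval dispersal.
    S = spawners(N_prev)
    N_0 = LarvDisp * (alpha * S / (beta + S)) / sexsp
    return N_0, S
```

With 100 individuals in every region/age cell of a (2 regions, 1 sex, 3 ages) array, the spawner biomass is 2 x 100 x (0.5x0.5 + 1.0x2.0) = 450, and N_0 has one entry per region, as the contract requires.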
The Fisheries Habitat Scenario Tool module contains the high-level code
for generating a new Population Parameters CSV File based on habitat
area change and the dependencies that particular classes of the given
species have on particular habitats.

natcap.invest.fisheries.fisheries_hst.execute(args)
    Fisheries: Habitat Scenario Tool.

    The Fisheries Habitat Scenario Tool generates a new Population
    Parameters CSV File with modified survival attributes across classes
    and regions based on habitat area changes and class-level
    dependencies on those habitats.

    Parameters: - args['workspace_dir'] (str) – location into which the resultant modified Population Parameters CSV file should be placed.
        - args['sexsp'] (str) – specifies whether or not the age and stage classes are distinguished by sex. Options: ‘Yes’ or ‘No’.
        - args['population_csv_uri'] (str) – location of the population parameters csv file. This file contains all age and stage specific parameters.
        - args['habitat_chg_csv_uri'] (str) – location of the habitat change parameters csv file. This file contains habitat area change information.
        - args['habitat_dep_csv_uri'] (str) – location of the habitat dependency parameters csv file. This file contains habitat-class dependency information.
        - args['gamma'] (float) – describes the relationship between a change in habitat area and a change in survival of life stages dependent on that habitat.

    Returns:
        None

    Example Args:

        args = {
            'workspace_dir': 'path/to/workspace_dir/',
            'sexsp': 'Yes',
            'population_csv_uri': 'path/to/csv',
            'habitat_chg_csv_uri': 'path/to/csv',
            'habitat_dep_csv_uri': 'path/to/csv',
            'gamma': 0.5,
        }

    Note:
        The modified Population Parameters CSV File is saved to
        ‘workspace_dir/output/’.
    The body of execute proceeds in three steps:

        # Parse, Verify Inputs
        vars_dict = io.fetch_args(args)

        # Convert Data
        vars_dict = convert_survival_matrix(vars_dict)

        # Generate Modified Population Parameters CSV File
        io.save_population_csv(vars_dict)

natcap.invest.fisheries.fisheries_hst.convert_survival_matrix(vars_dict)
    Creates a new survival matrix based on the information provided by
    the user related to habitat area changes and class-level dependencies
    on those habitats.

    Args:
        vars_dict (dictionary): see fisheries_preprocessor_io.fetch_args
            for an example.

    Returns:
        vars_dict (dictionary): modified vars_dict with a new survival
            matrix accessible using the key 'Surv_nat_xsa_mod', with
            element values that exist within [0, 1].

    Example Returns:

        ret = {
            # Other Variables...
            'Surv_nat_xsa_mod': np.ndarray([...])
        }
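The precise formula convert_survival_matrix applies is not reproduced in this documentation; what the description guarantees is that gamma scales habitat-area change into a survival change and that the modified rates land in [0, 1]. The following is only a plausible sketch under those stated constraints (the function name and the linear-scaling formula are assumptions, not the InVEST implementation):

```python
import numpy as np

def adjust_survival(surv_nat_xsa, frac_hab_change, gamma):
    # Hedged sketch: scale natural survival by gamma times the fractional
    # habitat-area change, then clip so the modified rates stay inside
    # [0, 1] as the convert_survival_matrix docs require.
    modified = surv_nat_xsa * (1.0 + gamma * frac_hab_change)
    return np.clip(modified, 0.0, 1.0)
```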
The Fisheries Habitat Scenario Tool IO module contains functions for
handling inputs and outputs.

exception natcap.invest.fisheries.fisheries_hst_io.MissingParameter(msg)
    Bases: exceptions.StandardError

    An exception class that may be raised when a necessary parameter is
    not provided by the user.

natcap.invest.fisheries.fisheries_hst_io.fetch_args(args)
    Fetches input arguments from the user, verifies them for correctness
    and completeness, and returns a dictionary of variables.

    Parameters: args (dictionary) – arguments from the user (same as the
        Fisheries Preprocessor entry point)
    Returns: vars_dict – dictionary containing necessary variables
    Return type: dictionary
    Raises: ValueError – parameter mismatch between Population and
        Habitat CSV files

    Example Returns:

        vars_dict = {
            'workspace_dir': 'path/to/workspace_dir/',
            'output_dir': 'path/to/output_dir/',
            'sexsp': 2,
            'gamma': 0.5,

            # Pop Vars
            'population_csv_uri': 'path/to/csv_uri',
            'Surv_nat_xsa': np.array(
                [[[...], [...]], [[...], [...]], ...]),
            'Classes': np.array([...]),
            'Class_vectors': {
                'Vulnfishing': np.array([...], [...]),
                'Maturity': np.array([...], [...]),
                'Duration': np.array([...], [...]),
                'Weight': np.array([...], [...]),
                'Fecundity': np.array([...], [...]),
            },
            'Regions': np.array([...]),
            'Region_vectors': {
                'Exploitationfraction': np.array([...]),
                'Larvaldispersal': np.array([...]),
            },

            # Habitat Vars
            'habitat_chg_csv_uri': 'path/to/csv',
            'habitat_dep_csv_uri': 'path/to/csv',
            'Habitats': ['habitat1', 'habitat2', ...],
            'Hab_classes': ['class1', 'class2', ...],
            'Hab_regions': ['region1', 'region2', ...],
            'Hab_chg_hx': np.array(
                [[[...], [...]], [[...], [...]], ...]),
            'Hab_dep_ha': np.array(
                [[[...], [...]], [[...], [...]], ...]),
            'Hab_class_mvmt_a': np.array([...]),
            'Hab_dep_num_a': np.array([...]),
        }
natcap.invest.fisheries.fisheries_hst_io.read_habitat_chg_csv(args)
    Parses and verifies a Habitat Change Parameters CSV file and returns
    a dictionary of information related to the interaction between a
    species and the given habitats.

    Parses the Habitat Change Parameters CSV file for the following
    vectors:
        - Names of Habitats and Regions
        - Habitat Area Change

    Parameters: args (dictionary) – arguments from the user (same as the
        Fisheries HST entry point)
    Returns: habitat_chg_dict – dictionary containing necessary variables
    Return type: dictionary
    Raises:
        - MissingParameter – required parameter not included
        - ValueError – values are out of bounds or of the wrong type
        - IndexError – likely a file formatting issue

    Example Returns:

        habitat_chg_dict = {
            'Habitats': ['habitat1', 'habitat2', ...],
            'Hab_regions': ['region1', 'region2', ...],
            'Hab_chg_hx': np.array(
                [[[...], [...]], [[...], [...]], ...]),
        }
-
natcap.invest.fisheries.fisheries_hst_io.
read_habitat_dep_csv
(args)¶ Parses and verifies a Habitat Dependency Parameters CSV file and returns a dictionary of information related to the interaction between a species and the given habitats.
Parses the Habitat Parameters CSV file for the following vectors:
- Names of Habitats and Classes
- Habitat-Class Dependency
The following vectors are derived from the information given in the file:
- Classes where movement between habitats occurs
- Number of habitats that a particular class depends upon
Parameters: args (dictionary) – arguments from the user (same as Fisheries HST entry point)
Returns: habitat_dep_dict – dictionary containing necessary
variables
Return type: dictionary
Raises: - MissingParameter - required parameter not included
- ValueError - values are out of bounds or of wrong type
- IndexError - likely a file formatting issue
Example Returns:
habitat_dep_dict = { 'Habitats': ['habitat1', 'habitat2', ...], 'Hab_classes': ['class1', 'class2', ...], 'Hab_dep_ha': np.array( [[[...], [...]], [[...], [...]], ...]), 'Hab_class_mvmt_a': np.array([...]), 'Hab_dep_num_a': np.array([...]), }
-
natcap.invest.fisheries.fisheries_hst_io.
read_population_csv
(args)¶ Parses and verifies a single Population Parameters CSV file
Parses and verifies inputs from the Population Parameters CSV file. If any necessary vector is missing, the function raises a MissingParameter exception. The survival matrix is arranged with class elements along the first dimension, sex along the second, and region along the third. Class vectors are arranged by class elements, with a second sex dimension when the model is sex-specific. Region vectors are arranged by region elements and are sex-agnostic.
Parameters: args (dictionary) – arguments provided by user
Returns: pop_dict – dictionary containing verified population
arguments
Return type: dictionary
Raises: - MissingParameter – required parameter not included
- ValueError – values are out of bounds or of wrong type
Example Returns:
pop_dict = { 'population_csv_uri': 'path/to/csv', 'Surv_nat_xsa': np.array( [[...], [...]], [[...], [...]], ...), # Class Vectors 'Classes': np.array([...]), 'Class_vector_names': [...], 'Class_vectors': { 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), }, # Region Vectors 'Regions': np.array([...]), 'Region_vector_names': [...], 'Region_vectors': { 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, }
-
natcap.invest.fisheries.fisheries_hst_io.
save_population_csv
(vars_dict)¶ Creates a new Population Parameters CSV file based on the provided inputs.
Parameters: vars_dict (dictionary) – variables generated by preprocessor arguments and run.
Example Args:
args = { 'workspace_dir': 'path/to/workspace_dir/', 'output_dir': 'path/to/output_dir/', 'sexsp': 2, 'population_csv_uri': 'path/to/csv', # original csv file 'Surv_nat_xsa': np.ndarray([...]), 'Surv_nat_xsa_mod': np.ndarray([...]), # Class Vectors 'Classes': np.array([...]), 'Class_vector_names': [...], 'Class_vectors': { 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), }, # Region Vectors 'Regions': np.array([...]), 'Region_vector_names': [...], 'Region_vectors': { 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, # other arguments are ignored ... }
Note
- Creates a modified Population Parameters CSV file located in the ‘workspace/output/’ folder
- Currently appends ‘_modified’ to original filename for new filename
The Fisheries IO module contains functions for handling inputs and outputs
-
exception
natcap.invest.fisheries.fisheries_io.
MissingParameter
(msg)¶ Bases:
exceptions.StandardError
An exception class that may be raised when a necessary parameter is not provided by the user.
-
natcap.invest.fisheries.fisheries_io.
create_outputs
(vars_dict)¶ Creates outputs from variables generated in the run_population_model() function in the fisheries_model module
Creates the following:
- Results CSV File
- Results HTML Page
- Results Shapefile (if provided)
- Intermediate CSV File
Parameters: vars_dict (dictionary) – contains variables generated by model run
-
natcap.invest.fisheries.fisheries_io.
fetch_args
(args, create_outputs=True)¶ Fetches input arguments from the user, verifies for correctness and completeness, and returns a list of variables dictionaries
Parameters: args (dictionary) – arguments from the user
Returns: model_list – set of variable dictionaries for each model
Return type: list
Example Returns:
model_list = [ { 'workspace_dir': 'path/to/workspace_dir', 'results_suffix': 'scenario_name', 'output_dir': 'path/to/output_dir', 'aoi_uri': 'path/to/aoi_uri', 'total_timesteps': 100, 'population_type': 'Stage-Based', 'sexsp': 2, 'harvest_units': 'Individuals', 'do_batch': False, 'spawn_units': 'Weight', 'total_init_recruits': 100.0, 'recruitment_type': 'Ricker', 'alpha': 32.4, 'beta': 54.2, 'total_recur_recruits': 92.1, 'migr_cont': True, 'val_cont': True, 'frac_post_process': 0.5, 'unit_price': 5.0, # Pop Params 'population_csv_uri': 'path/to/csv_uri', 'Survnaturalfrac': np.array( [[[...], [...]], [[...], [...]], ...]), 'Classes': np.array([...]), 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), 'Regions': np.array([...]), 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), # Mig Params 'migration_dir': 'path/to/mig_dir', 'Migration': [np.matrix, np.matrix, ...] }, { ... # additional dictionary doesn't exist when 'do_batch' # is false } ]
Note
This function receives an unmodified ‘args’ dictionary from the user
-
natcap.invest.fisheries.fisheries_io.
read_migration_tables
(args, class_list, region_list)¶ Parses, verifies, and orders the list of migration matrices necessary for the program.
Parameters: - args (dictionary) – same args as model entry point
- class_list (list) – list of class names
- region_list (list) – list of region names
Returns: mig_dict – see example below
Return type: dictionary
Example Returns:
mig_dict = { 'Migration': [np.matrix, np.matrix, ...] }
Note
If migration matrices are not provided for all classes, the function will generate identity matrices for missing classes
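The identity-matrix fallback described in the note can be sketched as follows (an illustrative reimplementation, not the module's actual code; the function name and input shapes are assumptions):

```python
import numpy as np

def fill_missing_migration(migration, class_list, num_regions):
    """Return one migration matrix per class, substituting an identity
    matrix (no movement between regions) for any class without a table."""
    full = []
    for cls in class_list:
        if cls in migration:
            full.append(migration[cls])
        else:
            # no table provided for this class: all individuals stay put
            full.append(np.matrix(np.identity(num_regions)))
    return full

# only 'adult' has a migration table; 'larva' gets an identity matrix
mats = fill_missing_migration(
    {'adult': np.matrix([[0.9, 0.1], [0.2, 0.8]])},
    ['larva', 'adult'], num_regions=2)
```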
-
natcap.invest.fisheries.fisheries_io.
read_population_csv
(args, uri)¶ Parses and verifies a single Population Parameters CSV file
Parses and verifies inputs from the Population Parameters CSV file. If any necessary vector is missing, the function raises a MissingParameter exception. The survival matrix is arranged with class elements along the first dimension, sex along the second, and region along the third. Class vectors are arranged by class elements, with a second sex dimension when the model is sex-specific. Region vectors are arranged by region elements and are sex-agnostic.
Parameters: - args (dictionary) – arguments provided by user
- uri (string) – the particular Population Parameters CSV file to parse and verify
Returns: pop_dict – dictionary containing verified population
arguments
Return type: dictionary
Example Returns:
pop_dict = { 'population_csv_uri': 'path/to/csv', 'Survnaturalfrac': np.array( [[...], [...]], [[...], [...]], ...), # Class Vectors 'Classes': np.array([...]), 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), # Region Vectors 'Regions': np.array([...]), 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }
-
natcap.invest.fisheries.fisheries_io.
read_population_csvs
(args)¶ Parses and verifies the Population Parameters CSV files
Parameters: args (dictionary) – arguments provided by user
Returns: pop_list – list of dictionaries containing verified population arguments
Return type: list
Example Returns:
pop_list = [ { 'Survnaturalfrac': np.array( [[...], [...]], [[...], [...]], ...), # Class Vectors 'Classes': np.array([...]), 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), # Region Vectors 'Regions': np.array([...]), 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, { ... } ]
The Fisheries Model module contains functions for running the model
Variable Suffix Notation: t: time x: area/region a: age/class s: sex
-
natcap.invest.fisheries.fisheries_model.
initialize_vars
(vars_dict)¶ Initializes variables for model run
Parameters: vars_dict (dictionary) – verified arguments and variables
Returns: vars_dict – modified vars_dict with additional variables
Return type: dictionary
Example Returns:
vars_dict = { # (original vars) 'Survtotalfrac': np.array([...]), # a,s,x 'G_survtotalfrac': np.array([...]), # (same) 'P_survtotalfrac': np.array([...]), # (same) 'N_tasx': np.array([...]), # Index Order: t,a,s,x 'H_tx': np.array([...]), # t,x 'V_tx': np.array([...]), # t,x 'Spawners_t': np.array([...]), }
-
natcap.invest.fisheries.fisheries_model.
run_population_model
(vars_dict, init_cond_func, cycle_func, harvest_func)¶ Runs the model
Parameters: - vars_dict (dictionary) –
- init_cond_func (lambda function) – sets initial conditions
- cycle_func (lambda function) – computes numbers for the next time step
- harvest_func (lambda function) – computes harvest and valuation
Returns: vars_dict (dictionary)
Example Returned Dictionary:
{ # (other items) ... 'N_tasx': np.array([...]), # Index Order: time, class, sex, region 'H_tx': np.array([...]), # Index Order: time, region 'V_tx': np.array([...]), # Index Order: time, region 'Spawners_t': np.array([...]), 'equilibrate_timestep': <int>, }
-
natcap.invest.fisheries.fisheries_model.
set_cycle_func
(vars_dict, rec_func)¶ Creates a function to run a single cycle in the model
Parameters: - vars_dict (dictionary) –
- rec_func (lambda function) – recruitment function
Example Output of Returned Cycle Function:
N_asx = np.array([...]) spawners = <int> N_next, spawners = cycle_func(N_prev)
-
natcap.invest.fisheries.fisheries_model.
set_harvest_func
(vars_dict)¶ Creates harvest function that calculates the given harvest and valuation of the fisheries population over each time step for a given region. Returns None if harvest isn’t selected by user.
Example Outputs of Returned Harvest Function:
H_x, V_x = harv_func(N_tasx) H_x = np.array([3.0, 4.5, 2.5, ...]) V_x = np.array([6.0, 9.0, 5.0, ...])
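As a rough sketch of how such a closure could produce harvest and valuation per region, the following computes exploited, weighted numbers and scales them by processing fraction and unit price. The weighting and the exact formula here are illustrative assumptions, not the model's verified equations:

```python
import numpy as np

def make_harvest_func(weight_as, exploitation_x, frac_post_process,
                      unit_price):
    """Sketch of a harvest/valuation closure (names are assumptions)."""
    def harv_func(N_asx):
        # sum weighted numbers over class and sex, leaving one total per
        # region x; then apply the region's exploitation fraction
        harvested = (N_asx * weight_as[..., np.newaxis]).sum(axis=(0, 1))
        H_x = harvested * exploitation_x
        # valuation: harvest scaled by processing fraction and price
        V_x = H_x * frac_post_process * unit_price
        return H_x, V_x
    return harv_func

N = np.ones((2, 2, 3))                       # 2 classes, 2 sexes, 3 regions
w = np.array([[1.0, 1.0], [2.0, 2.0]])       # weight per class and sex
harv = make_harvest_func(w, np.array([0.5, 0.5, 0.5]), 0.5, 5.0)
H_x, V_x = harv(N)
```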
-
natcap.invest.fisheries.fisheries_model.
set_init_cond_func
(vars_dict)¶ Creates a function to set the initial conditions of the model
Parameters: vars_dict (dictionary) – variables
Returns: init_cond_func – initial conditions function
Return type: lambda function
Example Return Array:
N_asx = np.ndarray([...])
-
natcap.invest.fisheries.fisheries_model.
set_recru_func
(vars_dict)¶ Creates a recruitment function that calculates the number of recruits for class 0 at time t for each region (currently sex-agnostic). Also returns the number of spawners.
Parameters: vars_dict (dictionary) –
Returns: rec_func – recruitment function
Return type: function
Example Output of Returned Recruitment Function:
N_next[0], spawners = rec_func(N_prev)
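For the 'Ricker' recruitment option, a closure of this shape might look like the following sketch, using the standard Ricker form alpha*S*exp(-beta*S) and spreading recruits over regions by larval dispersal. The argument names and the sex-agnostic simplification are assumptions:

```python
import numpy as np

def make_ricker_rec_func(alpha, beta, maturity_a, larval_dispersal_x):
    """Sketch of a Ricker recruitment closure (illustrative only)."""
    def rec_func(N_prev_ax):
        # N_prev_ax: classes x regions; spawners are the mature fraction
        spawners = (N_prev_ax * maturity_a[:, np.newaxis]).sum()
        # Ricker stock-recruitment curve
        recruits_total = alpha * spawners * np.exp(-beta * spawners)
        # distribute class-0 recruits across regions
        return recruits_total * larval_dispersal_x, spawners
    return rec_func

rec = make_ricker_rec_func(
    alpha=2.0, beta=0.0,
    maturity_a=np.array([0.0, 1.0]),         # only class 1 is mature
    larval_dispersal_x=np.array([0.5, 0.5]))
recruits_x, spawners = rec(np.array([[10.0, 10.0], [5.0, 5.0]]))
```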
InVEST Habitat Quality model
-
natcap.invest.habitat_quality.habitat_quality.
check_projections
(ds_uri_dict, proj_unit)¶ Check that a group of gdal datasets are projected and that they are projected in a certain unit.
ds_uri_dict - a dictionary of uris to gdal datasets
proj_unit - a float that specifies what units the projection should be in. ex: 1.0 is meters.
returns - False if one of the datasets is not projected or not in the correct projection unit, otherwise True if all datasets are properly projected
-
natcap.invest.habitat_quality.habitat_quality.
execute
(args)¶ Habitat Quality.
Opens the files necessary for this portion of the habitat_quality model.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation (required)
- landuse_cur_uri (string) – a uri to an input land use/land cover raster (required)
- landuse_fut_uri (string) – a uri to an input land use/land cover raster (optional)
- landuse_bas_uri (string) – a uri to an input land use/land cover raster (optional, but required for rarity calculations)
- threat_raster_folder (string) – a uri to the directory containing all threat rasters (required)
- threats_uri (string) – a uri to an input CSV containing data of all the considered threats. Each row is a degradation source and each column a different attribute of the source with the following names: ‘THREAT’,’MAX_DIST’,’WEIGHT’ (required).
- access_uri (string) – a uri to an input polygon shapefile containing data on the relative protection against threats (optional)
- sensitivity_uri (string) – a uri to an input CSV file of LULC types, whether they are considered habitat, and their sensitivity to each threat (required)
- half_saturation_constant (float) – a python float that determines the spread and central tendency of habitat quality scores (required)
- suffix (string) – a python string that will be inserted into all raster uri paths just before the file extension.
Example Args Dictionary:
{ 'workspace_dir': 'path/to/workspace_dir', 'landuse_cur_uri': 'path/to/landuse_cur_raster', 'landuse_fut_uri': 'path/to/landuse_fut_raster', 'landuse_bas_uri': 'path/to/landuse_bas_raster', 'threat_raster_folder': 'path/to/threat_rasters/', 'threats_uri': 'path/to/threats_csv', 'access_uri': 'path/to/access_shapefile', 'sensitivity_uri': 'path/to/sensitivity_csv', 'half_saturation_constant': 0.5, 'suffix': '_results', }
Returns: none
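The half_saturation_constant enters the quality score roughly as follows; this is a sketch of the relationship described in the User's Guide (with the exponent z fixed at 2.5), not the module's actual implementation:

```python
import numpy as np

def habitat_quality(habitat, degradation, half_sat, z=2.5):
    """Sketch of the quality score: quality falls from the habitat value
    toward zero as degradation rises, with half_sat setting the
    degradation level at which the penalty reaches half strength."""
    d_z = degradation ** z
    return habitat * (1.0 - d_z / (d_z + half_sat ** z))

# degradation equal to half_sat halves the quality of perfect habitat
q = habitat_quality(np.array([1.0, 1.0]), np.array([0.0, 0.5]),
                    half_sat=0.5)
```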
-
natcap.invest.habitat_quality.habitat_quality.
make_dictionary_from_csv
(csv_uri, key_field)¶ Make a basic dictionary representing a CSV file, where the keys are a unique field from the CSV file and the values are a dictionary representing each row
csv_uri - a string for the path to the csv file
key_field - a string representing which field from the csv file is to be used as the key in the dictionary
returns - a python dictionary
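The keyed-rows pattern can be sketched with csv.DictReader; this variant reads from an in-memory string rather than a csv_uri so it is self-contained:

```python
import csv
import io

def make_dictionary_from_csv_text(csv_text, key_field):
    """Sketch: each row becomes a dict of column -> value, stored under
    that row's value in key_field."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row[key_field]: dict(row) for row in reader}

table = make_dictionary_from_csv_text(
    "LULC,NAME\n1,Residential\n11,Urban\n", key_field="LULC")
```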
-
natcap.invest.habitat_quality.habitat_quality.
make_linear_decay_kernel_uri
(max_distance, kernel_uri)¶
-
natcap.invest.habitat_quality.habitat_quality.
map_raster_to_dict_values
(key_raster_uri, out_uri, attr_dict, field, out_nodata, raise_error)¶ Creates a new raster from ‘key_raster’ where the pixel values from ‘key_raster’ are the keys to a dictionary ‘attr_dict’. The values corresponding to those keys are what is written to the new raster. If a value from ‘key_raster’ does not appear as a key in ‘attr_dict’, then an Exception is raised if ‘raise_error’ requires it; otherwise ‘out_nodata’ is written.
key_raster_uri - a GDAL raster uri dataset whose pixel values relate to the keys in ‘attr_dict’
out_uri - a string for the output path of the created raster
attr_dict - a dictionary representing a table of values we are interested in making into a raster
field - a string of which field in the table or key in the dictionary to use as the new raster pixel values
out_nodata - a floating point value that is the nodata value.
raise_error - a string that decides how to handle the case where the value from ‘key_raster’ is not found in ‘attr_dict’. If ‘raise_error’ is ‘values_required’, raise an Exception; if ‘none’, return ‘out_nodata’
returns - a GDAL raster, or raises an Exception if ‘raise_error’ is ‘values_required’ and the value from ‘key_raster’ is not a key in ‘attr_dict’
-
natcap.invest.habitat_quality.habitat_quality.
raster_pixel_count
(dataset_uri)¶ Determines how many of each unique pixel value lie in the dataset
dataset_uri - a GDAL raster dataset
returns - a dictionary whose keys are the unique pixel values and whose values are the number of occurrences
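Once a band has been read into a numpy array, the tally itself reduces to np.unique; a sketch (the nodata handling here is an assumption):

```python
import numpy as np

def pixel_counts(band, nodata=None):
    """Sketch of the per-value tally: unique pixel values mapped to their
    occurrence counts, skipping nodata. A GDAL band would first be read
    into `band` as a numpy array."""
    values, counts = np.unique(band, return_counts=True)
    return {float(v): int(c) for v, c in zip(values, counts)
            if nodata is None or v != nodata}

counts = pixel_counts(np.array([[1, 1, 2], [2, 2, -1]]), nodata=-1)
```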
-
natcap.invest.habitat_quality.habitat_quality.
resolve_ambiguous_raster_path
(uri, raise_error=True)¶ Get the real uri for a raster when we don’t know which file extension the raster may have.
uri - a python string of the file path that includes the name of the file but not its extension
raise_error - a Boolean that indicates whether the function should raise an error if a raster file could not be opened.
return - the resolved uri to the raster
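The extension-probing idea can be sketched like this; the particular list of extensions tried is an assumption, not the function's actual list:

```python
import os
import tempfile

def resolve_raster_path(base_uri, extensions=('', '.tif', '.img'),
                        raise_error=True):
    """Sketch: try the bare path, then a few common raster extensions,
    returning the first candidate that exists on disk."""
    for ext in extensions:
        candidate = base_uri + ext
        if os.path.exists(candidate):
            return candidate
    if raise_error:
        raise IOError('no raster found for %s' % base_uri)
    return None

with tempfile.TemporaryDirectory() as d:
    # create a .tif on disk, then resolve from the extensionless path
    open(os.path.join(d, 'lulc.tif'), 'w').close()
    resolved = resolve_raster_path(os.path.join(d, 'lulc'))
    found_tif = resolved.endswith('lulc.tif')
```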
-
natcap.invest.habitat_quality.habitat_quality.
threat_names_match
(threat_dict, sens_dict, prefix)¶ Check that the threat names in the threat table match the columns in the sensitivity table that represent the sensitivity of each threat on a lulc.
threat_dict - a dictionary representing the threat table:
{‘crp’:{‘THREAT’:’crp’,’MAX_DIST’:‘8.0’,’WEIGHT’:‘0.7’},
‘urb’:{‘THREAT’:’urb’,’MAX_DIST’:‘5.0’,’WEIGHT’:‘0.3’}, ...}
sens_dict - a dictionary representing the sensitivity table:
{‘1’:{‘LULC’:‘1’, ‘NAME’:’Residential’, ‘HABITAT’:‘1’, ‘L_crp’:‘0.4’, ‘L_urb’:‘0.45’, ...},
‘11’:{‘LULC’:‘11’, ‘NAME’:’Urban’, ‘HABITAT’:‘1’, ‘L_crp’:‘0.6’, ‘L_urb’:‘0.3’, ...},
...}
prefix - a string that specifies the prefix to the threat names that is found in the sensitivity table
returns - False if there is a mismatch in threat names or True if everything passes
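The check reduces to a subset test over column names; a minimal sketch using the table shapes shown above:

```python
def threat_names_match(threat_dict, sens_dict, prefix):
    """Sketch: every threat name must appear, with the given prefix,
    as a column in every sensitivity row."""
    expected = {prefix + name for name in threat_dict}
    for row in sens_dict.values():
        if not expected.issubset(row.keys()):
            return False
    return True

ok = threat_names_match(
    {'crp': {'MAX_DIST': '8.0'}, 'urb': {'MAX_DIST': '5.0'}},
    {'1': {'LULC': '1', 'L_crp': '0.4', 'L_urb': '0.45'}},
    prefix='L_')
bad = threat_names_match(
    {'crp': {}, 'urb': {}},
    {'1': {'LULC': '1', 'L_crp': '0.4'}},   # missing L_urb column
    prefix='L_')
```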
This will be the preparatory module for HRA. It will take all unprocessed and pre-processed data from the UI and pass it to the hra_core module.
-
exception
natcap.invest.habitat_risk_assessment.hra.
DQWeightNotFound
¶ Bases:
exceptions.Exception
An exception to be passed if there is a shapefile within the spatial criteria directory, but no corresponding data quality and weight to support it. This would likely indicate that the user is trying to run HRA without having added the criteria name into hra_preprocessor properly.
-
exception
natcap.invest.habitat_risk_assessment.hra.
ImproperAOIAttributeName
¶ Bases:
exceptions.Exception
An exception to pass in hra non core if the AOI zone files do not contain the proper attribute name for individual identification. The attribute should be named ‘name’, and must exist for every shape in the AOI layer.
-
exception
natcap.invest.habitat_risk_assessment.hra.
ImproperCriteriaAttributeName
¶ Bases:
exceptions.Exception
An exception to pass in hra non core if the criteria provided by the user for use in spatially explicit rating do not contain the proper attribute name. The attribute should be named ‘RATING’, and must exist for every shape in every layer provided.
-
natcap.invest.habitat_risk_assessment.hra.
add_crit_rasters
(dir, crit_dict, habitats, h_s_e, h_s_c, grid_size)¶ This will take in the dictionary of criteria shapefiles, rasterize them, and add the URI of that raster to the proper subdictionary within h/s/h-s.
- Input:
- dir- Directory into which the rasterized criteria shapefiles should be
- placed.
- crit_dict- A multi-level dictionary of criteria shapefiles. The
outermost keys refer to the dictionary they belong with. The structure will be as follows:
- {‘h’:
- {‘HabA’:
- {‘CriteriaName: “Shapefile Datasource URI”...}, ...
},
- ‘h_s_c’:
- {(‘HabA’, ‘Stress1’):
- {‘CriteriaName: “Shapefile Datasource URI”, ...}, ...
},
- ‘h_s_e’
- {(‘HabA’, ‘Stress1’):
- {‘CriteriaName: “Shapefile Datasource URI”, ...}, ...
}
}
- h_s_c- A multi-level structure which holds numerical criteria
ratings, as well as weights and data qualities for criteria rasters. h-s will hold only criteria that apply to habitat and stressor overlaps. The structure’s outermost keys are tuples of (Habitat, Stressor) names. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {‘Weight’: 1.0, ‘DQ’: 1.0}
},
}, ‘DS’: “HabitatStressor Raster URI”
}
- habitats- Similar to the h-s dictionary, a multi-level
- dictionary containing all habitat-specific criteria ratings and raster information. The outermost keys are habitat names. Within the dictionary, the habitats[‘habName’][‘DS’] will be the URI of the raster of that habitat.
- h_s_e- Similar to the h-s dictionary, a multi-level dictionary
- containing all stressor-specific criteria ratings and raster information. The outermost keys are tuples of (Habitat, Stressor) names.
- grid_size- An int representing the desired pixel size for the criteria
- rasters.
- Output:
- A set of rasterized criteria files. The criteria shapefiles will be
- burned based on their ‘Rating’ attribute. These will be placed in the ‘dir’ folder.
An appended version of habitats, h_s_e, and h_s_c which will include entries for criteria rasters at ‘Rating’ in the appropriate dictionary. ‘Rating’ will map to the URI of the corresponding criteria dataset.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra.
add_hab_rasters
(dir, habitats, hab_list, grid_size, grid_path)¶ Gets all shapefiles within any directories in hab_list and burns them to rasters.
- Input:
- dir- Directory into which all completed habitat rasters should be
- placed.
- habitats- A multi-level dictionary containing all habitat and
- species-specific criteria ratings and rasters.
- hab_list- File URIs for all shapefiles in the habitats dir, species dir, or
- both.
- grid_size- Int representing the desired pixel dimensions of
- both intermediate and output rasters.
- grid_path- A string for a raster file path on disk. Used as a
- universal base raster to create other rasters which to burn vectors onto.
- Output:
- A modified version of habitats, into which we have placed the URI to
- the rasterized version of the habitat shapefile. It will be placed at habitats[habitatName][‘DS’].
-
natcap.invest.habitat_risk_assessment.hra.
calc_max_rating
(risk_eq, max_rating)¶ Takes in the risk equation and the maximum possible rating, and returns the highest possible per-pixel risk that would be seen on an H-S raster pixel.
- Input:
risk_eq- The equation that will be used to determine risk.
max_rating- The highest possible value that could be given as a criteria rating, data quality, or weight.
Returns: An int representing the highest possible risk value for any given h-s overlap raster.
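Assuming the documented HRA risk forms R = sqrt((E-1)^2 + (C-1)^2) (Euclidean) and R = E*C (Multiplicative), the maximum is reached when both exposure E and consequence C equal max_rating; a sketch:

```python
import math

def calc_max_rating(risk_eq, max_rating):
    """Sketch of the maximum per-pixel risk under the two risk
    equations, with E and C both at their maximum (an assumption)."""
    if risk_eq == 'Euclidean':
        # sqrt((m-1)^2 + (m-1)^2)
        return math.sqrt(2 * (max_rating - 1) ** 2)
    elif risk_eq == 'Multiplicative':
        # m * m
        return max_rating ** 2
    raise ValueError('unknown risk equation: %s' % risk_eq)

euc = calc_max_rating('Euclidean', 3)
mult = calc_max_rating('Multiplicative', 3)
```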
-
natcap.invest.habitat_risk_assessment.hra.
execute
(args)¶ Habitat Risk Assessment.
This function will prepare files passed from the UI to be sent on to the hra_core module.
All inputs are required.
Parameters: - workspace_dir (string) – The location of the directory into which intermediate and output files should be placed.
- csv_uri (string) – The location of the directory containing the CSV files of habitat, stressor, and overlap ratings. Will also contain a .txt JSON file that has directory locations (potentially) for habitats, species, stressors, and criteria.
- grid_size (int) – Represents the desired pixel dimensions of both intermediate and output rasters.
- risk_eq (string) – A string identifying the equation that should be used in calculating risk scores for each H-S overlap cell. This will be either ‘Euclidean’ or ‘Multiplicative’.
- decay_eq (string) – A string identifying the equation that should be used in calculating the decay of stressor buffer influence. This can be ‘None’, ‘Linear’, or ‘Exponential’.
- max_rating (int) – An int representing the highest potential value that should be represented in rating, data quality, or weight in the CSV table.
- max_stress (int) – This is the highest score that is used to rate a criteria within this model run. These values would be placed within the Rating column of the habitat, species, and stressor CSVs.
- aoi_tables (string) – A shapefile containing one or more planning regions for a given model. This will be used to get the average risk value over a larger area. Each potential region MUST contain the attribute “name” as a way of identifying each individual shape.
Example Args Dictionary:
{ 'workspace_dir': 'path/to/workspace_dir', 'csv_uri': 'path/to/csv', 'grid_size': 200, 'risk_eq': 'Euclidean', 'decay_eq': 'None', 'max_rating': 3, 'max_stress': 4, 'aoi_tables': 'path/to/shapefile', }
Returns: None
-
natcap.invest.habitat_risk_assessment.hra.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, will include the entire path. This uses os.listdir as a base, then lambda-transforms the whole list.
- Input:
- path- The location container from which we want to gather all files.
Returns: A list of full URIs contained within ‘path’.
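The full-path variant is a one-liner over os.listdir; a self-contained sketch:

```python
import os
import tempfile

def listdir_full(path):
    """Sketch: prefix each directory entry with its containing path."""
    return [os.path.join(path, name) for name in os.listdir(path)]

with tempfile.TemporaryDirectory() as d:
    # one file in the directory; its entry should carry the full path
    open(os.path.join(d, 'a.shp'), 'w').close()
    entries = listdir_full(d)
    all_absolute = all(e.startswith(d) for e in entries)
```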
-
natcap.invest.habitat_risk_assessment.hra.
make_add_overlap_rasters
(dir, habitats, stress_dict, h_s_c, h_s_e, grid_size)¶ For every pair in h_s_c and h_s_e, want to get the corresponding habitat and stressor raster, and return the overlap of the two. Should add that as the ‘DS’ entry within each (h, s) pair key in h_s_e and h_s_c.
- Input:
- dir- Directory into which all completed h-s overlap files should be
- placed.
- habitats- The habitats criteria dictionary, which will contain a
dict[Habitat][‘DS’]. The structure will be as follows:
- {Habitat A:
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {
- ‘DS’: “CritName Raster URI”, ‘Weight’: 1.0, ‘DQ’: 1.0
}
},
‘DS’: “A Dataset URI” }
}
- stress_dict- A dictionary containing all stressor DS’s. The key will be
- the name of the stressor, and it will map to the URI of the stressor DS.
- h_s_c- A multi-level structure which holds numerical criteria
ratings, as well as weights and data qualities for criteria rasters. h-s will hold criteria that apply to habitat and stressor overlaps, and be applied to the consequence score. The structure’s outermost keys are tuples of (Habitat, Stressor) names. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {‘Weight’: 1.0, ‘DQ’: 1.0}
},
}
}
- h_s_e- Similar to the h_s dictionary, a multi-level
- dictionary containing habitat-stressor-specific criteria ratings and raster information which should be applied to the exposure score. The outermost keys are tuples of (Habitat, Stressor) names.
- grid_size- The desired pixel size for the rasters that will be created
- for each habitat and stressor.
- Output:
- An edited versions of h_s_e and h_s_c, each of which contains an overlap DS at dict[(Hab, Stress)][‘DS’]. That key will map to the URI for the corresponding raster DS.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra.
make_exp_decay_array
(dist_trans_uri, out_uri, buff, nodata)¶ Should create a raster where the area around the land is a function of exponential decay from the land values.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents
- the distance to the closest piece of land.
out_uri- uri for the gdal raster output with the buffered outputs
buff- The distance surrounding the land that the user desires to buffer with exponentially decaying values.
nodata- The value which should be placed into anything not land or buffer area.
Returns: Nothing
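The value assignment can be sketched on a distance array as follows; the decay constant chosen here (value falls to 1% of the land value at the buffer edge) is an illustrative assumption, not the model's actual rate:

```python
import numpy as np

def exp_decay_values(dist, buff, nodata, land_value=1.0):
    """Sketch of an exponentially decaying buffer: land pixels
    (distance 0) keep the land value, pixels within the buffer decay,
    and everything beyond the buffer becomes nodata."""
    k = np.log(0.01) / buff          # decay rate: 1% at dist == buff
    out = land_value * np.exp(k * dist)
    return np.where(dist > buff, nodata, out)

vals = exp_decay_values(np.array([0.0, 50.0, 100.0, 200.0]),
                        buff=100.0, nodata=-1.0)
```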
-
natcap.invest.habitat_risk_assessment.hra.
make_lin_decay_array
(dist_trans_uri, out_uri, buff, nodata)¶ Should create a raster where the area around land is a function of linear decay from the values representing the land.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents
- the distance to the closest piece of land.
out_uri- uri for the gdal raster output with the buffered outputs
buff- The distance surrounding the land that the user desires to buffer with linearly decaying values.
nodata- The value which should be placed into anything not land or buffer area.
Returns: Nothing
-
natcap.invest.habitat_risk_assessment.hra.
make_no_decay_array
(dist_trans_uri, out_uri, buff, nodata)¶ Should create a raster where the buffer zone surrounding the land is buffered with the same values as the land, essentially creating an equally weighted larger landmass.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents
- the distance to the closest piece of land.
out_uri- uri for the gdal raster output with the buffered outputs
buff- The distance surrounding the land that the user desires to buffer with land data values.
nodata- The value which should be placed into anything not land or buffer area.
Returns: Nothing
-
natcap.invest.habitat_risk_assessment.hra.
make_stress_rasters
(dir, stress_list, grid_size, decay_eq, buffer_dict, grid_path)¶ Creates a simple dictionary that maps each stressor name to a rasterized version of that stressor's shapefile. The key is a string containing the stressor name, and the value is the URI of the rasterized shapefile.
- Input:
dir- The directory into which completed rasters should be placed.
stress_list- A list containing stressor shapefile URIs for all stressors desired within the given model run.
grid_size- The pixel size desired for the rasters produced based on the shapefiles.
- decay_eq- A string identifying the equation that should be used
- in calculating the decay of stressor buffer influence.
- buffer_dict- A dictionary that holds desired buffer sizes for each
- stressor. The key is the name of the stressor, and the value is an int which correlates to the desired buffer size.
- grid_path- A string for a raster file path on disk. Used as a
- universal base raster to create other rasters which to burn vectors onto.
- Output:
- A potentially buffered and rasterized version of each stressor
- shapefile provided, which will be stored in ‘dir’.
Returns: stress_dict- A simple dictionary which maps a string key of the stressor name to the URI for the output raster.
-
natcap.invest.habitat_risk_assessment.hra.
make_zero_buff_decay_array
(dist_trans_uri, out_uri, nodata)¶ Creates a raster in the case of a zero buffer width, where all that should remain is land and nodata values.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents
- the distance to the closest piece of land.
out_uri- uri for the gdal raster output with the buffered outputs
nodata- The value which should be placed into anything that is not land.
Returns: Nothing
-
natcap.invest.habitat_risk_assessment.hra.
merge_bounding_boxes
(bb1, bb2, mode)¶ Merge two bounding boxes through union or intersection.
Parameters: - bb1 (list) – [upper_left_x, upper_left_y, lower_right_x, lower_right_y]
- bb2 (list) – [upper_left_x, upper_left_y, lower_right_x, lower_right_y]
- mode (string) –
Returns: A list of the merged bounding boxes.
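For boxes in [upper_left_x, upper_left_y, lower_right_x, lower_right_y] form with y decreasing downward, union and intersection differ only in which extreme is kept per coordinate; a sketch:

```python
def merge_bounding_boxes(bb1, bb2, mode):
    """Sketch of union/intersection for [ulx, uly, lrx, lry] boxes,
    where the upper-left y is the larger y value."""
    if mode == 'union':
        picks = (min, max, max, min)          # grow outward
    elif mode == 'intersection':
        picks = (max, min, min, max)          # shrink inward
    else:
        raise ValueError('mode must be "union" or "intersection"')
    return [pick(a, b) for pick, a, b in zip(picks, bb1, bb2)]

u = merge_bounding_boxes([0, 10, 5, 0], [2, 8, 7, -2], 'union')
i = merge_bounding_boxes([0, 10, 5, 0], [2, 8, 7, -2], 'intersection')
```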
-
natcap.invest.habitat_risk_assessment.hra.
unpack_over_dict
(csv_uri, args)¶ This throws the dictionary coming from the pre-processor into the equivalent dictionaries in args so that they can be processed before being passed into the core module.
- Input:
- csv_uri- Reference to the folder location of the CSV tables containing
- all habitat and stressor rating information.
- args- The dictionary into which the individual ratings dictionaries
- should be placed.
- Output:
A modified args dictionary containing dictionary versions of the CSV tables located in csv_uri. The dictionaries should be of the forms as follows.
- h_s_c- A multi-level structure which will hold all criteria ratings,
both numerical and raster that apply to habitat and stressor overlaps. The structure, whose keys are tuples of (Habitat, Stressor) names and map to an inner dictionary will have 2 outer keys containing numeric-only criteria, and raster-based criteria. At this time, we should only have two entries in a criteria raster entry, since we have yet to add the rasterized versions of the criteria.
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {‘Weight’: 1.0, ‘DQ’: 1.0}
},
}
}
- habitats- Similar to the h-s dictionary, a multi-level
- dictionary containing all habitat-specific criteria ratings and weights and data quality for the rasters.
- h_s_e- Similar to the h-s dictionary, a multi-level dictionary
- containing habitat stressor-specific criteria ratings and weights and data quality for the rasters.
Returns nothing.
This is the core module for HRA functionality. This will perform all HRA calcs, and return the appropriate outputs.
-
natcap.invest.habitat_risk_assessment.hra_core.
aggregate_multi_rasters_uri
(aoi_rast_uri, rast_uris, rast_labels, ignore_value_list=[])¶ Will take a stack of rasters and an AOI, and return a dictionary containing the number of overlap pixels, and the value of those pixels for each overlap of raster and AOI.
- Input:
- aoi_rast_uri- The location of an AOI raster which MUST have individual ID
- numbers with the attribute name ‘BURN_ID’ for each feature on the map.
- rast_uris- List of locations of the rasters which should be overlapped
- with the AOI.
- rast_labels- Names for each raster layer that will be retrievable from
- the output dictionary.
- ignore_value_list- Optional argument that provides a list of values
- which should be ignored if they crop up for a pixel value of one of the layers.
Returns: layer_overlap_info- {AOI Data Value 1: {rast_label: [# of pix, pix value], rast_label: [200, 2567.97], ...}}
-
natcap.invest.habitat_risk_assessment.hra_core.
calc_C_raster
(out_uri, h_s_list, h_s_denom_dict, h_list, h_denom_dict, h_uri, h_s_uri)¶ Should return a raster burned with a ‘C’ raster that is a combination of all the rasters passed in within the list, divided by the denominator.
- Input:
- out_uri- The location to which the calculated C raster should be
- burned.
- h_s_list- A list of rasters burned with the equation r/dq*w for every
- criteria applicable for that h, s pair.
- h_s_denom_dict- A dictionary containing criteria names applicable to
- this particular h,s pair. Each criteria string name maps to a double representing the denominator for that raster, using the equation 1/dq*w.
- h_list- A list of rasters burned with the equation r/dq*w for every
- criteria applicable for that h.
- h_denom_dict- A dictionary containing criteria names applicable to this
- particular habitat. Each criteria string name maps to a double representing the denominator for that raster, using the equation 1/dq*w.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
calc_E_raster
(out_uri, h_s_list, denom_dict, h_s_base_uri, h_base_uri)¶ Should return a raster burned with an ‘E’ raster that is a combination of all the rasters passed in within the list, divided by the denominator.
- Input:
out_uri- The location to which the E raster should be burned.
- h_s_list- A list of rasters burned with the equation r/dq*w for every
- criteria applicable for that h, s pair.
- denom_dict- A double representing the sum total of all applicable
- criteria using the equation 1/dq*w.
Returns nothing.
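Per pixel, the E and C scores described here reduce to SUM(r/(dq*w)) divided by the denominator SUM(1/(dq*w)). A numpy sketch of that combination (illustrative only; the real functions operate on GDAL rasters on disk and handle nodata):

```python
import numpy as np

def combine_criteria(rating_arrays, dqs, weights):
    """Combine per-criterion rating arrays into one E or C score array.

    Each criterion contributes rating / (dq * weight); the summed result
    is divided by SUM(1 / (dq * weight)), matching the equations cited
    in calc_E_raster / calc_C_raster.
    """
    numerator = np.zeros_like(rating_arrays[0], dtype=float)
    denominator = 0.0
    for rating, dq, weight in zip(rating_arrays, dqs, weights):
        numerator += rating / (dq * weight)
        denominator += 1.0 / (dq * weight)
    return numerator / denominator
```
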
-
natcap.invest.habitat_risk_assessment.hra_core.
copy_raster
(in_uri, out_uri)¶ Quick function that will copy the raster at in_uri, and put it into out_uri.
-
natcap.invest.habitat_risk_assessment.hra_core.
execute
(args)¶ This provides the main calculation functionality of the HRA model. This will call all parts necessary for calculation of final outputs.
- Inputs:
- args- Dictionary containing everything that hra_core will need to
- complete the rest of the model run. It will contain the following.
- args[‘workspace_dir’]- Directory in which all data resides. Output
- and intermediate folders will be subfolders of this one.
- args[‘h_s_c’]- The same as intermediate/’h-s’, but with the addition
of a 3rd key ‘DS’ to the outer dictionary layer. This will map to a dataset URI that shows the potentially buffered overlap between the habitat and stressor. Additionally, any raster criteria will be placed in their criteria name subdictionary. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {
- ‘DS’: “CritName Raster URI”, ‘Weight’: 1.0, ‘DQ’: 1.0
}
},
‘DS’: “A-1 Dataset URI” }
}
- args[‘habitats’]- Similar to the h-s dictionary, a multi-level
- dictionary containing all habitat-specific criteria ratings and rasters. In this case, however, the outermost key is by habitat name, and habitats[‘habitatName’][‘DS’] points to the rasterized habitat shapefile URI provided by the user.
- args[‘h_s_e’]- Similar to the h_s_c dictionary, a multi-level
- dictionary containing habitat-stressor-specific criteria ratings and shapes. The same as intermediate/’h-s’, but with the addition of a 3rd key ‘DS’ to the outer dictionary layer. This will map to a dataset URI that shows the potentially buffered overlap between the habitat and stressor. Additionally, any raster criteria will be placed in their criteria name subdictionary.
- args[‘risk_eq’]- String which identifies the equation to be used
- for calculating risk. The core module should check for possibilities, and send to a different function when deciding R dependent on this.
- args[‘max_risk’]- The highest possible risk value for any given pairing
- of habitat and stressor.
- args[‘max_stress’]- The largest number of stressors that the user
- believes will overlap. This will be used to get an accurate estimate of risk.
- args[‘aoi_tables’]- May or may not exist within this model run, but if
- it does, the user desires to have the average risk values by stressor/habitat using E/C axes for each feature in the AOI layer specified by ‘aoi_tables’. If the risk_eq is ‘Euclidean’, this will create risk plots, otherwise it will just create the standard HTML table for either ‘Euclidean’ or ‘Multiplicative.’
- args[‘aoi_key’]- The form of the word ‘Name’ that the aoi layer uses
- for this particular model run.
- args[‘warnings’]- A dictionary containing items which need to be
acted upon by hra_core. These will be split into two categories. ‘print’ contains statements which will be printed using logger.warn() at the end of a run. ‘unbuff’ is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster.
- {‘print’: [‘This is a warning to the user.’, ‘This is another.’],
- ‘unbuff’: [(HabA, Stress1), (HabC, Stress2)]
}
- Outputs:
--Intermediate-- These should be the temp risk and criteria files needed for the final output calcs.
--Output--
- /output/maps/recov_potent_H[habitatname].tif- Raster layer
- depicting the recovery potential of each individual habitat.
- /output/maps/cum_risk_H[habitatname]- Raster layer depicting the
- cumulative risk for all stressors in a cell for the given habitat.
- /output/maps/ecosys_risk- Raster layer that depicts the sum of all
- cumulative risk scores of all habitats for that cell.
- /output/maps/[habitatname]_HIGH_RISK- A raster-shaped shapefile
- containing only the “high risk” areas of each habitat, defined as being above a certain risk threshold.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_aoi_tables
(out_dir, aoi_pairs)¶ This function will take in a shapefile containing multiple AOIs, and output a table containing values averaged over those areas.
- Input:
- out_dir- The directory into which the completed HTML tables should be
- placed.
- aoi_pairs- Replacement for avgs_dict, holds all the averaged values on
a H, S basis.
- {‘AOIName’:
}
- Output:
- A set of HTML tables which will contain averaged values of E, C, and risk for each H, S pair within each AOI. Additionally, the tables will contain a column for risk %, which is the averaged risk value in that area divided by the total potential risk for a given pixel in the map.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_ecosys_risk_raster
(dir, h_dict)¶ This will make the compiled raster for all habitats within the ecosystem. The ecosystem raster will be a direct sum of each of the included habitat rasters.
- Input:
dir- The directory in which all completed rasters should be placed.
- h_dict- A dictionary of raster dataset URIs which can be combined to
create an overall ecosystem raster. The key is the habitat name, and the value is the dataset URI.
{‘Habitat A’: “Overall Habitat A Risk Map URI”, ‘Habitat B’: “Overall Habitat B Risk URI”
...}
- Output:
- ecosys_risk.tif- An overall risk raster for the ecosystem. It will
- be placed in the dir folder.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_hab_risk_raster
(dir, risk_dict)¶ This will create a combined raster for all habitat-stressor pairings within one habitat. It should return a list of open rasters that correspond to all habitats within the model.
- Input:
- dir- The directory in which all completed habitat rasters should be
- placed.
- risk_dict- A dictionary containing the risk rasters for each pairing of
habitat and stressor. The key is the tuple of (habitat, stressor), and the value is the raster dataset URI corresponding to that combination.
{(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
- Output:
- A cumulative risk raster for every habitat included within the model.
Returns: - h_rasters- A dictionary containing habitat names mapped to the dataset
- URI of the overarching habitat risk map for this model run.
{‘Habitat A’: “Overall Habitat A Risk Map URI”, ‘Habitat B’: “Overall Habitat B Risk URI”
...}
- h_s_rasters- A dictionary that maps a habitat name to the risk rasters
- for each of the applicable stressors.
- {‘HabA’: [“A-1 Risk Raster URI”, “A-2 Risk Raster URI”, ...],
- ‘HabB’: [“B-1 Risk Raster URI”, “B-2 Risk Raster URI”, ...], ...
}
-
natcap.invest.habitat_risk_assessment.hra_core.
make_recov_potent_raster
(dir, crit_lists, denoms)¶ This will do the same h-s calculation as used for the individual E/C calculations, but instead will use r/dq as the equation for each criteria. The full equation will be:
SUM HAB CRITS( 1/dq )
- Input:
dir- Directory in which the completed raster files should be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be
combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA):
- [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”, “raster 1 URI”],
- ...
},
- ‘h_s_e’: { (hab1, stressA): [“indiv num raster URI”]
- }
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the combined denominator for a given
H-S overlap. Once all of the rasters are combined, each H-S raster can be divided by this.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): {
- ‘CritName’: 2.0, ...},
- (hab1, stressB): {‘CritName’: 1.3, ...}
- },
- ‘h’: { hab1: {‘CritName’: 1.3, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 1.3, ...}
- }
}
- ‘Recovery’: { hab1: {‘critname’: 1.6, ...}
- hab2: ...
}
}
- Output:
- A raster file for each of the habitats included in the model displaying
- the recovery potential within each potential grid cell.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_euc
(base_uri, e_uri, c_uri, risk_uri)¶ Combines the E and C rasters according to the euclidean combination equation.
- Input:
- base_uri- The h-s overlap raster, including potentially decayed values
- from the stressor layer.
- e_uri- The r/dq*w burned raster for all stressor-specific criteria
- in this model run.
- c_uri- The r/dq*w burned raster for all habitat-specific and
- habitat-stressor-specific criteria in this model run.
- risk_uri- The file path to which we should be burning our new raster.
Returns a raster representing the Euclidean combination of the E raster, C raster, and the base raster. The equation will be sqrt((C-1)^2 + (E-1)^2)
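The Euclidean combination can be sketched per pixel with numpy (a sketch only; the real function burns the result to risk_uri and masks by the base overlap raster):

```python
import numpy as np

def euclidean_risk(e_array, c_array):
    """Per-pixel Euclidean risk: sqrt((C-1)^2 + (E-1)^2)."""
    return np.sqrt((c_array - 1.0) ** 2 + (e_array - 1.0) ** 2)
```
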
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_mult
(base_uri, e_uri, c_uri, risk_uri)¶ Combines the E and C rasters according to the multiplicative combination equation.
- Input:
- base_uri- The h-s overlap raster, including potentially decayed values
- from the stressor layer.
- e_uri- The r/dq*w burned raster for all stressor-specific criteria
- in this model run.
- c_uri- The r/dq*w burned raster for all habitat-specific and
- habitat-stressor-specific criteria in this model run.
- risk_uri- The file path to which we should be burning our new raster.
- Returns the URI for a raster representing the multiplied E raster,
- C raster, and the base raster.
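A per-pixel numpy sketch of the multiplicative combination, assuming (this masking behavior is an assumption) that risk is only defined where the h-s overlap (base) raster has data:

```python
import numpy as np

def multiplicative_risk(base, e_array, c_array, nodata=-1.0):
    """Per-pixel multiplicative risk E * C, masked to the h-s overlap.

    Pixels where the base (overlap) raster is nodata stay nodata.
    """
    risk = e_array * c_array
    return np.where(base == nodata, nodata, risk)
```
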
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_plots
(out_dir, aoi_pairs, max_risk, max_stress, num_stress, num_habs)¶ This function will produce risk plots when the risk equation is euclidean.
Parameters: - out_dir (string) – The directory into which the completed risk plots should be placed.
- aoi_pairs (dictionary) –
- {‘AOIName’:
}
- max_risk (float) – Double representing the highest potential value for a single h-s raster. The amount of risk for a given Habitat raster would be SUM(s) for a given h.
- max_stress (float) – The largest number of stressors that the user believes will overlap. This will be used to get an accurate estimate of risk.
- num_stress (dict) – A dictionary that simply associates every habitat with the number of stressors associated with it. This will help us determine the max E/C we should be expecting in our overarching ecosystem plot.
Returns: None
- Outputs:
A set of .png images containing the matplotlib plots for every H-S combination. Within that, each AOI will be displayed as plotted by (E,C) values.
A single .png that is the “ecosystem plot,” where the E values for each AOI are summed across habitat-stressor pairs.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_rasters
(h_s_c, habs, inter_dir, crit_lists, denoms, risk_eq, warnings)¶ This will combine all of the intermediate criteria rasters that we pre-processed with their r/dq*w. At this juncture, we should be able to straight add the E/C within themselves. The way in which the E/C rasters are combined depends on the risk equation desired.
- Input:
- h_s_c- Args dictionary containing much of the H-S overlap data in
- addition to the H-S base rasters. (In this function, we are only using it for the base h-s raster information.)
- habs- Args dictionary containing habitat criteria information in
- addition to the habitat base rasters. (In this function, we are only using it for the base raster information.)
- inter_dir- Intermediate directory in which the H_S risk-burned rasters
- can be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be
combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”,
- “raster 1 URI”, ...],
...
},
- ‘h_s_e’: { (hab1, stressA): [“indiv num raster URI”,
- ...]
}
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the denominator scores for each overlap
for each criteria. These can be combined to get the final denom by which the rasters should be divided.
- {‘Risk’: { ‘h_s_c’: { (hab1, stressA): {‘CritName’: 2.0,...},
- (hab1, stressB): {CritName’: 1.3, ...}
},
- ‘h’: { hab1: {‘CritName’: 2.5, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 2.3},
- }
}
- ‘Recovery’: { hab1: {‘CritName’: 3.4},
- hab2: ...
}
}
- risk_eq- A string description of the desired equation to use when
- performing risk calculation.
- warnings- A dictionary containing items which need to be acted upon by
hra_core. These will be split into two categories. ‘print’ contains statements which will be printed using logger.warn() at the end of a run. ‘unbuff’ is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster.
- {‘print’: [‘This is a warning to the user.’, ‘This is another.’],
- ‘unbuff’: [(HabA, Stress1), (HabC, Stress2)]
}
- Output:
- A new raster file for each overlapping of habitat and stressor. This file will be the overall risk for that pairing from all H/S/H-S subdictionaries.
Returns: risk_rasters- A simple dictionary that maps a tuple of (Habitat, Stressor) to the URI for the risk raster created when the various sub components (H/S/H_S) are combined. {(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_shapes
(dir, crit_lists, h_dict, h_s_dict, max_risk, max_stress)¶ This function will take in the current rasterized risk files for each habitat, and output a shapefile where the areas that are “HIGH RISK” (high percentage of risk over potential risk) are the only existing polygonized areas.
Additionally, we also want to create a shapefile which is only the “low risk” areas- actually, those that are just not high risk (it’s the combination of low risk areas and medium risk areas).
Since the pygeoprocessing.geoprocessing function can only take in ints, we want to predetermine
which areas are or are not going to be in the shapefile, and pass in a raster that contains only 1 or nodata.
- Input:
dir- Directory in which the completed shapefiles should be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be
combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: { (hab1, stressA): [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”, “raster 1 URI”],
- ...
},
- ‘h_s_e’: {(hab1, stressA): [“indiv num raster URI”]
- }
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- h_dict- A dictionary that contains raster dataset URIs corresponding
- to each of the habitats in the model. The key in this dictionary is the name of the habitat, and it maps to the open dataset.
- h_s_dict- A dictionary that maps a habitat name to the risk rasters
for each of the applicable stressors.
- {‘HabA’: [“A-1 Risk Raster URI”, “A-2 Risk Raster URI”, ...],
- ‘HabB’: [“B-1 Risk Raster URI”, “B-2 Risk Raster URI”, ...], ...
}
- max_risk- Double representing the highest potential value for a single
- h-s raster. The amount of risk for a given Habitat raster would be SUM(s) for a given h.
- max_stress- The largest number of stressors that the user believes will
- overlap. This will be used to get an accurate estimate of risk.
- Output:
- Returns two shapefiles for every habitat, one which shows features only for the areas that are “high risk” within that habitat, and one which shows features only for the combined low + medium risk areas.
- Return:
- num_stress- A dictionary containing the number of stressors being
- associated with each habitat. The key is the string name of the habitat, and it maps to an int counter of number of stressors.
-
natcap.invest.habitat_risk_assessment.hra_core.
pre_calc_avgs
(inter_dir, risk_dict, aoi_uri, aoi_key, risk_eq, max_risk)¶ This function is a helper to make_aoi_tables, and will just handle pre-calculation of the average values for each AOI zone.
- Input:
- inter_dir- The directory which contains the individual E and C rasters.
- We can use these to get the avg. E and C values per area. Since we don’t really have these in any sort of dictionary, will probably just need to explicitly call each individual file based on the names that we pull from the risk_dict keys.
- risk_dict- A simple dictionary that maps a tuple of
(Habitat, Stressor) to the URI for the risk raster created when the various sub components (H/S/H_S) are combined.
{(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
- aoi_uri- The location of the AOI zone files. Each feature within this
- file (identified by a ‘name’ attribute) will be used to average an area of E/C/Risk values.
- risk_eq- A string identifier, either ‘Euclidean’ or ‘Multiplicative’
- that tells us which equation should be used for calculation of risk. This will be used to get the risk value for the average E and C.
max_risk- The user-reported highest risk score present in the CSVs.
Returns: - avgs_dict- A multi level dictionary to hold the average values that
- will be placed into the HTML table.
- {‘HabitatName’:
- {‘StressorName’:
- [{‘Name’: AOIName, ‘E’: 4.6, ‘C’: 2.8, ‘Risk’: 4.2},
- {...},
... ]
}
aoi_names- Quick and dirty way of getting the AOI keys.
-
natcap.invest.habitat_risk_assessment.hra_core.
pre_calc_denoms_and_criteria
(dir, h_s_c, hab, h_s_e)¶ Want to return two dictionaries in the format of the following: (Note: the individual num raster comes from the crit_ratings subdictionary and should be pre-summed together to get the numerator for that particular raster. )
- Input:
- dir- Directory into which the rasterized criteria can be placed. This
- will need to have a subfolder added to it specifically to hold the rasterized criteria for now.
- h_s_c- A multi-level structure which holds all criteria ratings,
both numerical and raster that apply to habitat and stressor overlaps. The structure, whose keys are tuples of (Habitat, Stressor) names and map to an inner dictionary will have 3 outer keys containing numeric-only criteria, raster-based criteria, and a dataset that shows the potentially buffered overlap between the habitat and stressor. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {
- ‘DS’: “CritName Raster URI”,
- ‘Weight’: 1.0, ‘DQ’: 1.0}
},
‘DS’: “A-1 Raster URI” }
}
- hab- Similar to the h-s dictionary, a multi-level
- dictionary containing all habitat-specific criteria ratings and rasters. In this case, however, the outermost key is by habitat name, and habitats[‘habitatName’][‘DS’] points to the rasterized habitat shapefile URI provided by the user.
- h_s_e- Similar to the h_s_c dictionary, a multi-level
- dictionary containing habitat-stressor-specific criteria ratings and rasters. The outermost key is by (habitat, stressor) pair, but the criteria will be applied to the exposure portion of the risk calcs.
- Output:
- Creates a version of every criteria for every h-s pairing that is burned with both a r/dq*w value for risk calculation, as well as a r/dq burned raster for recovery potential calculations.
Returns: - crit_lists- A dictionary containing pre-burned criteria URI which can
- be combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’:
- { (hab1, stressA): [“indiv num raster”, “raster 1”, ...],
- (hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”,
- “raster 1 URI”, ...],
...
},
- ‘h_s_e’: {
- (hab1, stressA):
- [“indiv num raster URI”, ...]
}
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the combined denominator for a given
- H-S overlap. Once all of the rasters are combined, each H-S raster
can be divided by this.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): {‘CritName’: 2.0, ...},
- (hab1, stressB): {‘CritName’: 1.3, ...}
},
- ‘h’: { hab1: {‘CritName’: 1.3, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 1.3, ...}
- }
}
- ‘Recovery’: { hab1: 1.6,
- hab2: ...
}
}
-
natcap.invest.habitat_risk_assessment.hra_core.
raster_to_polygon
(raster_uri, out_uri, layer_name, field_name)¶ This will take in a raster file, and output a shapefile of the same area and shape.
- Input:
- raster_uri- The raster that needs to be turned into a shapefile. This
- is only the URI to the raster, we will need to get the band.
out_uri- The desired URI for the new shapefile.
- layer_name- The name of the layer going into the new shapefile.
- field_name- The name of the field that will contain the raster pixel
- value.
- Output:
- This will be a shapefile in the shape of the raster. The raster being passed in will be solely “high risk” areas that contain data, and nodata values for everything else.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
rewrite_avgs_dict
(avgs_dict, aoi_names)¶ Aftermarket rejigger of the avgs_dict setup so that everything is AOI centric instead. Should produce something like the following:
- {‘AOIName’:
}
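A sketch of that pivot, assuming avgs_dict has the {‘HabitatName’: {‘StressorName’: [{‘Name’: AOIName, ‘E’: ..., ‘C’: ..., ‘Risk’: ...}, ...]}} layout described under pre_calc_avgs (the exact output layout is assumed):

```python
def rewrite_avgs_dict(avgs_dict, aoi_names):
    """Pivot habitat -> stressor -> AOI averages so they are keyed by AOI."""
    aoi_dict = dict((name, []) for name in aoi_names)
    for habitat, stressors in avgs_dict.items():
        for stressor, entries in stressors.items():
            for entry in entries:
                # Re-key each averaged record under its AOI name.
                aoi_dict[entry['Name']].append(
                    {'Habitat': habitat, 'Stressor': stressor,
                     'E': entry['E'], 'C': entry['C'], 'Risk': entry['Risk']})
    return aoi_dict
```
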
Entry point for the Habitat Risk Assessment module
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ImproperCriteriaSpread
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor which can be passed if there are not one or more criteria in each of the 3 criteria categories: resilience, exposure, and sensitivity.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ImproperECSelection
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that should catch selections for exposure vs consequence scoring that are not either E or C. The user must decide in this column which the criteria applies to, and may only designate this with an ‘E’ or ‘C’.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
MissingHabitatsOrSpecies
¶ Bases:
exceptions.Exception
An exception to pass if the hra_preprocessor args dictionary being passed is missing a habitats directory or a species directory.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
MissingSensOrResilException
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that catches h-s pairings that are missing either Sensitivity or Resilience (C) criteria, though not both. The user must either zero all criteria for that pair, or make sure that both E and C are represented.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
NA_RatingsError
¶ Bases:
exceptions.Exception
An exception that is raised on an invalid ‘NA’ input.
When one or more Ratings value is set to “NA” for a habitat - stressor pair, but not ALL are set to “NA”. If ALL Rating values for a habitat - stressor pair are “NA”, then the habitat - stressor pair is considered to have NO interaction.
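The all-or-nothing “NA” rule above can be sketched as follows (function and message names are illustrative, not the module’s own):

```python
def check_na_ratings(ratings):
    """Apply the all-or-nothing 'NA' rule for one habitat-stressor pair.

    Returns True if the pair has NO interaction (every Rating is 'NA'),
    False if every Rating is numeric, and raises on a partial mix of
    'NA' and numeric values.
    """
    na_flags = [str(r).upper() == 'NA' for r in ratings]
    if all(na_flags):
        return True  # pair is treated as having no interaction
    if any(na_flags):
        raise ValueError(
            "Either all or none of a pair's Rating values may be 'NA'")
    return False
```
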
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
NotEnoughCriteria
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor which can be passed if the number of criteria in the resilience, exposure, and sensitivity categories all sums to less than 4.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
UnexpectedString
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that should catch any strings that are left over in the CSVs. Since everything from the CSVs is being cast to floats, this will be a hook off of python’s ValueError, which will re-raise our exception with a more accurate message.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ZeroDQWeightValue
¶ Bases:
exceptions.Exception
An exception specifically for the parsing of the preprocessor tables in which the model should break loudly if a user tries to enter a zero value for either a data quality or a weight. However, we should confirm that it will only break if the rating is not also zero. If they’re removing the criteria entirely from that H-S overlap, it should be allowed.
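The rule that exception encodes (zero DQ or Weight is only legal when the Rating is also zero, i.e. the criterion is being removed from the H-S overlap) can be sketched as:

```python
def check_dq_weight(rating, dq, weight):
    """Raise if Data Quality or Weight is 0 while the Rating is nonzero.

    A zero Rating means the criterion is being removed from the H-S
    overlap entirely, so zero DQ/Weight values are allowed in that case.
    """
    if (dq == 0 or weight == 0) and rating != 0:
        raise ValueError(
            'Data Quality and Weight must be nonzero unless Rating is 0')
```
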
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
error_check
(line, hab_name, stress_name)¶ Throwing together a simple error checking function for all of the inputs coming from the CSV file. Want to do checks for strings vs floats, as well as some explicit string checking for ‘E’/’C’.
- Input:
- line- An array containing a line of H-S overlap data. The format of a
line would look like the following:
[‘CritName’, ‘Rating’, ‘Weight’, ‘DataQuality’, ‘Exp/Cons’]
The following restrictions should be placed on the data:
- CritName- This will be propagated by default by
- HRA_Preprocessor. Since it’s coming in as a string, we shouldn’t need to check anything.
- Rating- Can either be the explicit string ‘SHAPE’, which would
- be placed automatically by HRA_Preprocessor, or a float. ERROR: if string that isn’t ‘SHAPE’.
- Weight- Must be a float (or an int), but cannot be 0.
- ERROR: if string, or anything not castable to float, or 0.
- DataQuality- Must be a float (or an int), but cannot be 0.
- ERROR: if string, or anything not castable to float, or 0.
- Exp/Cons- Must be the string ‘E’ or ‘C’.
- ERROR: if string that isn’t one of the acceptable ones, or ANYTHING else.
Returns nothing, should raise exception if there’s an issue.
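The restrictions listed above can be sketched as follows (a sketch only: the real function raises the module’s own exception classes with different messages):

```python
def error_check(line, hab_name, stress_name):
    """Validate one CSV line of the form
    ['CritName', 'Rating', 'Weight', 'DataQuality', 'Exp/Cons']."""
    where = '%s / %s' % (hab_name, stress_name)
    crit_name, rating, weight, dq, exp_cons = line
    # Rating: either the literal string 'SHAPE' or castable to float.
    if rating != 'SHAPE':
        try:
            float(rating)
        except ValueError:
            raise ValueError("%s: Rating must be a number or 'SHAPE'" % where)
    # Weight and DataQuality: numeric and nonzero.
    for label, value in (('Weight', weight), ('DataQuality', dq)):
        try:
            number = float(value)
        except ValueError:
            raise ValueError('%s: %s must be a number' % (where, label))
        if number == 0:
            raise ValueError('%s: %s may not be 0' % (where, label))
    # Exp/Cons: exactly 'E' or 'C'.
    if exp_cons not in ('E', 'C'):
        raise ValueError("%s: Exp/Cons must be 'E' or 'C'" % where)
```
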
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
execute
(args)¶ Habitat Risk Assessment Preprocessor.
Want to read in multiple hab/stressors directories, in addition to named criteria, and make an appropriate csv file.
Parameters: - args['workspace_dir'] (string) – The directory to dump the output CSV files to. (required)
- args['habitats_dir'] (string) – A directory of shapefiles that are habitats. This is not required, and may not exist if there is a species layer directory. (optional)
- args['species_dir'] (string) – Directory which holds all species shapefiles, but may or may not exist if there is a habitats layer directory. (optional)
- args['stressors_dir'] (string) – A directory of ArcGIS shapefiles that are stressors. (required)
- args['exposure_crits'] (list) – list containing string names of exposure criteria (hab-stress) which should be applied to the exposure score. (optional)
- args['sensitivity-crits'] (list) – List containing string names of sensitivity (habitat-stressor overlap specific) criteria which should be applied to the consequence score. (optional)
- args['resilience_crits'] (list) – List containing string names of resilience (habitat or species-specific) criteria which should be applied to the consequence score. (optional)
- args['criteria_dir'] (string) – Directory which holds the criteria shapefiles. May not exist if the user does not desire criteria shapefiles. This needs to be in a VERY specific format, which shall be described in the user’s guide. (optional)
Returns: None
This function creates a series of CSVs within
args['workspace_dir']
. There will be one CSV for every habitat/species. These files will contain information relevant to each habitat or species, including all criteria. The criteria will be broken up into those which apply to only the habitat, and those which apply to the overlap of that habitat and each stressor. This function also creates a JSON file containing vars that need to be passed on to hra non-core when that gets run. The JSON file should live inside the preprocessor folder which will be created in
args['workspace_dir']
. It will contain habitats_dir, species_dir, stressors_dir, and criteria_dir.
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, will include the entire path. This will use os as a base, then just lambda transform the whole list.
- Input:
- path- The location container from which we want to gather all files.
Returns: A list of full URIs contained within ‘path’.
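A minimal sketch of this helper:

```python
import os

def listdir(path):
    """os.listdir, but returning the full path of each entry."""
    return [os.path.join(path, name) for name in os.listdir(path)]
```
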
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
make_crit_shape_dict
(crit_uri)¶ This will take in the location of the file structure, and will return a dictionary containing all the shapefiles that we find. Hypothetically, we should be able to parse easily through the files, since it should be EXACTLY of the specs that we laid out.
- Input:
- crit_uri- Location of the file structure containing all of the
- shapefile criteria.
Returns: A dictionary containing shapefile URI’s, indexed by their criteria name, in addition to which dictionaries and h-s pairs they apply to. The structure will be as follows: - {‘h’:
- {‘HabA’:
- {‘CriteriaName: “Shapefile Datasource URI”...}, ...
},
- ‘h_s_c’:
- {(‘HabA’, ‘Stress1’):
- {‘CriteriaName: “Shapefile Datasource URI”, ...}, ...
},
- ‘h_s_e’
- {(‘HabA’, ‘Stress1’):
- {‘CriteriaName: “Shapefile Datasource URI”, ...}, ...
}
}
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_hra_tables
(folder_uri)¶ This takes in the directory containing the criteria rating CSVs, and returns a coherent set of dictionaries that can be used to do EVERYTHING in non-core and core.
It will return a massive dictionary containing all of the subdictionaries needed by non core, as well as directory URI’s. It will be of the following form:
{'habitats_dir': 'Habitat Directory URI',
 'species_dir': 'Species Directory URI',
 'stressors_dir': 'Stressors Directory URI',
 'criteria_dir': 'Criteria Directory URI',
 'buffer_dict':
    {'Stressor 1': 50, 'Stressor 2': ..., },
 'h_s_c':
    {(Habitat A, Stressor 1):
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'h_s_e':
    {(Habitat A, Stressor 1):
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'habitats':
    {Habitat A:
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'warnings':
    {'print':
        ['This is a warning to the user.', 'This is another.'],
     'unbuff':
        [(HabA, Stress1), (HabC, Stress2)]
    }
}
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_overlaps
(uri, habs, h_s_e, h_s_c)¶ This function will take in a location, and update the dictionaries being passed with the new Hab/Stress subdictionary info that we’re getting from the CSV at URI.
- Input:
- uri- The location of the CSV that we want to get ratings info from.
- This will contain information for a given habitat’s individual criteria ratings, as well as criteria ratings for the overlap of every stressor.
- habs- A dictionary which contains all resilience specific criteria
info. The key for these will be the habitat name. It will map to a subdictionary containing criteria information. The whole dictionary will look like the following:
{Habitat A:
    {'Crit_Ratings':
        {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
        },
     'Crit_Rasters':
        {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
        },
    }
}
- h_s_e- A dictionary containing all information applicable to exposure
- criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- h_s_c- A dictionary containing all information applicable to
- sensitivity criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_stress_buffer
(uri)¶ This will take the stressor buffer CSV and parse it into a dictionary where the stressor name maps to a float of the amount by which it should be buffered.
- Input:
- uri- The location of the CSV file from which we should pull the buffer
- amounts.
Returns: A dictionary containing stressor names mapped to their corresponding buffer amounts. The float may be 0, but may not be a string. The form will be the following: {‘Stress 1’: 2000, ‘Stress 2’: 1500, ‘Stress 3’: 0, ...}
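A minimal sketch of the parsing step, assuming a two-column CSV of "stressor name, buffer amount" rows (the actual file layout used by the preprocessor may differ):

```python
import csv

def parse_stress_buffer(uri):
    # Hypothetical sketch: read each "name, amount" row into a dict,
    # skipping header or malformed rows whose amount is not numeric.
    buffer_dict = {}
    with open(uri) as csv_file:
        for row in csv.reader(csv_file):
            if len(row) < 2:
                continue
            try:
                buffer_dict[row[0]] = float(row[1])
            except ValueError:
                continue  # header or non-numeric row
    return buffer_dict
```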
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
zero_check
(h_s_c, h_s_e, habs)¶ Any criteria that have a rating of 0 mean that they are not a desired input to the assessment. We should delete the criteria’s entire subdictionary out of the dictionary.
- Input:
- habs- A dictionary which contains all resilience specific criteria
info. The key for these will be the habitat name. It will map to a subdictionary containing criteria information. The whole dictionary will look like the following:
{Habitat A:
    {'Crit_Ratings':
        {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
        },
     'Crit_Rasters':
        {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
        },
    }
}
- h_s_e- A dictionary containing all information applicable to exposure
- criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- h_s_c- A dictionary containing all information applicable to
- sensitivity criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- Output:
- Will update each of the three dictionaries by deleting any criteria where the rating aspect is 0.
Returns: warnings - A dictionary containing items which need to be acted upon by hra_core. These will be split into two categories. 'print' contains statements which will be printed using logger.warn() at the end of a run. 'unbuff' is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster.
{'print': ['This is a warning to the user.', 'This is another.'],
 'unbuff': [(HabA, Stress1), (HabC, Stress2)]
}
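The deletion step described above can be sketched as follows. This is only an illustration of the rule (drop any criterion whose 'Rating' is 0); the real function also builds and returns the warnings dictionary, which is omitted here:

```python
def zero_check(h_s_c, h_s_e, habs):
    # Remove every criterion with a rating of 0 from each
    # 'Crit_Ratings' subdictionary of the three input dictionaries.
    for dictionary in (h_s_c, h_s_e, habs):
        for subdict in dictionary.values():
            ratings = subdict.get('Crit_Ratings', {})
            for crit_name in list(ratings):
                if ratings[crit_name].get('Rating') == 0:
                    del ratings[crit_name]
```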
Module that contains the core computational components for the hydropower model including the water yield, water scarcity, and valuation functions
-
natcap.invest.hydropower.hydropower_water_yield.
add_dict_to_shape
(shape_uri, field_dict, field_name, key)¶ Add a new field to a shapefile with values from a dictionary. The dictionary's keys should match the values of a unique field in the shapefile.
- shape_uri - a URI path to a ogr datasource on disk with a unique field
- ‘key’. The field ‘key’ should have values that correspond to the keys of ‘field_dict’
- field_dict - a python dictionary with keys mapping to values. These
- values will be what is filled in for the new field
field_name - a string for the name of the new field to add
- key - a string for the field name in ‘shape_uri’ that represents
- the unique features
returns - nothing
-
natcap.invest.hydropower.hydropower_water_yield.
compute_rsupply_volume
(watershed_results_uri)¶ Calculate the total realized water supply volume and the mean realized water supply volume per hectare for the given watersheds. Output units are cubic meters and cubic meters per hectare, respectively.
- watershed_results_uri - a URI path to an OGR shapefile to get water yield
- values from
returns - Nothing
-
natcap.invest.hydropower.hydropower_water_yield.
compute_water_yield_volume
(shape_uri, pixel_area)¶ Calculate the water yield volume per sub-watershed or watershed. Add results to shape_uri, units are cubic meters
- shape_uri - a URI path to an ogr datasource for the sub-watershed
- or watershed shapefile. This shapefile's features should have a 'wyield_mn' attribute, from which calculations are derived
- pixel_area - the area in meters squared of a pixel from the wyield
- raster.
returns - Nothing
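The unit conversion behind this calculation can be sketched as below. This is an assumption about the arithmetic based on the documented inputs (mean yield in mm, pixel area in m²), not the model's exact code:

```python
def water_yield_volume(wyield_mn_mm, pixel_area_m2, pixel_count):
    # Convert the mean yield from millimetres to metres (divide by 1000),
    # then multiply by the watershed area (pixel area times pixel count)
    # to get a volume in cubic metres.
    return (wyield_mn_mm / 1000.0) * pixel_area_m2 * pixel_count
```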
-
natcap.invest.hydropower.hydropower_water_yield.
compute_watershed_valuation
(watersheds_uri, val_dict)¶ Computes and adds the net present value and energy for the watersheds to an output shapefile.
- watersheds_uri - a URI path to an OGR shapefile for the
- watershed results, where the results will be added.
- val_dict - a python dictionary that has all the valuation parameters for
- each watershed
returns - Nothing
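The net present value calculation follows the standard discounting pattern; the sketch below is illustrative only, with hypothetical parameter names, and omits the energy-production step that precedes it in the model:

```python
def net_present_value(annual_revenue, annual_cost, discount_rate, time_span):
    # Discount each year's net revenue back to the present and sum.
    return sum(
        (annual_revenue - annual_cost) / (1.0 + discount_rate) ** year
        for year in range(time_span))
```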
-
natcap.invest.hydropower.hydropower_water_yield.
execute
(args)¶ Annual Water Yield: Reservoir Hydropower Production.
Executes the hydropower/water_yield model
Parameters: - args['workspace_dir'] (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- args['lulc_uri'] (string) – a uri to a land use/land cover raster whose LULC indexes correspond to indexes in the biophysical table input. Used for determining soil retention and other biophysical properties of the landscape. (required)
- args['depth_to_root_rest_layer_uri'] (string) – a uri to an input raster describing the depth of “good” soil before reaching this restrictive layer (required)
- args['precipitation_uri'] (string) – a uri to an input raster describing the average annual precipitation value for each cell (mm) (required)
- args['pawc_uri'] (string) – a uri to an input raster describing the plant available water content value for each cell. Plant Available Water Content fraction (PAWC) is the fraction of water that can be stored in the soil profile that is available for plants’ use. PAWC is a fraction from 0 to 1 (required)
- args['eto_uri'] (string) – a uri to an input raster describing the annual average evapotranspiration value for each cell. Potential evapotranspiration is the potential loss of water from soil by both evaporation from the soil and transpiration by healthy Alfalfa (or grass) if sufficient water is available (mm) (required)
- args['watersheds_uri'] (string) – a uri to an input shapefile of the watersheds of interest as polygons. (required)
- args['sub_watersheds_uri'] (string) – a uri to an input shapefile of
the subwatersheds of interest that are contained in the
args['watersheds_uri']
shape provided as input. (optional) - args['biophysical_table_uri'] (string) – a uri to an input CSV table of land use/land cover classes, containing data on biophysical coefficients such as root_depth (mm) and Kc, which are required. A column with header LULC_veg is also required, with values of 1 or 0: 1 indicates a vegetated land cover type, 0 indicates non-vegetation, wetland, or water. NOTE: these data are attributes of each LULC class rather than attributes of individual cells in the raster map (required)
- args['seasonality_constant'] (float) – floating point value between 1 and 10 corresponding to the seasonal distribution of precipitation (required)
- args['results_suffix'] (string) – a string that will be concatenated onto the end of file names (optional)
- args['demand_table_uri'] (string) – a uri to an input CSV table of LULC classes, showing consumptive water use for each landuse / land-cover type (cubic meters per year) (required for water scarcity)
- args['valuation_table_uri'] (string) – a uri to an input CSV table of hydropower stations with the following fields (required for valuation): (‘ws_id’, ‘time_span’, ‘discount’, ‘efficiency’, ‘fraction’, ‘cost’, ‘height’, ‘kw_price’)
Returns: None
-
natcap.invest.hydropower.hydropower_water_yield.
filter_dictionary
(dict_data, values)¶ Create a subset of a dictionary given keys found in a list.
- The incoming dictionary should have keys that point to dictionaries.
- Create a subset of that dictionary by using the same outer keys but only using the inner key:val pair if that inner key is found in the values list.
Parameters: - dict_data (dictionary) – a dictionary whose keys point to dictionaries.
- values (list) – a list of keys to keep from the inner dictionaries of 'dict_data'
Returns: a dictionary
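The documented behaviour maps directly onto a nested dictionary comprehension; a minimal sketch, assuming the semantics described above:

```python
def filter_dictionary(dict_data, values):
    # Keep the same outer keys; keep an inner key/value pair
    # only when the inner key appears in `values`.
    return {
        outer_key: {k: v for k, v in inner.items() if k in values}
        for outer_key, inner in dict_data.items()}
```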
-
natcap.invest.hydropower.hydropower_water_yield.
write_new_table
(filename, fields, data)¶ Create a new csv table from a dictionary
filename - a URI path for the new table to be written to disk
- fields - a python list of the column names. The order of the fields in
- the list will be the order in how they are written. ex: [‘id’, ‘precip’, ‘total’]
- data - a python dictionary representing the table. The dictionary
should be constructed with unique numerical keys that point to a dictionary which represents a row in the table:
data = {0: {'id': 1, 'precip': 43, 'total': 65},
        1: {'id': 2, 'precip': 65, 'total': 94}}
returns - nothing
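A sketch of this table writer using the csv module, assuming rows are written in sorted order of their numeric keys (the real function's row ordering is not documented here):

```python
import csv

def write_new_table(filename, fields, data):
    # Columns follow the order of `fields`; rows follow sorted numeric keys.
    with open(filename, 'w', newline='') as table_file:
        writer = csv.DictWriter(table_file, fieldnames=fields)
        writer.writeheader()
        for key in sorted(data):
            writer.writerow(data[key])
```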
DBF accessing helpers.
FIXME: more documentation needed
Examples
Create new table, setup structure, add records:
dbf = Dbf(filename, new=True)
dbf.addField(
    ("NAME", "C", 15),
    ("SURNAME", "C", 25),
    ("INITIALS", "C", 10),
    ("BIRTHDATE", "D"),
)
for (n, s, i, b) in (
        ("John", "Miller", "YC", (1980, 10, 11)),
        ("Andy", "Larkin", "", (1980, 4, 11)),
        ):
    rec = dbf.newRecord()
    rec["NAME"] = n
    rec["SURNAME"] = s
    rec["INITIALS"] = i
    rec["BIRTHDATE"] = b
    rec.store()
dbf.close()
Open an existing dbf, read some data:
dbf = Dbf(filename, True)
for rec in dbf:
    for fldName in dbf.fieldNames:
        print '%s: %s (%s)' % (fldName, rec[fldName], type(rec[fldName]))
dbf.close()
-
class
natcap.invest.iui.dbfpy.dbf.
Dbf
(f, readOnly=False, new=False, ignoreErrors=False)¶ Bases:
object
DBF accessor.
- FIXME:
- docs and examples needed (don't forget to tell about problems adding new fields on the fly)
- Implementation notes:
_new
field is used to indicate whether this is a new data table. addField can be used only for new tables! Once at least one record has been appended to the table, its structure can't be changed.
-
HeaderClass
¶ alias of
DbfHeader
-
INVALID_VALUE
= <INVALID>¶
-
RecordClass
¶ alias of
DbfRecord
-
__getitem__
(index)¶ Return DbfRecord instance.
-
__len__
()¶ Return number of records.
-
__setitem__
(index, record)¶ Write DbfRecord instance to the stream.
-
addField
(*defs)¶ Add field definitions.
For more information see header.DbfHeader.addField.
-
append
(record)¶ Append
record
to the database.
-
changed
¶
-
close
()¶
-
closed
¶
-
fieldDefs
¶
-
fieldNames
¶
-
flush
()¶ Flush data to the associated stream.
-
header
¶
-
ignoreErrors
¶ Error processing mode for DBF field value conversion
if set, failing field value conversion will return
INVALID_VALUE
instead of raising conversion error.
-
indexOfFieldName
(name)¶ Index of field named
name
.
-
name
¶
-
newRecord
()¶ Return a new record, which belongs to this table.
-
recordCount
¶
-
stream
¶
.DBF creation helpers.
- Note: this is a legacy interface. New code should use Dbf class
- for table creation (see examples in dbf.py)
- TODO:
- handle Memo fields.
- check length of the fields according to http://www.clicketyclick.dk/databases/xbase/format/data_types.html
-
class
natcap.invest.iui.dbfpy.dbfnew.
dbf_new
¶ Bases:
object
New .DBF creation helper.
Example Usage:
dbfn = dbf_new()
dbfn.add_field("name", 'C', 80)
dbfn.add_field("price", 'N', 10, 2)
dbfn.add_field("date", 'D', 8)
dbfn.write("tst.dbf")
Note
This module cannot handle Memo-fields, they are special.
-
FieldDefinitionClass
¶ alias of
_FieldDefinition
-
add_field
(name, typ, len, dec=0)¶ Add field definition.
Parameters: - name – field name (str object). field name must not contain ASCII NULs and its length shouldn't exceed 10 characters.
- typ – type of the field. this must be a single character from the “CNLMDT” set meaning character, numeric, logical, memo, date and date/time respectively.
- len – length of the field. this argument is used only for the character and numeric fields. all other fields have fixed length. FIXME: use None as a default for this argument?
- dec – decimal precision. used only for the numeric fields.
-
fields
¶
-
write
(filename)¶ Create empty .DBF file using current structure.
-
DBF fields definitions.
- TODO:
- make memos work
-
natcap.invest.iui.dbfpy.fields.
lookupFor
(typeCode)¶ Return field definition class for the given type code.
typeCode
must be a single character. That type should be previously registered. Use registerField to register a new field class.
Returns: Return value is a subclass of the DbfFieldDef.
-
class
natcap.invest.iui.dbfpy.fields.
DbfCharacterFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the character field.
-
decodeValue
(value)¶ Return string object.
Return value is a
value
argument with stripped right spaces.
-
defaultValue
= ''¶
-
encodeValue
(value)¶ Return raw data string encoded from a
value
.
-
typeCode
= 'C'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfFloatFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfNumericFieldDef
Definition of the float field - same as numeric.
-
typeCode
= 'F'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfLogicalFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the logical field.
-
decodeValue
(value)¶ Return True, False or -1 decoded from
value
.
-
defaultValue
= -1¶
-
encodeValue
(value)¶ Return a character from the “TF?” set.
Returns: Return value is "T" if value is True, "?" if value is -1, and "F" otherwise.
-
length
= 1¶
-
typeCode
= 'L'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfDateFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the date field.
-
decodeValue
(value)¶ Return a
datetime.date
instance decoded fromvalue
.
-
defaultValue
= datetime.date(2016, 6, 10)¶
-
encodeValue
(value)¶ Return a string-encoded value.
value
argument should be a value suitable for the utils.getDate call.Returns: Return value is a string in format “yyyymmdd”.
-
length
= 8¶
-
typeCode
= 'D'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfMemoFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the memo field.
Note: memos aren't currently completely supported.
-
decodeValue
(value)¶ Return int .dbt block number decoded from the string object.
-
defaultValue
= ' '¶
-
encodeValue
(value)¶ Return raw data string encoded from a
value
.Note: this is an internal method.
-
length
= 10¶
-
typeCode
= 'M'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfNumericFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the numeric field.
-
decodeValue
(value)¶ Return a number decoded from
value
If decimals is zero, value will be decoded as an integer; otherwise, as a float.
Returns: Return value is a int (long) or float instance.
-
defaultValue
= 0¶
-
encodeValue
(value)¶ Return string containing encoded
value
.
-
typeCode
= 'N'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfCurrencyFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the currency field.
-
decodeValue
(value)¶ Return float number decoded from
value
.
-
defaultValue
= 0.0¶
-
encodeValue
(value)¶ Return string containing encoded
value
.
-
length
= 8¶
-
typeCode
= 'Y'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfIntegerFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the integer field.
-
decodeValue
(value)¶ Return an integer number decoded from
value
.
-
defaultValue
= 0¶
-
encodeValue
(value)¶ Return string containing encoded
value
.
-
length
= 4¶
-
typeCode
= 'I'¶
-
-
class
natcap.invest.iui.dbfpy.fields.
DbfDateTimeFieldDef
(name, length=None, decimalCount=None, start=None, stop=None, ignoreErrors=False)¶ Bases:
natcap.invest.iui.dbfpy.fields.DbfFieldDef
Definition of the timestamp field.
-
JDN_GDN_DIFF
= 1721425¶
-
decodeValue
(value)¶ Return a datetime.datetime instance.
-
defaultValue
= datetime.datetime(2016, 6, 10, 0, 19, 13, 408748)¶
-
encodeValue
(value)¶ Return a string-encoded
value
.
-
length
= 8¶
-
typeCode
= 'T'¶
-
DBF header definition.
- TODO:
- handle encoding of the character fields (encoding information stored in the DBF header)
-
class
natcap.invest.iui.dbfpy.header.
DbfHeader
(fields=None, headerLength=0, recordLength=0, recordCount=0, signature=3, lastUpdate=None, ignoreErrors=False)¶ Bases:
object
Dbf header definition.
For more information about dbf header format visit http://www.clicketyclick.dk/databases/xbase/format/dbf.html#DBF_STRUCT
Examples
Create an empty dbf header and add some field definitions:
dbfh = DbfHeader()
dbfh.addField(("name", "C", 10))
dbfh.addField(("date", "D"))
dbfh.addField(DbfNumericFieldDef("price", 5, 2))
Create a dbf header with field definitions:
dbfh = DbfHeader([
    ("name", "C", 10),
    ("date", "D"),
    DbfNumericFieldDef("price", 5, 2),
])
-
__getitem__
(item)¶ Return a field definition by numeric index or name string
-
addField
(*defs)¶ Add field definition to the header.
Examples
dbfh.addField(
    ("name", "C", 20),
    dbf.DbfCharacterFieldDef("surname", 20),
    dbf.DbfDateFieldDef("birthdate"),
    ("member", "L"),
)
dbfh.addField(("price", "N", 5, 2))
dbfh.addField(dbf.DbfNumericFieldDef("origprice", 5, 2))
-
changed
¶
-
day
¶
-
fields
¶
-
classmethod
fromStream
(stream)¶ Return header object from the stream.
-
classmethod
fromString
(string)¶ Return header instance from the string object.
-
headerLength
¶
-
ignoreErrors
¶ Error processing mode for DBF field value conversion
if set, failing field value conversion will return
INVALID_VALUE
instead of raising conversion error.
-
lastUpdate
¶
-
month
¶
-
recordCount
¶
-
recordLength
¶
-
setCurrentDate
()¶ Update
self.lastUpdate
field with current date value.
-
signature
¶
-
toString
()¶ Return a 32-character string with the encoded header.
-
write
(stream)¶ Encode and write header to the stream.
-
year
¶
DBF record definition.
-
class
natcap.invest.iui.dbfpy.record.
DbfRecord
(dbf, index=None, deleted=False, data=None)¶ Bases:
object
DBF record.
Instances of this class shouldn't be created manually; use dbf.Dbf.newRecord instead.
The class implements the mapping/sequence interface, so fields can be accessed via their names or indexes (names are the preferred way to access fields).
- Hint:
- Use store method to save modified record.
Examples
Add a new record to the database:
db = Dbf(filename)
rec = db.newRecord()
rec["FIELD1"] = value1
rec["FIELD2"] = value2
rec.store()
Or the same, but modify an existing (second in this case) record:
db = Dbf(filename)
rec = db[2]
rec["FIELD1"] = value1
rec["FIELD2"] = value2
rec.store()
-
__getitem__
(key)¶ Return value by field name or field index.
-
__setitem__
(key, value)¶ Set field value by integer index of the field or string name.
-
asDict
()¶ Return a dictionary of fields.
Note
Changing the dict's values won't change the real values stored in this object.
-
asList
()¶ Return a flat list of fields.
Note
Changing the list's values won't change the real values stored in this object.
-
dbf
¶
-
delete
()¶ Mark record as deleted.
-
deleted
¶
-
fieldData
¶
-
classmethod
fromStream
(dbf, index)¶ Return a record read from the stream.
Parameters: - dbf – A Dbf.Dbf instance new record should belong to.
- index – Index of the record in the records’ container. This argument can’t be None in this call.
Return value is an instance of the current class.
-
classmethod
fromString
(dbf, string, index=None)¶ Return record read from the string object.
Parameters: - dbf – A Dbf.Dbf instance new record should belong to.
- string – A string new record should be created from.
- index – Index of the record in the container. If this argument is None, record will be appended.
Return value is an instance of the current class.
-
index
¶
-
position
¶
-
classmethod
rawFromStream
(dbf, index)¶ Return raw record contents read from the stream.
Parameters: - dbf – A Dbf.Dbf instance containing the record.
- index – Index of the record in the records’ container. This argument can’t be None in this call.
Return value is a string containing record data in DBF format.
-
store
()¶ Store current record in the DBF.
If
self.index
is None, this record will be appended to the records of the DBF this record belongs to; otherwise it replaces the existing record.
-
toString
()¶ Return string packed record values.
String utilities.
- TODO:
- allow strings in getDateTime routine;
-
class
natcap.invest.iui.dbfpy.utils.
classproperty
¶ Bases:
property
Works in the same way as a
property
, but for the classes.
-
natcap.invest.iui.dbfpy.utils.
getDate
(date=None)¶ Return datetime.date instance.
- Type of the
date
argument could be one of the following: - None:
- use current date value;
- datetime.date:
- this value will be returned;
- datetime.datetime:
- the result of the date.date() will be returned;
- string:
- assuming "%Y%m%d" or "%y%m%d" format;
- number:
- assuming it's a timestamp (returned, for example, by the time.time() call);
- sequence:
- assuming (year, month, day, ...) sequence;
Additionally, if date has a callable ticks attribute, it will be used and the result of the call will be treated as a timestamp value.
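The documented dispatch can be sketched as below. This is a partial, standalone illustration; the string and ticks-attribute cases are omitted, and the real function's error handling may differ:

```python
import datetime

def get_date(date=None):
    # Dispatch on the type of `date`, per the documented cases.
    if date is None:
        return datetime.date.today()
    if isinstance(date, datetime.datetime):
        return date.date()          # check datetime before date (subclass)
    if isinstance(date, datetime.date):
        return date
    if isinstance(date, (int, float)):
        return datetime.date.fromtimestamp(date)
    if isinstance(date, (tuple, list)):
        return datetime.date(*date[:3])  # (year, month, day, ...) sequence
    raise ValueError('unsupported date value: %r' % (date,))
```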
-
natcap.invest.iui.dbfpy.utils.
getDateTime
(value=None)¶ Return datetime.datetime instance.
- Type of the
value
argument could be one of the following: - None:
- use current date value;
- datetime.date:
- result will be converted to the datetime.datetime instance using midnight;
- datetime.datetime:
value
will be returned as is;- string:
- * CURRENTLY NOT SUPPORTED *;
- number:
- assuming it's a timestamp (returned, for example, by the time.time() call);
- sequence:
- assuming (year, month, day, ...) sequence;
Additionally, if value has a callable ticks attribute, it will be used and the result of the call will be treated as a timestamp value.
-
natcap.invest.iui.dbfpy.utils.
unzfill
(str)¶ Return a string without ASCII NULs.
This function searches for the first NUL (ASCII 0) occurrence and truncates the string at that position.
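A minimal sketch of this truncation, assuming the behaviour described above:

```python
def unzfill(s):
    # Truncate at the first ASCII NUL, if any; return unchanged otherwise.
    nul_index = s.find('\x00')
    return s if nul_index < 0 else s[:nul_index]
```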
Single entry point for all InVEST applications.
-
natcap.invest.iui.cli.
iui_dir
()¶ Return the path to the IUI folder.
-
natcap.invest.iui.cli.
list_models
()¶ List all models that have .json files defined in the iui dir.
Returns: A sorted list of model names.
-
natcap.invest.iui.cli.
load_config
()¶ Load configuration options from a config file and assume defaults if they aren’t there.
-
natcap.invest.iui.cli.
main
()¶ Single entry point for all InVEST model user interfaces.
This function provides a CLI for calling InVEST models, though it is very primitive. Apart from displaying a help message and the version, this function will also (optionally) list the known models (based on the found json filenames) and will fire up an IUI interface based on the model name provided.
-
natcap.invest.iui.cli.
print_models
()¶ Pretty-print available models.
-
natcap.invest.iui.cli.
write_console_files
(out_dir, extension)¶ Write out console files for each of the target models to the output dir.
Parameters: - out_dir – The directory in which to save the console files.
- extension – The extension of the output files (e.g. ‘bat’, ‘sh’)
Returns: Nothing. Writes files to out_dir, though.
executor module for natcap.invest.iui
-
class
natcap.invest.iui.executor.
Controller
¶ Bases:
object
The Controller class manages two Thread objects: Executor and PrintQueueChecker. Executor runs models and queues up print statements in a local printqueue list. Printqueue checks on Executor’s printqueue and fetches the next message at a specified interval.
The printqueuechecker exists to offload the work of list-related operations from the main thread, which leaves the main thread free to perform UI-related tasks.
-
add_operation
(op, args=None, uri=None, index=None)¶ Wrapper method for Executor.addOperation. Creates new executor and message checker thread instances if necessary.
Returns nothing.
-
cancel_executor
()¶ Trigger the executor’s cancel event. Returns nothing.
-
finished
()¶ Set the executor and message checker thread objects to none and set the thread_finished variable to True.
Returns nothing.
-
get_message
()¶ Check to see if the message checker thread is alive and returns the current message if so. If the message checker thread is not alive, None is returned and self.finished() is called.
-
is_finished
()¶ Returns True if the threads are finished. False if not.
-
start_executor
()¶ Starts the executor and message checker threads. Returns nothing.
-
-
class
natcap.invest.iui.executor.
Executor
¶ Bases:
threading.Thread
-
addOperation
(op, args=None, uri=None, index=None)¶
-
cancel
()¶
-
flush
()¶
-
format_time
(seconds)¶ Render the integer number of seconds in a string. Returns a string.
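One plausible rendering of an integer number of seconds as a string is sketched below; the exact format Executor.format_time uses is not documented here, so the H:MM:SS layout is an assumption:

```python
def format_time(seconds):
    # Render seconds as H:MM:SS, zero-padding minutes and seconds.
    hours, remainder = divmod(int(seconds), 3600)
    minutes, secs = divmod(remainder, 60)
    return '%d:%02d:%02d' % (hours, minutes, secs)
```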
-
getMessage
()¶
-
hasMessages
()¶
-
isCancelled
()¶
-
isThreadFailed
()¶
-
printTraceback
()¶
-
print_args
(args_dict)¶ - Write args_dict as a formatted string to the self.write() function.
- args_dict - a dictionary.
returns nothing
-
print_system_info
(function=None)¶
-
run
()¶
-
runModel
(module, args)¶
-
runValidator
(uri, args)¶
-
saveParamsToDisk
(data=None)¶
-
setThreadFailed
(state, exception=None)¶ Set the flag of whether the thread has failed. exception should be a pointer to a python Exception or a boolean.
-
write
(string)¶
-
-
exception
natcap.invest.iui.executor.
InsufficientDiskSpace
¶ Bases:
exceptions.Exception
This class is to be used if certain WindowsErrors or IOErrors are encountered.
-
class
natcap.invest.iui.executor.
PrintQueueChecker
(executor_object)¶ Bases:
threading.Thread
PrintQueueChecker is a thread class that checks on a specified executor thread object. By placing the responsibility of this operation in a separate thread, we allow the main thread to attend to more pressing UI related tasks.
-
get_message
()¶ Check to see if there is a new message available.
Returns the string message, if one is available. None if not.
-
run
()¶ Fetch messages as long as the executor is alive or has messages.
This method is reimplemented from threading.Thread and is started by calling self.start().
This function calls the executor object function getMessage(), which uses the collections.deque queue object to manage the printqueue.
The new message is only fetched from the executor if the main thread has fetched the current message from this PrintQueueChecker instance.
returns nothing.
-
-
natcap.invest.iui.executor.
locate_module
(module_list, path=None)¶ Search for and return an executable module object as long as the target module is within the pythonpath. This method recursively uses the find_module and load_module functions of the python imp module to locate the target module by its hierarchical module name.
- module_list - a python list of strings, where each element is the name
- of a contained module. For example, os.path would be represented here as [‘os’, ‘path’].
- path=None - the base path to search. If None, the pythonpath will be
- used.
returns an executable python module object if it can be found. Returns None if not.
InVEST fileio module
-
class
natcap.invest.iui.fileio.
AbstractTableHandler
(uri)¶ Bases:
object
This class provides an abstract class for specific reimplementation for each tabular filetype
-
__iter__
()¶ Reimplemented, allows the user to iterate through an instance of AbstractTableHandler without actually returning self.table. Having this function allows this class to actually be iterable.
-
get_fieldnames
(case='lower')¶ Returns a python list of the original fieldnames, true to their original case.
- case=’lower’ - a python string representing the desired status of the
- fieldnames. ‘lower’ for lower case, ‘orig’ for original case.
returns a python list of strings.
-
get_file_object
()¶ Getter function for the underlying file object. If the file object has not been retrieved, retrieve it before returning the file object.
returns a file object.
-
get_map
(key_field, value_field)¶ Returns a python dictionary mapping values contained in key_field to values contained in value_field. If duplicate keys are found, they are overwritten in the output dictionary.
This is implemented as a dictionary comprehension on top of self.get_table_list(), so there shouldn’t be a need to reimplement this for each subclass of AbstractTableHandler.
If the table list has not been retrieved, it is retrieved before generating the map.
key_field - a python string. value_field - a python string.
returns a python dictionary mapping key_fields to value_fields.
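Decoupled from the class, the dictionary comprehension described above amounts to the following sketch, where `table_rows` stands in for the list of row dictionaries that get_table_list() would return:

```python
def get_map(table_rows, key_field, value_field):
    # Later duplicates overwrite earlier ones, matching the
    # documented behaviour for duplicate keys.
    return {row[key_field]: row[value_field] for row in table_rows}
```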
-
get_table_dictionary
(key_field)¶ Returns a python dictionary mapping a key value to all values in that particular row dictionary (including the key field). If duplicate keys are found, they are overwritten in the output dictionary.
- key_field - a python string of the desired field value to be used as
- the key for the returned dictionary.
returns a python dictionary of dictionaries.
-
get_table_row
(key_field, key_value)¶ Return the first full row where the value of key_field is equivalent to key_value. Raises a KeyError if key_field does not exist.
key_field - a python string. key_value - a value of appropriate type for this field.
returns a python dictionary of the row, or None if the row does not exist.
-
set_field_mask
(regexp=None, trim=0)¶ Set a mask for the table’s self.fieldnames. Any fieldnames that match regexp will have trim number of characters stripped off the front.
regexp=None - a python string or None. If a python string, this will be a regular expression. If None, this represents no regular expression.
trim - a python int.
Returns nothing.
-
update
(uri)¶ Update the URI associated with this AbstractTableHandler object. Updating the URI also rebuilds the fieldnames and internal representation of the table.
uri - a python string target URI to be set as the new URI of this AbstractTableHandler.
Returns nothing.
-
-
class
natcap.invest.iui.fileio.
CSVHandler
(uri)¶
-
class
natcap.invest.iui.fileio.
DBFHandler
(uri)¶
-
class
natcap.invest.iui.fileio.
JSONHandler
(uri)¶ Bases:
object
-
delete
()¶
-
get_attributes
()¶
-
write_to_disk
(dict)¶
-
-
class
natcap.invest.iui.fileio.
LastRunHandler
(modelname, version=None)¶
-
class
natcap.invest.iui.fileio.
OGRHandler
(uri)¶
-
class
natcap.invest.iui.fileio.
ResourceHandler
(resource_dir)¶ Bases:
natcap.invest.iui.fileio.JSONHandler
This class handles reading a resource file from disk.
-
check
(dictionary=None)¶ Iterate through all nested key-value pairs in this resource file and print an error message if the file cannot be found. Returns nothing.
-
icon
(icon_key)¶ Fetch the URI based on the icon_key. If the key is not found, a KeyError is raised.
icon_key - a python string key to be accessed from the resources file.
Returns an absolute path to the resource.
-
-
class
natcap.invest.iui.fileio.
ResourceManager
(user_resource_dir='')¶ Bases:
object
ResourceManager reconciles overrides supplied by the user against the default values saved to the internal iui_resources resource file. It adheres to the ResourceInterface interface and will print messages to stdout when defaulting to iui’s internal resources.
-
icon
(icon_key)¶ Return the appropriate icon path based on the path returned by the user's resource file and the path returned by the default resource file. Defaults are used if the specified python string key cannot be found in the user_resources file.
icon_key - a python string key for the desired icon.
Returns a python string.
-
-
natcap.invest.iui.fileio.
find_handler
(uri)¶ Attempt to open the file provided by uri.
uri - a string URI to a table on disk.
Returns the appropriate file's Handler, or None if an appropriate handler cannot be found.
-
natcap.invest.iui.fileio.
save_model_run
(arguments, module, out_file)¶ Save an arguments list and module to a new python file that can be executed on its own.
arguments - a python dictionary of arguments.
module - the python module path in python package notation (e.g. natcap.invest.pollination.pollination).
out_file - the file to which the output file should be written. If the file exists, it will be overwritten.
This function returns nothing.
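A minimal sketch of what such a generated script could contain (hypothetical; the real function's exact output format is not documented here):

```python
import os
import pprint
import tempfile

# Sketch only: writes a standalone script that rebuilds the args dict and
# calls module.execute(args). The real save_model_run may differ.
def save_model_run(arguments, module, out_file):
    """Write a standalone python script that re-executes a model run."""
    with open(out_file, "w") as script:
        script.write("import importlib\n\n")
        script.write("args = %s\n\n" % pprint.pformat(arguments))
        script.write("model = importlib.import_module(%r)\n" % module)
        script.write("model.execute(args)\n")

out_path = os.path.join(tempfile.mkdtemp(), "rerun_pollination.py")
save_model_run({"workspace_dir": "/tmp/ws"},
               "natcap.invest.pollination.pollination", out_path)
```

The generated file can then be run directly with the python interpreter to repeat the model run.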
-
natcap.invest.iui.fileio.
save_model_run_json
(arguments, module, out_file)¶
-
natcap.invest.iui.fileio.
settings_folder
()¶ Return the file location of the user’s settings folder. This folder location is OS-dependent.
This module provides validation functionality for the IUI package. In a nutshell, this module will validate a value if given a dictionary that specifies how the value should be validated.
-
class
natcap.invest.iui.iui_validator.
CSVChecker
¶ Bases:
natcap.invest.iui.iui_validator.TableChecker
-
open
(valid_dict)¶ Attempt to open the CSV file
-
-
class
natcap.invest.iui.iui_validator.
Checker
¶ Bases:
natcap.invest.iui.registrar.Registrar
The Checker class defines a superclass for all classes that actually perform validation. Specific subclasses exist for validating specific features. These can be broken up into two separate groups based on the value of the field in the UI:
- URI-based values (such as files and folders)
- Represented by the URIChecker class and its subclasses
- Scalar values (such as strings and numbers)
- Represented by the PrimitiveChecker class and its subclasses
- There are two steps to validating a user’s input:
First, the user’s input is preprocessed by looping through a list of operations. Functions can be added to this list by calling self.add_check_function(). All functions that are added to this list must take a single argument, which is the entire validation dictionary. This is useful for guaranteeing that a given function is performed (such as opening a file and saving its reference to self.file) before any other validation happens.
Second, the user’s input is validated according to the validation dictionary in no particular order. All functions in this step must take a single argument which represents the user-defined value for this particular key.
For example, if we have the following validation dictionary:
valid_dict = {'type': 'OGR', 'value': '/tmp/example.shp', 'layers': [{layer_def ...}]}
The OGRChecker class would expect the function associated with the 'layers' key to take a list of python dictionaries.
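The two-step flow can be sketched with a toy checker (hypothetical class and method names; the real Checker subclasses are more involved):

```python
# Toy illustration of the two-step validation described above:
# step 1 runs registered preprocessing functions in order, step 2 dispatches
# the remaining valid_dict keys to check_* methods in arbitrary order.
class MiniChecker(object):
    def __init__(self):
        self.checks = []                 # step 1: preprocessing functions
        self.ignore = ["type", "value"]  # keys skipped in step 2

    def add_check_function(self, func, index=None):
        if index is None:
            self.checks.append(func)
        else:
            self.checks.insert(index, func)

    def run_checks(self, valid_dict):
        for func in self.checks:         # step 1: runs in list order
            error = func(valid_dict)
            if error:
                return error
        for key, value in valid_dict.items():  # step 2: arbitrary order
            if key in self.ignore:
                continue
            error = getattr(self, "check_" + key)(value)
            if error:
                return error
        return None

class MiniNumberChecker(MiniChecker):
    def check_lessThan(self, bound):
        if self.value >= bound:
            return "%s must be less than %s" % (self.value, bound)

checker = MiniNumberChecker()
# preprocessing: stash the user's value before attribute validation runs
checker.add_check_function(
    lambda valid_dict: setattr(checker, "value", valid_dict["value"]))
print(checker.run_checks({"type": "number", "value": 3, "lessThan": 10}))  # None
```

A failing input would instead return the error string from the matching check_* method.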
-
add_check_function
(func, index=None)¶ Add a function to the list of check functions.
func - A function. Must accept a single argument: the entire validation dictionary for this element.
index=None - an int. If provided, the function will be inserted into the check function list at this index. If no index is provided, the check function will be appended to the list of check functions.
returns nothing
-
run_checks
(valid_dict)¶ Run all checks in their appropriate order. This operation is done in two steps:
- preprocessing
In the preprocessing step, all functions in the list of check functions are executed. All functions in this list must take a single argument: the dictionary passed in as valid_dict.
- attribute validation
In this step, key-value pairs in the valid_dict dictionary are evaluated in arbitrary order unless the key of a key-value pair is present in the list self.ignore.
-
class
natcap.invest.iui.iui_validator.
DBFChecker
¶ Bases:
natcap.invest.iui.iui_validator.TableChecker
-
open
(valid_dict, read_only=True)¶ Attempt to open the DBF.
-
-
class
natcap.invest.iui.iui_validator.
FileChecker
¶ Bases:
natcap.invest.iui.iui_validator.URIChecker
This subclass of URIChecker is tweaked to validate a file on disk.
In contrast to the FolderChecker class, this class validates that a specific file exists on disk.
-
open
(valid_dict)¶ Checks to see if the file at self.uri can be opened by python.
This function can be overridden by subclasses as appropriate for the filetype.
Returns an error string if the file cannot be opened. None if otherwise.
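A minimal sketch of this open check (hypothetical; the real method may report errors differently):

```python
import tempfile

# Sketch of FileChecker.open's contract: try to open the URI, return the
# error string on failure, None on success.
def open_check(uri):
    """Return an error string if uri cannot be opened, else None."""
    try:
        with open(uri):
            pass
    except IOError as error:
        return str(error)
    return None

sample = tempfile.NamedTemporaryFile(delete=False)
sample.close()
print(open_check(sample.name))                  # None: file exists, readable
print(open_check(sample.name + ".x") is None)   # False: cannot be opened
```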
-
-
class
natcap.invest.iui.iui_validator.
FlexibleTableChecker
¶ Bases:
natcap.invest.iui.iui_validator.TableChecker
This class validates a file in a generic ‘table’ format.
Currently, this supports DBF and CSV formats.
This class is essentially a wrapper that first determines which file format we’re dealing with, and then delegates the rest of the work to the appropriate Checker class for that specific file format.
-
open
(valid_dict)¶ Attempt to open the file
-
-
class
natcap.invest.iui.iui_validator.
FolderChecker
¶ Bases:
natcap.invest.iui.iui_validator.URIChecker
This subclass of URIChecker is tweaked to validate a folder.
-
check_contents
(files)¶ Verify that the files listed in files exist. Paths in files must be relative to the Folder path that we are validating. This function strictly validates the presence of these files.
files - a list of string file URIs, where each file is relative to the Folder stored in self.uri.
Conforming with all Checker classes, this function returns a string error if one of the files does not exist or None if all required files are found.
-
check_exists
(valid_dict)¶ Verify that the file at valid_dict[‘value’] exists. Reimplemented from URIChecker class to provide more helpful, folder-oriented error message.
-
-
class
natcap.invest.iui.iui_validator.
GDALChecker
¶ Bases:
natcap.invest.iui.iui_validator.FileChecker
This class subclasses FileChecker to provide GDAL-specific validation.
-
open
(valid_dict)¶ Attempt to open the GDAL object. URI must exist. This is an overridden FileChecker.open()
Returns an error string if in error. Returns None otherwise.
-
-
class
natcap.invest.iui.iui_validator.
NumberChecker
¶ Bases:
natcap.invest.iui.iui_validator.PrimitiveChecker
-
greater_than
(b)¶
-
greater_than_equal_to
(b)¶
-
less_than
(b)¶
-
less_than_equal_to
(b)¶
-
-
class
natcap.invest.iui.iui_validator.
OGRChecker
¶ Bases:
natcap.invest.iui.iui_validator.TableChecker
-
check_layers
(layer_list)¶ Attempt to open the layer specified in self.valid.
-
open
(valid_dict)¶ Attempt to open the shapefile.
-
-
class
natcap.invest.iui.iui_validator.
PrimitiveChecker
¶ Bases:
natcap.invest.iui.iui_validator.Checker
-
check_regexp
(valid_dict)¶ Check an input regular expression contained in valid_dict.
valid_dict - a python dictionary with the following structure:
valid_dict['value'] - (required) a python string to be matched.
valid_dict['allowed_values'] - (required) a python dictionary with the following entries:
valid_dict['allowed_values']['pattern'] - (required) must match one of the following formats:
- A python string regular expression formatted according to the re module (http://docs.python.org/library/re.html)
- A python list of values to be matched. These are treated as logical or ('|' in the built regular expression). Note that the entire input pattern will be matched if you use this option. For more fine-tuned matching, use the dict described below.
- A python dict with the following entries:
- 'values' - (optional) a python list of strings that are joined by the 'join' key to create a single regular expression. If a 'values' list is not provided, it is assumed to be ['.*'], which matches all patterns.
- 'join' - (optional) the character with which to join all provided values to form a single regular expression. If the 'join' value is not provided, it defaults to '|', the operator for logical or.
- 'sub' - (optional) a string on which string substitution will be performed for all elements in the 'values' list. If this value is not provided, it defaults to '^%s$', which causes the entire string to be matched. This string uses python's standard string formatting operations (http://docs.python.org/library/stdtypes.html#string-formatting-operations) but should only use a single '%s'.
valid_dict['allowed_values']['flag'] - (optional) a python string representing one of the python re module's available regexp flags. Available values are: 'ignoreCase', 'verbose', 'debug', 'locale', 'multiline', 'dotAll'. If a different string is provided, no flags are applied to the regular expression matching.
Example valid_dicts:
# This would try to match '[a-z_]*' in 'sample_string_pattern'
valid_dict = {'value': 'sample_string_pattern', 'allowed_values': {'pattern': '[a-z_]*'}}
# This would try to match '^test$|^word$' in 'sample_list_pattern'
valid_dict = {'value': 'sample_list_pattern', 'allowed_values': {'pattern': ['test', 'word']}}
# This would try to match 'test.words' in 'sample_dict_pattern'
valid_dict = {'value': 'sample_dict_pattern', 'allowed_values': {'pattern': {'values': ['test', 'words'], 'join': '.', 'sub': '%s'}}}
This function builds a single regular expression string (if necessary) and checks to see if valid_dict[‘value’] matches that string. If not, a python string with an error message is returned. Otherwise, None is returned.
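A sketch of how the pattern formats above could be reduced to one regular-expression string (an illustration of the documented defaults, not the module's actual code):

```python
import re

# Illustrative reduction of the three documented pattern formats
# (string, list, dict) to a single regex string.
def build_pattern(pattern):
    if isinstance(pattern, str):
        return pattern                     # already a regex
    if isinstance(pattern, list):
        # whole-string match on each value, joined by logical or
        return "|".join("^%s$" % value for value in pattern)
    values = pattern.get("values", [".*"])  # defaults described above
    join_char = pattern.get("join", "|")
    sub = pattern.get("sub", "^%s$")
    return join_char.join(sub % value for value in values)

print(build_pattern(["test", "word"]))  # ^test$|^word$
print(build_pattern({"values": ["test", "words"], "join": ".", "sub": "%s"}))  # test.words
assert re.match(build_pattern(["test", "word"]), "word")
```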
-
-
class
natcap.invest.iui.iui_validator.
TableChecker
¶ Bases:
natcap.invest.iui.iui_validator.FileChecker
,natcap.invest.iui.iui_validator.ValidationAssembler
This class provides a template for validation of table-based files.
-
get_matching_fields
(field_defn)¶
-
verify_fields_exist
(field_list)¶ This is a function stub for reimplementation. field_list is a python list of strings where each string in the list is a required fieldname. List order is not validated. Returns the error string if an error is found. Returns None if no error found.
-
verify_restrictions
(restriction_list)¶
-
-
class
natcap.invest.iui.iui_validator.
URIChecker
¶ Bases:
natcap.invest.iui.iui_validator.Checker
This subclass of Checker provides functionality for URI-based inputs.
-
check_exists
(valid_dict)¶ Verify that the file at valid_dict[‘value’] exists.
-
check_permissions
(permissions)¶ Verify that the URI has the given permissions.
permissions - a string containing the characters 'r' for readable, 'w' for writeable, and/or 'x' for executable. Multiple characters may be specified, and all specified permissions will be checked. 'rwx' will check all 3 permissions; 'rx' will check only read and execute; '' will not check any permissions.
Returns a string with an error message if one is found, or else None.
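A minimal sketch of such a permission check using os.access (hypothetical; the real URIChecker's messages may differ):

```python
import os
import tempfile

# Sketch: map each permission character to its os.access flag and report
# the first missing permission as an error string.
def check_permissions(uri, permissions):
    flags = {"r": os.R_OK, "w": os.W_OK, "x": os.X_OK}
    for char in permissions:
        if not os.access(uri, flags[char]):
            return "You must have %s access to %s" % (char, uri)
    return None

workspace = tempfile.mkdtemp()
print(check_permissions(workspace, "rwx"))  # None: a fresh temp dir is rwx
```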
-
-
class
natcap.invest.iui.iui_validator.
ValidationAssembler
¶ Bases:
object
This class allows other checker classes (such as the abstract TableChecker class) to assemble sub-elements for evaluation as primitive values. In other words, if an input validation dictionary contains two fields in a table, the ValidationAssembler class provides a framework to fetch the value from the table.
-
assemble
(value, valid_dict)¶ Assembles a dictionary containing the input value and the assembled values.
-
-
class
natcap.invest.iui.iui_validator.
ValidationThread
(validate_funcs, type_checker, valid_dict)¶ Bases:
threading.Thread
This class subclasses threading.Thread to provide validation in a separate thread of control. Functionally, this allows the work of validation to be offloaded from the user interface thread, thus providing a snappier UI. Generally, this thread is created and managed by the Validator class.
-
get_error
()¶ Returns a tuple containing the error message and the error state, both being python strings. If no error message is present, None is returned.
-
run
()¶ Reimplemented from threading.Thread.run(). Performs the actual work of the thread.
-
set_error
(error, state='error')¶ Set the local variable error_msg to the input error message. This local variable is necessary to allow for another thread to be able to retrieve it from this thread object.
error - a string.
state - a python string indicating the kind of message being reported (e.g. 'error' or 'warning').
returns nothing.
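The stash-and-fetch pattern described here can be sketched as follows (hypothetical minimal class, not the real ValidationThread):

```python
import threading

# Sketch: run a check in a worker thread and stash its result on the
# thread object so another thread can retrieve it later.
class MiniValidationThread(threading.Thread):
    def __init__(self, check, value):
        threading.Thread.__init__(self)
        self.check = check
        self.value = value
        self.error_msg = None
        self.error_state = None

    def run(self):
        # executed in the worker thread; the UI thread stays responsive
        self.error_msg = self.check(self.value)
        self.error_state = "error" if self.error_msg else None

    def get_error(self):
        # called from another thread once is_alive() returns False
        return (self.error_msg, self.error_state)

thread = MiniValidationThread(
    lambda value: None if value > 0 else "must be positive", -1)
thread.start()
thread.join()
print(thread.get_error())  # ('must be positive', 'error')
```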
-
-
class
natcap.invest.iui.iui_validator.
Validator
(validator_type)¶ Bases:
natcap.invest.iui.registrar.Registrar
Validator class contains a reference to an object’s type-specific checker. It is assumed that one single iui input element will have its own validator.
Validation can be performed at will and is performed in a new thread to allow other processes (such as the UI) to proceed without interruption.
Validation is available for a number of different values: files of various types (see the FileChecker and its subclasses), strings (see the PrimitiveChecker class) and numbers (see the NumberChecker class).
element - a reference to the element in question.
-
get_error
()¶ Gets the error message returned by the validator.
Returns a tuple with (error_state, error_message). Tuple is (None, None) if no error has been found or if the validator thread has not been created.
-
init_type_checker
(validator_type)¶ Initialize the type checker based on the input validator_type.
validator_type - a string representation of the validator type.
Returns an instance of a checker class if validator_type matches an existing checker class. Returns None otherwise.
-
thread_finished
()¶ Check to see whether the validator has finished. This is done by calling the active thread’s is_alive() function.
Returns a boolean. True if the thread is alive.
-
validate
(valid_dict)¶ Validate the element. This is a two-step process: first, all functions in the Validator's validateFuncs list are executed. Then, the validator's type checker class is invoked to actually check the input against the defined restrictions.
Note that this is done in a separate thread.
returns a string if an error is found. Returns None otherwise.
-
-
natcap.invest.iui.iui_validator.
get_fields
(feature)¶ Return a dict with all fields in the given feature.
feature - an OGR feature.
Returns an assembled python dict with a mapping of fieldname -> fieldvalue
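A sketch of such a function using standard OGR Feature/FieldDefn calls (assumes the GDAL/OGR python bindings; the method names shown are standard OGR API):

```python
# Sketch of get_fields: walk the feature's field definitions and collect
# fieldname -> fieldvalue into a plain dict.
def get_fields(feature):
    """Map fieldname -> fieldvalue for every field on an OGR feature."""
    fields = {}
    for index in range(feature.GetFieldCount()):
        field_def = feature.GetFieldDefnRef(index)
        fields[field_def.GetName()] = feature.GetField(index)
    return fields
```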
-
class
natcap.invest.iui.registrar.
DatatypeRegistrar
¶ Bases:
natcap.invest.iui.registrar.Registrar
-
eval
(mapKey, opValues)¶
-
Functions to assist with remote logging of InVEST usage.
-
class
natcap.invest.iui.usage_logger.
LoggingServer
(database_filepath)¶ Bases:
object
RPC server for logging invest runs and getting database summaries.
-
get_run_summary_db
()¶ Retrieve the raw sqlite database of runs as a binary stream.
-
log_invest_run
(data, mode)¶ Log some parameters of an InVEST run.
Metadata is saved to a new record in the sqlite database found at self.database_filepath. The mode specifies if it is a log or an exit status notification. The appropriate table name and fields will be used in that case.
Parameters: - data (dict) – a flat dictionary with data about the InVEST run where the keys of the dictionary are at least self._LOG_FIELD_NAMES
- mode (string) – one of ‘log’ or ‘exit’. If ‘log’ uses self._LOG_TABLE_NAME and parameters, while ‘exit’ logs to self._LOG_EXIT_TABLE_NAME
Returns: None
-
-
natcap.invest.iui.usage_logger.
execute
(args)¶ Function to start a remote procedure call server.
Parameters: - args['database_filepath'] (string) – local filepath to the sqlite database
- args['hostname'] (string) – network interface to bind to
- args['port'] (int) – TCP port to bind to
Returns: never
InVEST Marine Water Quality Biophysical module at the “uri” level
-
natcap.invest.marine_water_quality.marine_water_quality_biophysical.
execute
(args)¶ Marine Water Quality.
Main entry point for the InVEST 3.0 marine water quality biophysical model.
Parameters: - args['workspace_dir'] (string) – Directory to place outputs
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['aoi_poly_uri'] (string) – OGR polygon Datasource indicating region of interest to run the model. Will define the grid.
- args['land_poly_uri'] (string) – OGR polygon DataSource indicating areas where land is.
- args['pixel_size'] (float) – float indicating pixel size in meters of output grid.
- args['layer_depth'] (float) – float indicating the depth of the grid cells in meters.
- args['source_points_uri'] (string) – OGR point Datasource indicating point sources of pollution.
- args['source_point_data_uri'] (string) – csv file indicating the biophysical properties of the point sources.
- args['kps'] (float) – float indicating decay rate of pollutant (kg/day)
- args['tide_e_points_uri'] (string) – OGR point Datasource with spatial information about the E parameter
- args['adv_uv_points_uri'] (string) – optional OGR point Datasource with spatial advection u and v vectors.
Returns: nothing
-
natcap.invest.marine_water_quality.marine_water_quality_core.
diffusion_advection_solver
(source_point_data, kps, in_water_array, tide_e_array, adv_u_array, adv_v_array, nodata, cell_size, layer_depth)¶ 2D water quality model to track a pollutant in the ocean. The three input arrays must be of the same shape. Returns the solution in an array of the same shape.
source_point_data - a dictionary of the form:
{source_point_id_0: {'point': [row_point, col_point] (in gridspace), 'WPS': float (loading?), ...}, source_point_id_1: ...}
kps - absorption rate for the source point pollutants.
in_water_array - 2D numpy array of booleans where False is a land pixel and True is a water pixel.
tide_e_array - 2D numpy array with tidal E values or nodata values; must be the same shape as in_water_array (m^2/sec).
adv_u_array, adv_v_array - the u and v components of advection; must be the same shape as in_water_array (units?).
nodata - the value in the input arrays that indicates a nodata value.
cell_size - the length of the side of a cell in meters.
layer_depth - float indicating the depth of the grid cells in meters.
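As a rough illustration of this kind of solver (a 1D diffusion-plus-decay analogue with invented parameter values, not the model's actual 2D advection scheme), the steady-state equation E * d2s/dx2 - K * s = -load discretizes into a sparse linear system:

```python
import numpy as np
import scipy.sparse
import scipy.sparse.linalg

# Hypothetical 1D illustration: steady-state E * d2s/dx2 - K * s = -load,
# discretized on a regular grid and solved as a sparse system A s = b.
n, dx = 50, 10.0          # cell count and cell size (m); illustrative
E, K = 100.0, 0.01        # diffusion (m^2/s) and decay rate; illustrative
main = np.full(n, -2.0 * E / dx ** 2 - K)   # diagonal of the operator
off = np.full(n - 1, E / dx ** 2)           # neighbor coupling terms
A = scipy.sparse.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.zeros(n)
b[n // 2] = -1.0          # a single point-source load in the middle cell
s = scipy.sparse.linalg.spsolve(A, b)       # concentration field
```

The concentration peaks at the source cell and decays with distance, which is the qualitative behavior the 2D solver produces on the water grid.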
Module for the execution of the biophysical component of the InVEST Nutrient Deposition model.
-
natcap.invest.ndr.ndr.
add_fields_to_shapefile
(key_field, field_summaries, output_layer, field_header_order=None)¶ Adds fields and their values indexed by key fields to an OGR layer open for writing.
key_field - name of the key field in the output_layer that uniquely identifies each polygon.
field_summaries - a dictionary indexed by the desired field name to place in the polygon, which indexes to another dictionary indexed by key_field value to map to that particular polygon, e.g. {'field_name_1': {key_val1: value, key_val2: value}, 'field_name_2': {key_val1: value, ...}}.
output_layer - an open writable OGR layer.
field_header_order - a list of field headers in the order we wish them to appear in the output table; if None, then the arbitrary key order in field_summaries is used.
returns nothing
-
natcap.invest.ndr.ndr.
execute
(args)¶ Nutrient Delivery Ratio.
Parameters: - args['workspace_dir'] (string) – path to current workspace
- args['dem_uri'] (string) – path to digital elevation map raster
- args['lulc_uri'] (string) – a path to landcover map raster
- args['runoff_proxy_uri'] (string) – a path to a runoff proxy raster
- args['watersheds_uri'] (string) – path to the watershed shapefile
- args['biophysical_table_uri'] (string) –
path to csv table on disk containing nutrient retention values.
For each nutrient type [t] in args[‘calc_[t]’] that is true, must contain the following headers:
‘load_[t]’, ‘eff_[t]’, ‘crit_len_[t]’
If args[‘calc_n’] is True, must also contain the header ‘proportion_subsurface_n’ field.
- args['calc_p'] (boolean) – if True, phosphorus is modeled; additionally, if True then the biophysical table must have p fields in it
- args['calc_n'] (boolean) – if True nitrogen will be modeled, additionally biophysical table must have n fields in them.
- args['results_suffix'] (string) – (optional) a text field to append to all output files
- args['threshold_flow_accumulation'] – a number representing the flow accumulation in terms of upstream pixels.
- args['_prepare'] – (optional) The preprocessed set of data created by the ndr._prepare call. This argument could be used in cases where the call to this function is scripted and can save a significant amount of DEM processing runtime.
Returns: None
-
natcap.invest.nearshore_wave_and_erosion.CPf_SignalSmooth.
smooth
(x, window_len=11, window='hanning')¶ Smooth the data using a window of the requested size.
This method is based on the convolution of a scaled window with the signal. The signal is prepared by introducing reflected copies of the signal (with the window size) at both ends so that transient parts are minimized at the beginning and end of the output signal.
input:
x: the input signal
window_len: the dimension of the smoothing window; should be an odd integer
window: the type of window from 'flat', 'hanning', 'hamming', 'bartlett', 'blackman'. A flat window will produce a moving average smoothing.
output:
the smoothed signal
example:
t = linspace(-2, 2, 0.1)
x = sin(t) + randn(len(t)) * 0.1
y = smooth(x)
see also:
numpy.hanning, numpy.hamming, numpy.bartlett, numpy.blackman, numpy.convolve, scipy.signal.lfilter
TODO: the window parameter could be the window itself if an array instead of a string
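A runnable version of the routine described above, following the classic SciPy-cookbook smoothing recipe that this docstring mirrors (a sketch; the module's own implementation may differ in details):

```python
import numpy as np

# Cookbook-style smoothing: reflect-pad the signal, then convolve with a
# normalized window of the requested type.
def smooth(x, window_len=11, window="hanning"):
    if x.size < window_len:
        raise ValueError("Input vector needs to be bigger than window size.")
    if window_len < 3:
        return x
    if window == "flat":  # a flat window gives a moving average
        w = np.ones(window_len)
    else:
        w = getattr(np, window)(window_len)  # e.g. np.hanning(11)
    # pad both ends with reflected copies to reduce edge transients
    s = np.r_[x[window_len - 1:0:-1], x, x[-2:-window_len - 1:-1]]
    return np.convolve(w / w.sum(), s, mode="valid")

t = np.linspace(-2, 2, 41)
x = np.sin(t) + np.random.randn(len(t)) * 0.1
y = smooth(x)  # output is len(x) + window_len - 1 samples long
```

Note that because of the padding, the output is longer than the input by window_len - 1 samples.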
Invest overlap analysis filehandler for data passed in through UI
-
natcap.invest.overlap_analysis.overlap_analysis.
create_hubs_raster
(hubs_shape_uri, decay, aoi_raster_uri, hubs_out_uri)¶ This will create a rasterized version of the hubs shapefile where each pixel on the raster will be set according to the decay function from the point values themselves. We will rasterize the shapefile so that all land is 0, and nodata is the distance from the closest point.
Input:
hubs_shape_uri - Open point shapefile containing the hub locations as points.
decay - Double representing the rate at which the hub importance depreciates relative to the distance from the location.
aoi_raster_uri - The URI to the area of interest raster on which we want to base our new hubs raster.
hubs_out_uri - The URI location at which the new hubs raster should be placed.
Output:
This creates a raster within hubs_out_uri whose data will be a function of the decay around points provided from the hubs shape.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.
create_unweighted_raster
(output_dir, aoi_raster_uri, raster_files_uri)¶ This will create the set of unweighted rasters- both the AOI and individual rasterizations of the activity layers. These will all be combined to output a final raster displaying unweighted activity frequency within the area of interest.
Input:
output_dir - This is the directory in which the final frequency raster will be placed. That file will be named 'hu_freq.tif'.
aoi_raster_uri - The uri to the rasterized version of the AOI file passed in with args['zone_layer_file']. We will use this within the combination function to determine where to place nodata values.
raster_files_uri - The uris to the rasterized versions of the files passed in through args['over_layer_dict']. Each raster file shows the presence or absence of the activity that it represents.
Output:
A raster file named ['workspace_dir']/output/hu_freq.tif. This depicts the unweighted frequency of activity within a gridded area or management zone.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.
create_weighted_raster
(out_dir, intermediate_dir, aoi_raster_uri, inter_weights_dict, layers_dict, intra_name, do_inter, do_intra, do_hubs, hubs_raster_uri, raster_uris, raster_names)¶ This function will create an output raster that takes into account both inter-activity weighting and intra-activity weighting. This will produce a map that looks both at where activities are occurring, and how much people value those activities and areas.
Input:
out_dir - This is the directory into which our completed raster file should be placed when completed.
intermediate_dir - The directory in which the weighted raster files can be stored.
inter_weights_dict - The dictionary that holds the mappings from layer names to the inter-activity weights passed in by CSV. The dictionary key is the string name of each shapefile, minus the .shp extension. This ID maps to a double representing the inter-activity weight of each activity layer.
layers_dict - This dictionary contains all the activity layers that are included in the particular model run. This maps the name of the shapefile (excluding the .shp extension) to the open datasource itself.
intra_name - A string which represents the desired field name in our shapefiles. This field should contain the intra-activity weight for that particular shape.
do_inter - A boolean that indicates whether inter-activity weighting is desired.
do_intra - A boolean that indicates whether intra-activity weighting is desired.
aoi_raster_uri - The uri to the dataset for our Area Of Interest. This will be the base map for all following datasets.
raster_uris - A list of uris to the open unweighted raster files created by make_indiv_rasters that begins with the AOI raster. This will be used when intra-activity weighting is not desired.
raster_names - A list of file names that goes along with the unweighted raster files. These strings can be used as keys to the other ID-based dictionaries, and will be in the same order as the 'raster_files' list.
Output:
weighted_raster - A raster file output that takes into account both inter-activity weights and intra-activity weights.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.
execute
(args)¶ Overlap Analysis.
This function will take care of preparing files passed into the overlap analysis model. It will handle all files/inputs associated with calculations and manipulations. It may write log, warning, or error messages to stdout.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_uri'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['grid_size'] (int) – This is an int specifying how large the gridded squares over the shapefile should be. (required)
- args['overlap_data_dir_uri'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
- args['do_inter'] (bool) – Boolean that indicates whether or not inter-activity weighting is desired. This will decide if the overlap table will be created. (required)
- args['do_intra'] (bool) – Boolean which indicates whether or not intra-activity weighting is desired. This will pull attributes from shapefiles passed in in 'zone_layer_uri'. (required)
- args['do_hubs'] (bool) – Boolean which indicates if human use hubs are desired. (required)
- args['overlap_layer_tbl'] (string) – URI to a CSV file that holds relational data and identifier data for all layers being passed in within the overlap analysis directory. (optional)
- args['intra_name'] (string) – string which corresponds to a field within the layers being passed in within overlap analysis directory. This is the intra-activity importance for each activity. (optional)
- args['hubs_uri'] (string) – The location of the shapefile containing points for human use hub calculations. (optional)
- args['decay_amt'] (float) – A double representing the decay rate of value from the human use hubs. (optional)
Returns: None
-
natcap.invest.overlap_analysis.overlap_analysis.
format_over_table
(over_tbl)¶ Each row of this CSV file begins with a string which can be used to uniquely identify a .shp file, to which the values in that row correspond. This string, therefore, should be used as the key for the overlap analysis dictionary, so that we can get all corresponding values for a shapefile at once by knowing its name.
Input:
over_tbl - A CSV that contains a list of each interest shapefile, and the inter-activity weights corresponding to those layers.
Returns:
over_dict - The analysis layer dictionary that maps the unique name of each layer to the optional parameter of inter-activity weight. For each entry, the key will be the string name of the layer that it represents, and the value will be the inter-activity weight for that layer.
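The described mapping can be sketched as a small CSV parse (the column names and layer names here are invented for illustration; the real table's headers may differ):

```python
import csv
import io

# Hypothetical sketch of the layer-name -> inter-activity-weight mapping
# that format_over_table is described as building.
over_tbl = io.StringIO(
    u"LIST OF FILES,Inter-Activity Weight\n"
    u"FishingComm,2\n"
    u"Kayaking,1\n")
reader = csv.reader(over_tbl)
next(reader)  # skip the header row
over_dict = {name: float(weight) for name, weight in reader}
print(over_dict)  # {'FishingComm': 2.0, 'Kayaking': 1.0}
```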
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_rasters
(out_dir, overlap_shape_uris, aoi_raster_uri)¶ This will pluck each of the files out of the dictionary and create a new raster file out of them. The new file will be named the same as the original shapefile, but with a .tif extension, and will be placed in the intermediate directory that is being passed in as a parameter.
Input:
out_dir - This is the directory into which our completed raster files should be placed when completed.
overlap_shape_uris - This is a dictionary containing all of the open shapefiles which need to be rasterized. The key for this dictionary is the name of the file itself, minus the .shp extension. This key maps to the open shapefile of that name.
aoi_raster_uri - The dataset for our AOI. This will be the base map for all following datasets.
Returns:
raster_files - This is a list of the datasets that we want to sum. The first will ALWAYS be the AOI dataset, and the rest will be the variable number of other datasets that we want to sum.
raster_names - This is a list of layer names that corresponds to the files in 'raster_files'. The first layer is guaranteed to be the AOI, but all names after that will be in the same order as the files so that it can be used for indexing later.
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_weight_rasters
(input_dir, aoi_raster_uri, layers_dict, intra_name)¶ This is a helper function for create_weighted_raster, which abstracts some of the work of getting the intra-activity weights per pixel into a separate function. This function takes in a list of the activity layers and, using aoi_raster_uri as the base for the transformation, rasterizes the shapefile layers into rasters where the burn value is based on a per-pixel intra-activity weight (specified in each polygon on the layer). It returns a tuple of two lists: the first is a list of the rasterized shapefiles, starting with the AOI; the second is a list of the shapefile names (minus the extension) in the same order as they were added to the first list. This will be used to reference the dictionaries containing the rest of the weighting information for the final weighted raster calculation.
- Input:
  - input_dir: The directory into which the weighted rasters should be placed.
  - aoi_raster_uri: The URI to the rasterized version of the area of interest. This will be used as a basis for all following rasterizations.
  - layers_dict: A dictionary of all shapefiles to be rasterized. The key is the name of the original file, minus the file extension. The value is an open shapefile datasource.
  - intra_name: The string corresponding to the value we wish to pull out of the shapefile layer. This is an attribute of all polygons corresponding to the intra-activity weight of a given shape.
Returns:
  - weighted_raster_files: A list of raster versions of the original activity shapefiles. The first file will ALWAYS be the AOI, followed by the rasterized layers.
  - weighted_names: A list of the filenames, minus extensions, of the rasterized files in weighted_raster_files. These can be used to reference properties of the raster files that are located in other dictionaries.
This is the preparatory module for the management zone portion of overlap analysis.
-
natcap.invest.overlap_analysis.overlap_analysis_mz.
execute
(args)¶ Overlap Analysis: Management Zones.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_loc'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['overlap_data_dir_loc'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
Returns: None
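A minimal args dictionary for this entry point might look like the following sketch; all paths are hypothetical placeholders, chosen only to illustrate the three documented keys.

```python
# Hypothetical paths; the keys match the parameters documented above.
args = {
    'workspace_dir': 'mz_workspace',            # output folder (required)
    'zone_layer_loc': 'management_zones.shp',   # analysis-zone shapefile (required)
    'overlap_data_dir_loc': 'activity_layers',  # folder of activity shapefiles (required)
}

# natcap.invest.overlap_analysis.overlap_analysis_mz.execute(args)
```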
This is the core module for the management zone analysis portion of the Overlap Analysis model.
-
natcap.invest.overlap_analysis.overlap_analysis_mz_core.
execute
(args)¶ This is the core module for the management zone model, which was extracted from the overlap analysis model. It takes in a shapefile containing a series of AOIs and a folder containing activity layers, and returns a modified shapefile of AOIs, each of which has an attribute stating how many activities take place within that polygon.
- Input:
  - args['workspace_dir'] - The folder location into which we can write an Output or Intermediate folder as necessary, and where the final shapefile will be placed.
  - args['zone_layer_file'] - An open shapefile which contains our management zone polygons. Note that this should not be edited directly; a copy should be made in order to add the attribute field.
  - args['over_layer_dict'] - A dictionary which maps the name of the shapefile (excluding the .shp extension) to the open datasource itself. These files are each an activity layer that will be counted within the totals per management zone.
- Output:
  - A file named [workspace_dir]/Output/mz_frequency.shp, which is a copy of args['zone_layer_file'] with the added attribute "ACTIV_CNT" totaling the number of activities taking place in each polygon.
Returns nothing.
Core module for both overlap analysis and management zones. This function can be used by either of the secondary modules within the OA model.
-
natcap.invest.overlap_analysis.overlap_core.
get_files_dict
(folder)¶ Returns a dictionary of all .shp files in the folder.
- Input:
  - folder - The location of all layer files. Among these, there should be files with the extension .shp, which will be used for all activity calculations.
Returns: file_dict - A dictionary mapping the name of each shapefile (not including file path or extension) to its open datasource.
-
natcap.invest.overlap_analysis.overlap_core.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, includes the entire path. It uses os.listdir as a base and prepends the directory path to each entry.
- Input:
- path- The location container from which we want to gather all files.
Returns: A list of full URIs contained within ‘path’.
Pollinator service model for InVEST.
-
exception
natcap.invest.pollination.pollination.
MissingFields
¶ Bases:
exceptions.ValueError
-
natcap.invest.pollination.pollination.
build_uri
(directory, basename, suffix=[])¶ Take the input directory and basename, inserting the provided suffixes just before the file extension. Each string in the suffix list will be underscore-separated.
directory - a python string folder path
basename - a python string filename
suffix=[] - a python list of python strings to be separated by underscores and concatenated with the basename just before the extension
Returns a python string of the complete path with the correct filename.
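A sketch of the contract described above, assuming the signature shown (the shipped implementation may differ in detail):

```python
import os

def build_uri(directory, basename, suffix=None):
    """Insert underscore-separated suffixes just before the extension."""
    suffix = suffix or []
    root, ext = os.path.splitext(basename)
    for piece in suffix:
        root = root + '_' + piece
    return os.path.join(directory, root + ext)
```

For example, `build_uri('out', 'yield.tif', ['cur'])` yields the path `out/yield_cur.tif` (with the platform's separator).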
-
natcap.invest.pollination.pollination.
execute
(args)¶ Pollinator Abundance: Crop Pollination.
Execute the pollination model from the topmost, user-accessible level.
Parameters: - workspace_dir (string) – a URI to the workspace folder. Not required to exist on disk. Additional folders will be created inside of this folder. If there are any file name collisions, this model will overwrite those files.
- landuse_cur_uri (string) – a URI to a GDAL raster on disk.
- landuse_attributes_uri (string) – a URI to a CSV on disk. See the model’s documentation for details on the structure of this table.
- landuse_fut_uri (string) – (Optional) a URI to a GDAL dataset on disk. If this args dictionary entry is provided, this model will process both the current and future scenarios.
- do_valuation (boolean) – Indicates whether the model should include valuation
- half_saturation (float) – a number between 0 and 1 indicating the half-saturation constant. See the pollination documentation for more information.
- wild_pollination_proportion (float) – a number between 0 and 1 indicating the proportion of all pollinators that are wild. See the pollination documentation for more information.
- guilds_uri (string) – a URI to a CSV on disk. See the model’s documentation for details on the structure of this table.
- ag_classes (string) – (Optional) a space-separated list of land cover classes that are to be considered as agricultural. If this input is not provided, all land cover classes are considered to be agricultural.
- farms_shapefile (string) – (Optional) shapefile containing points representing data collection points on the landscape.
- results_suffix (string) – inserted into the URI of each file created by this model, right before the file extension.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'landuse_cur_uri': 'path/to/raster',
    'landuse_attributes_uri': 'path/to/csv',
    'landuse_fut_uri': 'path/to/raster',
    'do_valuation': 'example',
    'half_saturation': 'example',
    'wild_pollination_proportion': 'example',
    'guilds_uri': 'path/to/csv',
    'ag_classes': 'example',
    'farms_shapefile': 'example',
    'results_suffix': 'example',
}
The following args dictionary entries are optional, and will affect the behavior of the model if provided:
- landuse_fut_uri
- ag_classes
- results_suffix
- farms_shapefile
If args[‘do_valuation’] is set to True, the following args dictionary entries are also required:
- half_saturation
- wild_pollination_proportion
This function has no return value, though it does save a number of rasters to disk. See the user’s guide for details.
-
natcap.invest.pollination.pollination.
get_point
(raster_uri, point)¶ Get the value of the raster located at raster_uri at the given point. This operation is completed without using numpy.
raster_uri - a URI that GDAL can open
point - an OGR Feature used to extract a raster value
Returns the value at the point on the raster.
InVEST Pollination model core module
-
natcap.invest.pollination.pollination_core.
add_two_rasters
(raster_1, raster_2, out_uri)¶ Add two rasters where pixels in raster_1 are not nodata. Pixels are considered to have a nodata value iff the pixel value in raster_1 is nodata. Raster_2’s pixel value is not checked for nodata.
raster_1 - a URI to a GDAL dataset
raster_2 - a URI to a GDAL dataset
out_uri - the URI at which to save the resulting raster
Returns nothing.
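The nodata rule can be illustrated with a pure-python sketch over lists of pixel values (the real function operates on GDAL datasets on disk):

```python
def add_where_raster1_valid(pixels_1, pixels_2, nodata):
    """Output is nodata wherever pixels_1 is nodata; pixels_2 is
    never checked, exactly as described above."""
    return [nodata if p1 == nodata else p1 + p2
            for p1, p2 in zip(pixels_1, pixels_2)]
```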
-
natcap.invest.pollination.pollination_core.
calculate_abundance
(landuse, lu_attr, guild, nesting_fields, floral_fields, uris)¶ Calculate pollinator abundance on the landscape. The calculated pollinator abundance raster will be created at uris[‘species_abundance’].
landuse - a URI to a GDAL dataset of the LULC.
lu_attr - a TableHandler.
guild - a dictionary containing information about the pollinator. All entries are required:
  - 'alpha' - the typical foraging distance in m
  - 'species_weight' - the relative weight
  - resource_n - one entry for each nesting field named in nesting_fields. This value must be either 0 or 1, indicating whether the pollinator uses this nesting resource for nesting sites.
  - resource_f - one entry for each floral field named in floral_fields. This value must be between 0 and 1, representing the likelihood that this species will forage during this season.
nesting_fields - a list of string fieldnames used to extract nesting fields from the guild dictionary; fieldnames here must exist in guild.
floral_fields - a list of string fieldnames used to extract floral season fields from the guild dictionary; fieldnames here must exist in guild.
uris - a dictionary with these entries:
  - 'nesting' - a URI where the nesting raster will be saved
  - 'floral' - a URI where the floral resource raster will be saved
  - 'species_abundance' - a URI where the species abundance raster will be saved
  - 'temp' - a URI to a folder where temp files will be saved
Returns nothing.
-
natcap.invest.pollination.pollination_core.
calculate_farm_abundance
(species_abundance, ag_map, alpha, uri, temp_dir)¶ Calculate the farm abundance raster. The final farm abundance raster will be saved to uri.
species_abundance - a URI to a GDAL dataset of species abundance
ag_map - a URI to a GDAL dataset of values where ag pixels are 1 and non-ag pixels are 0
alpha - the typical foraging distance of the current pollinator
uri - the output URI for the farm_abundance raster
temp_dir - the output folder for temp files
Returns nothing.
-
natcap.invest.pollination.pollination_core.
calculate_service
(rasters, nodata, alpha, part_wild, out_uris)¶ Calculate the service raster. The finished raster will be saved to out_uris[‘service_value’].
rasters - a dictionary with these entries:
  - 'farm_value' - a GDAL dataset
  - 'farm_abundance' - a GDAL dataset
  - 'species_abundance' - a GDAL dataset
  - 'ag_map' - a GDAL dataset. Values are either nodata, 0 (if not an ag pixel) or 1 (if an ag pixel).
nodata - the nodata value for output rasters
alpha - the expected distance
part_wild - a number between 0 and 1 representing the proportion of all pollination that is done by wild pollinators
out_uris - a dictionary with these entries:
  - 'species_value' - a URI. The raster created at this URI will represent the part of the farm's value that is attributed to the current species.
  - 'species_value_blurred' - a URI. The raster created at this URI will be a copy of the species_value raster that has had an exponential convolution filter applied to it.
  - 'service_value' - a URI. The raster created at this URI will be the calculated service value raster.
  - 'temp' - a folder in which to store temp files
Returns nothing.
-
natcap.invest.pollination.pollination_core.
calculate_yield
(in_raster, out_uri, half_sat, wild_poll, out_nodata)¶ Calculate the yield raster.
in_raster - a URI to a GDAL dataset
out_uri - a URI for the output (yield) dataset
half_sat - the half-saturation constant, a python int or float
wild_poll - the proportion of crops that are pollinated by wild pollinators; an int or float from 0 to 1
out_nodata - the nodata value for the output raster
Returns nothing
-
natcap.invest.pollination.pollination_core.
divide_raster
(raster, divisor, uri)¶ Divide all non-nodata values in raster by divisor and save the output raster to uri.
raster - a URI to a GDAL dataset
divisor - the divisor (a python scalar)
uri - the URI at which to save the output raster
Returns nothing.
-
natcap.invest.pollination.pollination_core.
execute_model
(args)¶ Execute the biophysical component of the pollination model.
args - a python dictionary with at least the following entries:
  - 'landuse' - a URI to a GDAL dataset
  - 'landuse_attributes' - a fileio AbstractTableHandler object
  - 'guilds' - a fileio AbstractTableHandler object
  - 'ag_classes' - a python list of ints representing agricultural classes in the landuse map. This list may be empty to represent the fact that no landuse classes are to be designated as strictly agricultural.
  - 'nesting_fields' - a python list of string nesting fields
  - 'floral fields' - a python list of string floral fields
  - 'do_valuation' - a boolean indicating whether to do valuation
  - 'paths' - a dictionary with the following entries:
    - 'workspace' - the workspace path
    - 'intermediate' - the intermediate folder path
    - 'output' - the output folder path
    - 'temp' - a temp folder path
Additionally, the args dictionary should contain these URIs, which must all be python strings of either type str or else utf-8 encoded unicode:
  - 'ag_map' - a URI
  - 'foraging_average' - a URI
  - 'abundance_total' - a URI
  - 'farm_value_sum' - a URI (required if do_valuation == True)
  - 'service_value_sum' - a URI (required if do_valuation == True)
The args dictionary must also have a dictionary containing species-specific information:
  - 'species' - a python dictionary with a contained dictionary for each species to be considered by the model. The key to each dictionary should be the species name. For example:
    args['species']['Apis'] = { ... species_dictionary ... }
The species-specific dictionary must contain these elements:
  - 'floral' - a URI
  - 'nesting' - a URI
  - 'species_abundance' - a URI
  - 'farm_abundance' - a URI
If do_valuation == True, the following entries are also required to be in the species-specific dictionary:
  - 'farm_value' - a URI
  - 'value_abundance_ratio' - a URI
  - 'value_abundance_ratio_blurred' - a URI
  - 'service_value' - a URI
Returns nothing.
-
natcap.invest.pollination.pollination_core.
map_attribute
(base_raster, attr_table, guild_dict, resource_fields, out_uri, list_op)¶ Make an intermediate raster where values are mapped from the base raster according to the mapping specified by key_field and value_field.
base_raster - a URI to a GDAL dataset
attr_table - a subclass of fileio.AbstractTableHandler
guild_dict - a python dictionary representing the guild row for this species
resource_fields - a python list of string resource fields
out_uri - a URI for the output dataset
list_op - a python callable that takes a list of numerical arguments and returns a python scalar. Examples: sum; max
Returns nothing.
-
natcap.invest.pollination.pollination_core.
reclass_ag_raster
(landuse_uri, out_uri, ag_classes, nodata)¶ Reclassify the landuse raster into a raster demarcating the agricultural state of a given pixel. The reclassed ag raster will be saved to out_uri.
landuse_uri - a URI to a GDAL dataset; the land use/land cover raster
out_uri - the URI of the output, reclassified ag raster
ag_classes - a list of landuse classes that are agricultural. If an empty list is provided, all landcover classes are considered to be agricultural.
nodata - an int or float
Returns nothing.
Buffered file manager module.
-
class
natcap.invest.recreation.buffered_numpy_disk_map.
BufferedNumpyDiskMap
(manager_filename, max_bytes_to_buffer)¶ Bases:
object
Persistent object to append and read numpy arrays to unique keys.
This object is abstractly a key/value pair map where the operations are to append, read, and delete numpy arrays associated with those keys. The object attempts to keep data in RAM as much as possible and saves data to files on disk to manage memory and persist between instantiations.
-
append
(array_id, array_data)¶ Append data to the file.
Parameters: - array_id (int) – unique key to identify the array node
- array_data (numpy.ndarray) – data to append to node.
Returns: None
-
delete
(array_id)¶ Delete node array_id from disk and cache.
-
flush
()¶ Method to flush data in memory to disk.
-
read
(array_id)¶ Read the entirety of the file.
Internally this might mean that part of the file is read from disk and the end from the buffer or any combination of those.
Parameters: array_id (int) – unique node id to read Returns: contents of node as a numpy.ndarray.
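An in-memory analogue of the append/read/delete contract described above (a sketch only; the real class buffers numpy arrays and spills to disk):

```python
import collections

class TinyArrayMap:
    """Toy key-to-array map mirroring append/read/delete semantics."""
    def __init__(self):
        self._store = collections.defaultdict(list)

    def append(self, array_id, array_data):
        # Appended chunks accumulate under the same key.
        self._store[array_id].extend(array_data)

    def read(self, array_id):
        # Returns everything appended so far for this key.
        return list(self._store[array_id])

    def delete(self, array_id):
        self._store.pop(array_id, None)
```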
-
A hierarchical spatial index for fast culling of points in 2D space.
-
class
natcap.invest.recreation.out_of_core_quadtree.
OutOfCoreQuadTree
¶ Bases:
object
An out of core quad tree spatial indexing structure.
-
add_points
()¶ Add a list of points to the current node.
This function will split the current node if the added points exceed the maximum number of points allowed per node and the node is not already at the maximum level.
Parameters: - point_list (numpy.ndarray) – a numpy array of (data, x_coord, y_coord) tuples
- left_bound (int) – left index inclusive of points to consider under point_list
- right_bound (int) – right index non-inclusive of points to consider under point_list
Returns: None
-
build_node_shapes
()¶ Add features to an ogr.Layer to visualize quadtree segmentation.
Parameters: ogr_polygon_layer (ogr.layer) – an ogr polygon layer with fields ‘n_points’ (int) and ‘bb_box’ (string) defined. Returns: None
-
flush
()¶ Flush any cached data to disk.
-
get_intersecting_points_in_bounding_box
()¶ Get list of data that is contained by bounding_box.
This function takes in a bounding box and returns a list of (data, lat, lng) tuples that are contained in the leaf nodes that intersect that bounding box.
Parameters: bounding_box (list) – of the form [xmin, ymin, xmax, ymax] Returns: numpy.ndarray array of (data, x_coord, lng) of nodes that intersect the bounding box.
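The culling step relies on a standard axis-aligned bounding-box intersection test, which can be sketched as:

```python
def boxes_intersect(box_a, box_b):
    """True if two [xmin, ymin, xmax, ymax] boxes overlap (shared edges count)."""
    return (box_a[0] <= box_b[2] and box_b[0] <= box_a[2] and
            box_a[1] <= box_b[3] and box_b[1] <= box_a[3])
```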
-
get_intersecting_points_in_polygon
()¶ Return the points contained in shapely_prepared_polygon.
This function is a high performance test routine to return the points contained in the shapely_prepared_polygon that are stored in self's representation of a quadtree.
Parameters: shapely_polygon (ogr.DataSource) – a polygon datasource to bound against Returns: deque of (data, x_coord, y_coord) of nodes that are contained in shapely_prepared_polygon.
-
n_nodes
()¶ Return the number of nodes in the quadtree
-
n_points
()¶ Return the number of points in the quadtree.
-
next_available_blob_id
= 0¶
-
InVEST Recreation Client.
-
natcap.invest.recreation.recmodel_client.
delay_op
(last_time, time_delay, func)¶ Execute func if the current time is at least last_time + time_delay.
Parameters: - last_time (float) – last time in seconds that func was triggered
- time_delay (float) – time to wait in seconds since last_time before triggering func
- func (function) – parameterless function to invoke if current_time >= last_time + time_delay
Returns: If func was triggered, return the time which it was triggered in seconds, otherwise return last_time.
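A sketch of the described behavior, assuming time.time() based timestamps (not the shipped code):

```python
import time

def delay_op(last_time, time_delay, func):
    """Invoke func only if time_delay seconds have passed since
    last_time; return the trigger time, else the unchanged last_time."""
    current_time = time.time()
    if current_time >= last_time + time_delay:
        func()
        return current_time
    return last_time
```

This pattern is handy for rate-limiting progress logging inside a tight loop.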
-
natcap.invest.recreation.recmodel_client.
execute
(args)¶ Recreation.
Execute recreation client model on remote server.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['aoi_path'] (string) – path to AOI vector
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['start_year'] (string) – start year in form YYYY. This year is the inclusive lower bound to consider points in the PUD and regression.
- args['end_year'] (string) – end year in form YYYY. This year is the inclusive upper bound to consider points in the PUD and regression.
- args['grid_aoi'] (boolean) – if true the polygon vector in args[‘aoi_path’] should be gridded into a new vector and the recreation model should be executed on that
- args['grid_type'] (string) – optional, but must exist if args[‘grid_aoi’] is True. Is one of ‘hexagon’ or ‘square’ and indicates the style of gridding.
- args['cell_size'] (string/float) – optional, but must exist if args[‘grid_aoi’] is True. Indicates the cell size of square pixels and the width of the horizontal axis for the hexagonal cells.
- args['compute_regression'] (boolean) – if True, then process the predictor table and scenario table (if present).
- args['predictor_table_path'] (string) –
required if args[‘compute_regression’] is True. Path to a table that describes the regression predictors, their IDs and types. Must contain the fields ‘id’, ‘path’, and ‘type’ where:
- ‘id’: is a <=10 character length ID that is used to uniquely describe the predictor. It will be added to the output result shapefile attribute table which is an ESRI Shapefile, thus limited to 10 characters.
- ‘path’: an absolute or relative (to this table) path to the predictor dataset, either a vector or raster type.
- ‘type’: one of the following,
- ‘raster_mean’: mean of values in the raster under the response polygon
- ‘raster_sum’: sum of values in the raster under the response polygon
- ‘point_count’: count of the points contained in the response polygon
- ‘point_nearest_distance’: distance to the nearest point from the response polygon
- ‘line_intersect_length’: length of lines that intersect with the response polygon in projected units of AOI
- ‘polygon_area’: area of the polygon contained within response polygon in projected units of AOI
- args['scenario_predictor_table_path'] (string) – (optional) if present runs the scenario mode of the recreation model with the datasets described in the table on this path. Field headers are identical to args[‘predictor_table_path’] and ids in the table are required to be identical to the predictor list.
- args['results_suffix'] (string) – optional, if exists is appended to any output file paths.
Returns: None
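An example args dictionary for the recreation client, with every value a hypothetical placeholder (server hostname, port, years, and cell size are illustrative only):

```python
# All values are hypothetical placeholders illustrating the documented keys.
args = {
    'workspace_dir': 'rec_workspace',
    'aoi_path': 'aoi.shp',
    'hostname': 'recserver.example.org',  # hypothetical server FQDN
    'port': 54322,                        # hypothetical port
    'start_year': '2005',
    'end_year': '2014',
    'grid_aoi': True,
    'grid_type': 'hexagon',   # required because grid_aoi is True
    'cell_size': 5000.0,      # required because grid_aoi is True
    'compute_regression': False,
}

# natcap.invest.recreation.recmodel_client.execute(args)
```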
InVEST Recreation Server.
-
class
natcap.invest.recreation.recmodel_server.
RecModel
(*args, **kwargs)¶ Bases:
object
Class that manages RPCs for calculating photo user days.
-
calc_photo_user_days_in_aoi
(*args, **kwargs)¶ General purpose try/except wrapper.
-
fetch_workspace_aoi
(*args, **kwargs)¶ General purpose try/except wrapper.
-
get_valid_year_range
()¶ Return the min and max queryable years.
Returns: (min_year, max_year)
-
get_version
()¶ Return the rec model server version.
This string can be used to uniquely identify the PUD database and algorithm for publication in terms of reproducibility.
-
-
natcap.invest.recreation.recmodel_server.
build_quadtree_shape
(quad_tree_shapefile_path, quadtree, spatial_reference)¶ Generate a vector of the quadtree geometry.
Parameters: - quad_tree_shapefile_path (string) – path to save the vector
- quadtree (out_of_core_quadtree.OutOfCoreQuadTree) – quadtree data structure
- spatial_reference (osr.SpatialReference) – spatial reference for the output vector
Returns: None
-
natcap.invest.recreation.recmodel_server.
construct_userday_quadtree
(initial_bounding_box, raw_photo_csv_table, cache_dir, max_points_per_node)¶ Construct a spatial quadtree for fast querying of userday points.
Parameters: - initial_bounding_box (list of int) –
- () (raw_photo_csv_table) –
- cache_dir (string) – path to a directory that can be used to cache the quadtree files on disk
- max_points_per_node (int) – maximum number of points to allow per node of the quadree. A larger amount will cause the quadtree to subdivide.
Returns: None
-
natcap.invest.recreation.recmodel_server.
execute
(args)¶ Launch recreation server and parse/generate quadtree if necessary.
A call to this function registers a Pyro RPC RecModel entry point given the configuration input parameters described below.
There are many methods to launch a server, including at a Linux command line as shown:
nohup python -u -c "import natcap.invest.recreation.recmodel_server;
args = {'hostname': '$LOCALIP',
        'port': $REC_SERVER_PORT,
        'raw_csv_point_data_path': '$POINT_DATA_PATH',
        'max_year': $MAX_YEAR,
        'min_year': $MIN_YEAR,
        'cache_workspace': '$CACHE_WORKSPACE_PATH'};
natcap.invest.recreation.recmodel_server.execute(args)"
Parameters: - args['raw_csv_point_data_path'] (string) – path to a csv file of the format
- args['hostname'] (string) – hostname to host Pyro server.
- args['port'] (int/or string representation of int) – port number to host Pyro entry point.
- args['max_year'] (int) – maximum year allowed to be queried by user
- args['min_year'] (int) – minimum valid year allowed to be queried by user
Returns: Never returns
InVEST recreation workspace fetcher.
-
natcap.invest.recreation.recmodel_workspace_fetcher.
execute
(args)¶ Fetch workspace from remote server.
After the call a .zip file exists at args[‘workspace_dir’] named args[‘workspace_id’] + ‘.zip’ and contains the zipped workspace of that model run.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['workspace_id'] (string) – workspace identifier
Returns: None
Utilities for creating simple HTML report files.
-
class
natcap.invest.reporting.html.
Element
(tag, content='', end_tag=True, **attrs)¶ Bases:
object
Represents a generic HTML element.
Any Element object can be passed to HTMLDocument.add()
Example
doc = html.HTMLDocument(...)
details_elem = doc.add(html.Element('details'))
details_elem.add(
    html.Element('img', src='images/my_pic.png', end_tag=False))
-
add
(elem)¶ Add a child element (which is returned for convenience).
-
html
()¶ Returns an HTML string for this element (and its children).
-
-
class
natcap.invest.reporting.html.
HTMLDocument
(uri, title, header)¶ Bases:
object
Utility class for creating simple HTML files.
- Example usage:
# Create the document object.
doc = html.HTMLDocument('myfile.html', 'My Page', 'A Page About Me')

# Add some text.
doc.write_header('My Early Life')
doc.write_paragraph('I lived in a small barn.')

# Add a table.
table = doc.add(html.Table())
table.add_row(['Age', 'Weight'], is_header=True)
table.add_row(['1 year', '20 pounds'])
table.add_row(['2 years', '40 pounds'])

# Add an arbitrary HTML element.
# Note that the HTML 'img' element doesn't have an end tag.
doc.add(html.Element('img', src='images/my_pic.png', end_tag=False))

# Create the file.
doc.flush()
-
add
(elem)¶ Add an arbitrary element to the body of the document.
elem - any object that has a method html() to output HTML markup
Return the added element for convenience.
-
flush
()¶ Create a file with the contents of this document.
-
insert_table_of_contents
(max_header_level=2)¶ Insert an auto-generated table of contents.
The table of contents is based on the headers in the document.
-
write_header
(text, level=2)¶ Convenience method to write a header.
-
write_paragraph
(text)¶ Convenience method to write a paragraph.
-
class
natcap.invest.reporting.html.
Table
(**attr)¶ Bases:
object
Represents and renders HTML tables.
-
add_row
(cells, is_header=False, cell_attr=None, do_formatting=True)¶ Writes a table row with the given cell data.
- cell_attr - attributes for each cell. If provided, it must be the same length as cells. Each entry should be a dictionary mapping attribute key to value.
-
add_two_level_header
(outer_headers, inner_headers, row_id_header)¶ Adds a two level header to the table.
In this header, each outer header appears on the top row, and each inner header appears once beneath each outer header.
For example, the following code:
table.add_two_level_header(
    outer_headers=['Weight', 'Value'],
    inner_headers=['Mean', 'Standard deviation'],
    row_id_header='Farm ID')
produces the following header:
                 Weight                        Value
Farm ID   Mean   Standard deviation     Mean   Standard deviation
-
html
()¶ Return the HTML string for the table.
-
-
natcap.invest.reporting.html.
cell_format
(data)¶ Formats the data to put in a table cell.
A helper module for generating HTML tables that are represented as strings.
-
natcap.invest.reporting.table_generator.
add_checkbox_column
(col_list, row_list, checkbox_pos=1)¶ Insert a new column into the list of column dictionaries so that it becomes the second column dictionary found in the list. Also add the checkbox column header to the list of row dictionaries, along with each row's checkbox value.
- col_list - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
  - 'name' - a string for the column name (required)
  - 'total' - a boolean for whether the column should be totaled (required)
- row_list - a list of dictionaries that represent the rows. Each dictionary's keys should match the column names found in col_list (required). Example: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
- checkbox_pos - an integer for the position of the checkbox column. Defaults to 1 (optional).
- returns - a tuple of the updated column and row lists of dictionaries, in that order
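A sketch of the described transformation; the column name 'Select' and the checkbox markup are assumptions for illustration, not the library's actual output:

```python
def add_checkbox_column(col_list, row_list, checkbox_pos=1):
    """Insert a checkbox column dict at checkbox_pos and give every
    row a matching checkbox cell (hypothetical header name/markup)."""
    new_cols = list(col_list)
    new_cols.insert(checkbox_pos, {'name': 'Select', 'total': False})
    new_rows = []
    for row in row_list:
        row = dict(row)
        row['Select'] = '<input type="checkbox">'
        new_rows.append(row)
    return new_cols, new_rows
```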
-
natcap.invest.reporting.table_generator.
add_totals_row
(col_headers, total_list, total_name, checkbox_total, tdata_tuples)¶ Construct a totals row as an HTML string. Creates one row element whose row, and whose data cells, get a class name if the corresponding column is a totalable column.
- col_headers - a list of the column headers in order (required)
- total_list - a list of booleans corresponding to col_headers that indicates whether a column should be totaled (required)
- total_name - a string for the name of the total row, e.g. 'Total' or 'Sum' (required)
- checkbox_total - a boolean that distinguishes whether a checkbox total row or a regular total row is being added; True means a checkbox total row. This determines the row class name and the row data class name (required).
- tdata_tuples - a list of tuples where the first index is a boolean indicating whether a table data element has a class attribute, and the second index is the string value of that class, or None (required)
- return - a string representing the HTML contents of a row, to be used later in a 'tfoot' element
-
natcap.invest.reporting.table_generator.
generate_table
(table_dict, attributes=None)¶ Takes in a dictionary representation of a table and generates a string of the table in the form of HTML.
- table_dict - a dictionary with the following arguments:
  - ‘cols’ - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
    - ‘name’ - a string for the column name (required)
    - ‘total’ - a boolean for whether the column should be totaled (required)
    - ‘attr’ - a dictionary of optional tag attributes (optional). Ex: ‘attr’: {‘class’: ‘offsets’}
    - ‘td_class’ - a string to assign as a class name to the table data tags under the column. Each table data tag under the column will have a class attribute assigned the ‘td_class’ value (optional)
  - ‘rows’ - a list of dictionaries that represent the rows. Each dictionary’s keys should match the column names found in ‘cols’ (possibly empty list) (required). Example: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
  - ‘checkbox’ - a boolean value for whether there should be a checkbox column. If True, a ‘selected total’ row will be added to the bottom of the table showing the total of the selected columns (optional)
  - ‘checkbox_pos’ - an integer value for the column position in which the checkbox column should appear (optional)
  - ‘total’ - a boolean value for whether there should be a constant total row at the bottom of the table that sums the column values (optional)
- attributes - a dictionary of html table attributes. The attribute name is the key, which gets set to its value (optional). Example: {‘class’: ‘sorttable’, ‘id’: ‘parcel_table’}
- returns - a string representing an html table
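As a concrete illustration, here is a minimal table_dict following the structure documented above; the column names and row values are hypothetical, used only to show the shape of the input.

```python
# A hypothetical table_dict following the documented structure; 'Parcel' and
# 'Area' are made-up column names used only for illustration.
table_dict = {
    'cols': [
        {'name': 'Parcel', 'total': False},
        {'name': 'Area', 'total': True, 'attr': {'class': 'offsets'}},
    ],
    'rows': [
        {'Parcel': 'A', 'Area': 12.5},
        {'Parcel': 'B', 'Area': 7.25},
    ],
    'total': True,
}

# With natcap.invest installed, the call would look like:
# from natcap.invest.reporting import table_generator
# html = table_generator.generate_table(
#     table_dict, attributes={'class': 'sorttable', 'id': 'parcel_table'})
```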
natcap.invest.reporting.table_generator.get_dictionary_values_ordered(dict_list, key_name)¶ Generate a list of the values found under ‘key_name’ in each dictionary of ‘dict_list’. The order of the values in the returned list matches the order in which they are retrieved from ‘dict_list’.
- dict_list - a list of dictionaries where each dictionary has the same keys. Each dictionary should have at least one key:value pair with the key being ‘key_name’ (required)
- key_name - a string or int for the key name of interest in the dictionaries (required)
- return - a list of values from ‘key_name’ in ascending order based on ‘dict_list’ keys
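The documented behavior amounts to a per-dictionary key lookup; a minimal pure-Python sketch (not the library code itself):

```python
# Sketch of the documented behavior: collect the value stored under key_name
# from each dictionary, preserving the order of dict_list.
def get_dictionary_values_ordered(dict_list, key_name):
    return [d[key_name] for d in dict_list]

rows = [{'ws_id': 3, 'area': 10.5}, {'ws_id': 1, 'area': 4.2}]
values = get_dictionary_values_ordered(rows, 'ws_id')  # [3, 1]
```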
natcap.invest.reporting.table_generator.get_row_data(row_list, col_headers)¶ Construct the rows of a 2D list from the list of dictionaries, using col_headers to properly order the row data.
- row_list - a list of dictionaries that represent the rows. Each dictionary’s keys should match the column names found in ‘col_headers’. The rows will be ordered the same as they are found in the dictionary list (required). Example: [{‘col_name_1’: ‘9/13’, ‘col_name_3’: ’expensive’, ‘col_name_2’: ’chips’}, {‘col_name_1’: ‘3/13’, ‘col_name_2’: ’cheap’, ‘col_name_3’: ’peanuts’}, {‘col_name_1’: ‘5/12’, ‘col_name_2’: ’moderate’, ‘col_name_3’: ’mints’}]
- col_headers - a list of the names of the column headers in order. Example: [col_name_1, col_name_2, col_name_3...]
- return - a 2D list with each inner list representing a row
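A minimal pure-Python sketch of the documented behavior (not the library code):

```python
# Order each row's values by col_headers, keeping the rows in their
# original list order.
def get_row_data(row_list, col_headers):
    return [[row[col] for col in col_headers] for row in row_list]

rows = [
    {'col_name_1': '9/13', 'col_name_3': 'expensive', 'col_name_2': 'chips'},
    {'col_name_1': '3/13', 'col_name_2': 'cheap', 'col_name_3': 'peanuts'},
]
table = get_row_data(rows, ['col_name_1', 'col_name_2', 'col_name_3'])
# [['9/13', 'chips', 'expensive'], ['3/13', 'cheap', 'peanuts']]
```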
The natcap.invest.testing package defines core testing routines and functionality.
natcap.invest.reporting.add_head_element(param_args)¶ Generates a string that represents a valid element in the head section of an html file. Currently handles ‘style’ and ‘script’ elements, where both the script and style are locally embedded.
param_args - a dictionary that holds the following arguments:
  - param_args[‘format’] - a string representing the type of element to be added. Currently: ‘script’, ‘style’ (required)
  - param_args[‘data_src’] - a string URI path for the external source of the element OR a string representing the html (DO NOT include html tags; tags are automatically generated). If a URI, the file is read in as a string. (required)
  - param_args[‘input_type’] - ‘Text’ or ‘File’. Determines how the input from ‘data_src’ is handled (required)
  - ‘attributes’ - a dictionary of optional tag attributes (optional). Ex: ‘attributes’: {‘class’: ‘offsets’}
- returns - a string representation of the html head element
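A simplified sketch of the behavior described above (not the library code): wrap the given text, or the contents of the given file, in the named head tag with any optional attributes.

```python
# Sketch of the documented behavior: build a <style> or <script> head element
# from either inline text or a file on disk.
def add_head_element(param_args):
    tag = param_args['format']  # 'style' or 'script'
    if param_args['input_type'] == 'File':
        with open(param_args['data_src']) as src:
            content = src.read()
    else:  # 'Text'
        content = param_args['data_src']
    attrs = ''.join(
        ' %s="%s"' % (key, val)
        for key, val in sorted(param_args.get('attributes', {}).items()))
    return '<%s%s>%s</%s>' % (tag, attrs, content, tag)

elem = add_head_element({'format': 'style', 'input_type': 'Text',
                         'data_src': 'body {margin: 0;}'})
# '<style>body {margin: 0;}</style>'
```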
natcap.invest.reporting.add_text_element(param_args)¶ Generates a string that represents an html text block. The input string should be wrapped in proper html tags.
- param_args - a dictionary with the following arguments:
  - param_args[‘text’] - a string
- returns - a string
natcap.invest.reporting.build_table(param_args)¶ Generates a string representing a table in html format.
- param_args - a dictionary that has the parameters for building up the html table. The dictionary includes the following:
  - ‘attributes’ - a dictionary of html table attributes. The attribute name is the key, which gets set to its value (optional). Example: {‘class’: ‘sorttable’, ‘id’: ‘parcel_table’}
  - param_args[‘sortable’] - a boolean value that determines whether the table should be sortable (required)
  - param_args[‘data_type’] - a string depicting the type of input to build the table from. Either ‘shapefile’, ‘csv’, or ‘dictionary’ (required)
  - param_args[‘data’] - a URI to a csv or shapefile OR a list of dictionaries (required). If a list of dictionaries, the data should be represented in the following format: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
  - param_args[‘key’] - a string that depicts which column (csv) or field (shapefile) will be the unique key to use in extracting the data into a dictionary (required for ‘data_type’ ‘shapefile’ and ‘csv’)
  - param_args[‘columns’] - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
    - ‘name’ - a string for the column name (required)
    - ‘total’ - a boolean for whether the column should be totaled (required)
    - ‘attr’ - a dictionary of optional tag attributes (optional). Ex: ‘attr’: {‘class’: ‘offsets’}
    - ‘td_class’ - a string to assign as a class name to the table data tags under the column. Each table data tag under the column will have a class attribute assigned the ‘td_class’ value (optional)
  - param_args[‘total’] - a boolean value; if True, a constant total row will be placed at the bottom of the table that sums the columns (required)
- returns - a string that represents an html table
natcap.invest.reporting.data_dict_to_list(data_dict)¶ Abstract the inner dictionaries out of data_dict into a list, where the inner dictionaries are added to the list in the order of their sorted keys.
- data_dict - a dictionary with unique keys pointing to dictionaries. Could be empty (required)
- returns - a list of dictionaries, or an empty list if data_dict is empty
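A one-line pure-Python sketch of the documented behavior (not the library code):

```python
# Pull the inner dictionaries out in the order of their sorted keys.
def data_dict_to_list(data_dict):
    return [data_dict[key] for key in sorted(data_dict)]

inner = data_dict_to_list({2: {'ws_id': 2}, 1: {'ws_id': 1}})
# [{'ws_id': 1}, {'ws_id': 2}]
```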
natcap.invest.reporting.generate_report(args)¶ Generate an html page from the arguments given in ‘reporting_args’.
- reporting_args[title] - a string for the title of the html page (required)
- reporting_args[sortable] - a boolean value indicating whether the sorttable.js library should be added for table sorting functionality (optional)
- reporting_args[totals] - a boolean value indicating whether the totals_function.js script should be added for table totals functionality (optional)
- reporting_args[out_uri] - a URI to the output destination for the html page (required)
- reporting_args[elements] - a list of dictionaries that represent html elements to be added to the html page (required). If no elements are provided (the list is empty), a blank html page will be generated. The 3 main element types are ‘table’, ‘head’, and ‘text’. All elements share the following arguments:
  - ‘type’ - a string that depicts the type of element being added. Currently ‘table’, ‘head’, and ‘text’ are defined (required)
  - ‘section’ - a string that depicts whether the element belongs in the body or head of the html page. Values: ‘body’ | ‘head’ (required)
  A table element dictionary has at least the following additional arguments:
  - ‘attributes’ - a dictionary of html table attributes. The attribute name is the key, which gets set to its value (optional). Example: {‘class’: ‘sorttable’, ‘id’: ‘parcel_table’}
  - ‘sortable’ - a boolean value for whether the table’s columns should be sortable (required)
  - ‘checkbox’ - a boolean value for whether there should be a checkbox column. If True, a ‘selected total’ row will be added to the bottom of the table showing the total of the selected columns (optional)
  - ‘checkbox_pos’ - an integer value for the column position in which the checkbox column should appear (optional)
  - ‘data_type’ - one of the following string values: ‘shapefile’ | ‘csv’ | ‘dictionary’. Depicts the type of data structure to build the table from (required)
  - ‘data’ - either a list of dictionaries if ‘data_type’ is ‘dictionary’, or a URI to a CSV table or shapefile if ‘data_type’ is ‘shapefile’ or ‘csv’ (required). If a list of dictionaries, each dictionary should have keys that represent the columns, where each dictionary is a row (the list could be empty). How the rows are ordered is defined by their index in the list. Formatted example: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
  - ‘key’ - a string that defines which column or field should be used as the key for extracting data from a shapefile or csv table (required for ‘data_type’ = ‘shapefile’ | ‘csv’)
  - ‘columns’ - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
    - ‘name’ - a string for the column name (required)
    - ‘total’ - a boolean for whether the column should be totaled (required)
    - ‘attr’ - a dictionary of optional tag attributes (optional). Ex: ‘attr’: {‘class’: ‘offsets’}
    - ‘td_class’ - a string to assign as a class name to the table data tags under the column. Each table data tag under the column will have a class attribute assigned the ‘td_class’ value (optional)
  - ‘total’ - a boolean value for whether there should be a constant total row at the bottom of the table that sums the column values (optional)
  A head element dictionary has at least the following additional arguments:
  - ‘format’ - a string representing the type of head element being added. Currently ‘script’ (javascript) and ‘style’ (css style) are accepted (required)
  - ‘data_src’ - a URI to the location of the external file for either the ‘script’ or the ‘style’, OR a string representing the html script or style (DO NOT include the tags) (required)
  - ‘input_type’ - a string, ‘File’ or ‘Text’, that refers to how ‘data_src’ is being passed in (URI vs. string) (required)
  - ‘attributes’ - a dictionary of optional tag attributes (optional). Ex: ‘attributes’: {‘id’: ‘muni_data’}
  A text element dictionary has at least the following additional arguments:
  - ‘text’ - a string to add as a paragraph element in the html page (required)
- returns - nothing
natcap.invest.reporting.u(string)¶
natcap.invest.reporting.write_html(html_obj, out_uri)¶ Write an html file to ‘out_uri’ from the html elements represented as strings in ‘html_obj’.
- html_obj - a dictionary with two keys, ‘head’ and ‘body’, each pointing to a list of the html elements as strings (required). Example: {‘head’: [‘elem_1’, ‘elem_2’, ...], ‘body’: [‘elem_1’, ‘elem_2’, ...]}
- out_uri - a URI for the output html file
- returns - nothing
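A minimal sketch of the documented behavior (not the library code): join the ‘head’ and ‘body’ element strings into a complete html page and write it to ‘out_uri’.

```python
import os
import tempfile

# Join the 'head' and 'body' element strings into one html page and
# write it out to the given path.
def write_html(html_obj, out_uri):
    page = ('<html><head>%s</head><body>%s</body></html>'
            % (''.join(html_obj['head']), ''.join(html_obj['body'])))
    with open(out_uri, 'w') as out_file:
        out_file.write(page)

out_uri = os.path.join(tempfile.mkdtemp(), 'report.html')
write_html({'head': ['<title>Report</title>'], 'body': ['<p>hello</p>']},
           out_uri)
```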
DelineateIt entry point for exposing pygeoprocessing’s watershed delineation routine to a UI.
natcap.invest.routing.delineateit.execute(args)¶ DelineateIt: Watershed Delineation.
This ‘model’ provides an InVEST-based wrapper around the pygeoprocessing routing API for watershed delineation.
Upon successful completion, the following files are written to the output workspace:
- snapped_outlets.shp - an ESRI shapefile with the points snapped to a nearby stream.
- watersheds.shp - an ESRI shapefile of watersheds determined by the d-infinity routing algorithm.
- stream.tif - a GeoTiff representing detected streams based on the provided flow_threshold parameter. Values of 1 are streams; values of 0 are not.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- results_suffix (string) – This text will be appended to the end of output files to help separate multiple runs. (optional)
- dem_uri (string) – A GDAL-supported raster file with an elevation for each cell. Make sure the DEM is corrected by filling in sinks, and if necessary burning hydrographic features into the elevation model (recommended when unusual streams are observed.) See the ‘Working with the DEM’ section of the InVEST User’s Guide for more information. (required)
- outlet_shapefile_uri (string) – This is a vector of points around which the watersheds should be built. (required)
- flow_threshold (int) – The number of upstream cells that must flow into a cell before it is considered part of a stream, such that retention stops and the remaining export is routed to the stream. Used to define streams from the DEM. (required)
- snap_distance (int) – Pixel Distance to Snap Outlet Points (required)
Returns: None
RouteDEM entry point for exposing natcap.invest’s routing package to a UI.
natcap.invest.routing.routedem.execute(args)¶ RouteDEM: D-Infinity Routing.
This model exposes the pygeoprocessing d-infinity routing functionality in the InVEST model API.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- dem_uri (string) – A GDAL-supported raster file containing a base Digital Elevation Model to execute the routing functionality across. (required)
- pit_filled_filename (string) – The filename of the output raster with pits filled in. It will go in the project workspace. (required)
- flow_direction_filename (string) – The filename of the flow direction raster. It will go in the project workspace. (required)
- flow_accumulation_filename (string) – The filename of the flow accumulation raster. It will go in the project workspace. (required)
- threshold_flow_accumulation (int) – The number of upstream cells that must flow into a cell before it’s classified as a stream. (required)
- multiple_stream_thresholds (bool) – Set to True to calculate multiple maps. If enabled, set the stream threshold to the lowest amount, then set the upper and step size thresholds. (optional)
- threshold_flow_accumulation_upper (int) – The number of upstream pixels that must flow into a cell before it’s classified as a stream. (required)
- threshold_flow_accumulation_stepsize (int) – The number of cells to step up from lower to upper threshold range. (required)
- calculate_slope (bool) – Set to True to output a slope raster. (optional)
- slope_filename (string) – The filename of the output slope raster. This will go in the project workspace. (required)
- calculate_downstream_distance (bool) – Select to calculate a distance-to-stream raster, based on the upper threshold limit. (optional)
- downstream_distance_filename (string) – The filename of the output raster. It will go in the project workspace. (required)
Returns: None
Scenario Generator Module.
natcap.invest.scenario_generator.scenario_generator.calculate_distance_raster_uri(dataset_in_uri, dataset_out_uri)¶ Calculate the distance to the nearest non-zero cell for all zero-valued cells in the input.
Parameters: - dataset_in_uri (str) – the input mask raster. Distances are calculated from the non-zero cells in the raster.
- dataset_out_uri (str) – the output raster, where each zero-valued cell is assigned the euclidean distance to the closest non-zero pixel.
natcap.invest.scenario_generator.scenario_generator.calculate_priority(priority_table_uri)¶ Create a dictionary mapping each land-cover class to its priority weight.
Parameters: priority_table_uri (str) – path to priority csv table
Returns: priority_dict – land-cover and weights matrix
Return type: dict
natcap.invest.scenario_generator.scenario_generator.calculate_weights(array, rounding=4)¶ Create a list of priority weights by land-cover class.
Parameters: - array (np.array) – input array
- rounding (int) – number of decimal places to include
Returns: weights_list – list of priority weights
Return type: list
natcap.invest.scenario_generator.scenario_generator.execute(args)¶ Scenario Generator: Rule-Based.
Model entry-point.
Parameters: - workspace_dir (str) – path to workspace directory
- suffix (str) – string to append to output files
- landcover (str) – path to land-cover raster
- transition (str) – path to land-cover attributes table
- calculate_priorities (bool) – whether to calculate priorities
- priorities_csv_uri (str) – path to priority csv table
- calculate_proximity (bool) – whether to calculate proximity
- proximity_weight (float) – weight given to proximity
- calculate_transition (bool) – whether to specify transitions
- calculate_factors (bool) – whether to use suitability factors
- suitability_folder (str) – path to suitability folder
- suitability (str) – path to suitability factors table
- weight (float) – suitability factor weight
- factor_inclusion (int) – the rasterization method – all touched or center points
- factors_field_container (bool) – whether to use suitability factor inputs
- calculate_constraints (bool) – whether to use constraint inputs
- constraints (str) – filepath to constraints shapefile layer
- constraints_field (str) – shapefile field containing constraints field
- override_layer (bool) – whether to use override layer
- override (str) – path to override shapefile
- override_field (str) – shapefile field containing override value
- override_inclusion (int) – the rasterization method
Example Args:
args = {
    'workspace_dir': 'path/to/dir',
    'suffix': '',
    'landcover': 'path/to/raster',
    'transition': 'path/to/csv',
    'calculate_priorities': True,
    'priorities_csv_uri': 'path/to/csv',
    'calculate_proximity': True,
    'calculate_transition': True,
    'calculate_factors': True,
    'suitability_folder': 'path/to/dir',
    'suitability': 'path/to/csv',
    'weight': 0.5,
    'factor_inclusion': 0,
    'factors_field_container': True,
    'calculate_constraints': True,
    'constraints': 'path/to/shapefile',
    'constraints_field': '',
    'override_layer': True,
    'override': 'path/to/shapefile',
    'override_field': '',
    'override_inclusion': 0,
}
Added Afterwards:
d = {
    'proximity_weight': 0.3,
    'distance_field': '',
    'transition_id': 'ID',
    'percent_field': 'Percent Change',
    'area_field': 'Area Change',
    'priority_field': 'Priority',
    'proximity_field': 'Proximity',
    'suitability_id': '',
    'suitability_layer': '',
    'suitability_field': '',
}
natcap.invest.scenario_generator.scenario_generator.filter_fragments(input_uri, size, output_uri)¶ Filter fragments.
Parameters: - input_uri (str) – path to input raster
- size (float) – patch (fragment) size threshold
- output_uri (str) – path to output raster
natcap.invest.scenario_generator.scenario_generator.generate_chart_html(cover_dict, cover_names_dict, workspace_dir)¶ Create an HTML page showing statistics about land-cover change, including:
- Initial land-cover cell count
- Scenario land-cover cell count
- Land-cover percent change
- Land-cover percent total: initial, final, change
- Transition matrix
- Unconverted pixels list
Parameters: - cover_dict (dict) – land cover {‘cover_id’: [before, after]}
- cover_names_dict (dict) – land cover names {‘cover_id’: ‘cover_name’}
- workspace_dir (str) – path to workspace directory
Returns: chart_html – html chart
Return type: str
natcap.invest.scenario_generator.scenario_generator.get_geometry_type_from_uri(datasource_uri)¶ Get the geometry type from a shapefile.
Parameters: datasource_uri (str) – path to shapefile
Returns: shape_type – OGR geometry type
Return type: int
natcap.invest.scenario_generator.scenario_generator.get_transition_pairs_count_from_uri(dataset_uri_list)¶ Find transition summary statistics between lulc rasters.
Parameters: dataset_uri_list (list) – list of paths to rasters
Returns: unique_raster_values_count (dict) – count of cells with each raster value; transitions (dict) – count of cell transitions
Return type: dict
GRASS Python script examples.
class natcap.invest.scenic_quality.grass_examples.grasswrapper(dbBase='', location='', mapset='')¶

natcap.invest.scenic_quality.grass_examples.random_string(length)¶
natcap.invest.scenic_quality.los_sextante.main()¶

natcap.invest.scenic_quality.los_sextante.run_script(iface)¶ This shall be called from Script Runner.
natcap.invest.scenic_quality.scenic_quality.add_field_feature_set_uri(fs_uri, field_name, field_type)¶

natcap.invest.scenic_quality.scenic_quality.add_id_feature_set_uri(fs_uri, id_name)¶
natcap.invest.scenic_quality.scenic_quality.compute_viewshed(input_array, visibility_uri, in_structure_uri, cell_size, rows, cols, nodata, GT, I_uri, J_uri, curvature_correction, refr_coeff, args)¶ Array-based function that computes the viewshed as it is defined in ArcGIS.
natcap.invest.scenic_quality.scenic_quality.compute_viewshed_uri(in_dem_uri, out_viewshed_uri, in_structure_uri, curvature_correction, refr_coeff, args)¶ Compute the viewshed as it is defined in ArcGIS, where the inputs are:
- in_dem_uri: URI to the input surface raster
- out_viewshed_uri: URI to the output raster
- in_structure_uri: URI to a point shapefile that contains the location of the observers and the viewshed radius in (negative) meters
- curvature_correction: flag for the curvature of the earth. Either FLAT_EARTH or CURVED_EARTH. Not used yet.
- refraction: refraction index between 0 (max effect) and 1 (no effect). Default is 0.13.
natcap.invest.scenic_quality.scenic_quality.execute(args)¶ Scenic Quality.
Warning
The Scenic Quality model is under active development and is currently unstable.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- aoi_uri (string) – An OGR-supported vector file. This AOI instructs the model where to clip the input data and the extent of analysis. Users will create a polygon feature layer that defines their area of interest. The AOI must intersect the Digital Elevation Model (DEM). (required)
- cell_size (float) – Length (in meters) of each side of the (square) cell. (optional)
- structure_uri (string) – An OGR-supported vector file. The user must specify a point feature layer that indicates locations of objects that contribute to negative scenic quality, such as aquaculture netpens or wave energy facilities. In order for the viewshed analysis to run correctly, the projection of this input must be consistent with the projection of the DEM input. (required)
- dem_uri (string) – A GDAL-supported raster file. An elevation raster layer is required to conduct viewshed analysis. Elevation data allows the model to determine areas within the AOI’s land-seascape where point features contributing to negative scenic quality are visible. (required)
- refraction (float) – The earth curvature correction option corrects for the curvature of the earth and refraction of visible light in air. Changes in air density curve the light downward causing an observer to see further and the earth to appear less curved. While the magnitude of this effect varies with atmospheric conditions, a standard rule of thumb is that refraction of visible light reduces the apparent curvature of the earth by one-seventh. By default, this model corrects for the curvature of the earth and sets the refractivity coefficient to 0.13. (required)
- pop_uri (string) – A GDAL-supported raster file. A population raster layer is required to determine population within the AOI’s land-seascape where point features contributing to negative scenic quality are visible and not visible. (optional)
- overlap_uri (string) – An OGR-supported vector file. The user has the option of providing a polygon feature layer where they would like to determine the impact of objects on visual quality. This input must be a polygon and projected in meters. The model will use this layer to determine what percent of the total area of each polygon feature can see at least one of the point features impacting scenic quality. (optional)
- valuation_function (string) – Either ‘polynomial’ or ‘logarithmic’. This field indicates the functional form f(x) the model will use to value the visual impact for each viewpoint. For distances less than 1 km (x<1), the model uses a linear form g(x) where the line passes through f(1) (i.e. g(1) == f(1)) and extends to zero with the same slope as f(1) (i.e. g’(x) == f’(1)). (optional)
- a_coefficient (float) – First coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- b_coefficient (float) – Second coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- c_coefficient (float) – Third coefficient for the polynomial’s quadratic term. (required)
- d_coefficient (float) – Fourth coefficient for the polynomial’s cubic exponent. (required)
- max_valuation_radius (float) – Radius beyond which the valuation is set to zero. The valuation function ‘f’ cannot be negative at the radius ‘r’ (f(r)>=0). (required)
Returns: None
natcap.invest.scenic_quality.scenic_quality.get_count_feature_set_uri(fs_uri)¶

natcap.invest.scenic_quality.scenic_quality.get_data_type_uri(ds_uri)¶
natcap.invest.scenic_quality.scenic_quality.old_reproject_dataset_uri(original_dataset_uri, *args, **kwargs)¶ A URI wrapper for reproject_dataset that opens the original_dataset_uri before passing it to reproject_dataset.
- original_dataset_uri - a URI to a gdal Dataset on disk
All other arguments to reproject_dataset are passed in.
- return - nothing
natcap.invest.scenic_quality.scenic_quality.reclassify_quantile_dataset_uri(dataset_uri, quantile_list, dataset_out_uri, datatype_out, nodata_out)¶
natcap.invest.scenic_quality.scenic_quality.reproject_dataset_uri(original_dataset_uri, output_wkt, output_uri, output_type=<Mock>)¶ A function to reproject and resample a GDAL dataset given an output pixel size, output reference, and uri.
- original_dataset - a gdal Dataset to reproject
- pixel_spacing - output dataset pixel size in projected linear units (probably meters)
- output_wkt - output projection in Well Known Text (the result of ds.GetProjection())
- output_uri - location on disk to dump the reprojected dataset
- output_type - gdal type of the output
- return - projected dataset
natcap.invest.scenic_quality.scenic_quality.set_field_by_op_feature_set_uri(fs_uri, value_field_name, op)¶
natcap.invest.scenic_quality.scenic_quality_core.add_active_pixel(sweep_line, index, distance, visibility)¶ Add a pixel to the sweep line in O(n) using a linked list of linked_cells.
natcap.invest.scenic_quality.scenic_quality_core.add_active_pixel_fast(sweep_line, skip_nodes, distance)¶ Insert an active pixel in the sweep_line and update the skip_nodes.
- sweep_line: a linked list of linked_cells as created by the linked_cell_factory
- skip_nodes: an array of linked lists that constitutes the hierarchy of skip pointers in the skip list. Each cell is defined as ???
- distance: the value to be added to the sweep_line
Return a tuple (sweep_line, skip_nodes) with the updated sweep_line and skip_nodes.
natcap.invest.scenic_quality.scenic_quality_core.cell_angles(cell_coords, viewpoint)¶ Compute angles between cells and the viewpoint, where a 0 angle is to the right of the viewpoint.
Inputs:
- cell_coords: coordinate tuple (rows, cols) as numpy.where() from which to compute the angles
- viewpoint: tuple (row, col) indicating the position of the observer. Each of row and col is an integer.
Returns a sorted list of angles.
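A geometric sketch of the convention described above (not the library implementation): angle 0 points right of the viewpoint and angles grow counterclockwise, assuming row indices increase downward as in a raster.

```python
import math

# Angle of each cell around the viewpoint, with 0 pointing right of the
# viewpoint; row indices grow downward, so the row difference is negated.
def cell_angles(cell_coords, viewpoint):
    rows, cols = cell_coords
    vp_row, vp_col = viewpoint
    angles = [math.atan2(vp_row - row, col - vp_col) % (2 * math.pi)
              for row, col in zip(rows, cols)]
    return sorted(angles)

# Cells directly right, above, and left of a viewpoint at (5, 5):
angles = cell_angles(([5, 4, 5], [6, 5, 4]), (5, 5))  # [0, pi/2, pi]
```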
natcap.invest.scenic_quality.scenic_quality_core.cell_link_factory¶ alias of cell_link
natcap.invest.scenic_quality.scenic_quality_core.compute_viewshed(input_array, nodata, coordinates, obs_elev, tgt_elev, max_dist, cell_size, refraction_coeff, alg_version)¶ Compute the viewshed for a single observer.
Inputs:
- input_array: a numpy array of terrain elevations
- nodata: input_array’s nodata value
- coordinates: tuple (east, north) of coordinates of the viewing position
- obs_elev: observer elevation above the raster map
- tgt_elev: offset for target elevation above the ground. Applied to every point on the raster.
- max_dist: maximum visibility radius. By default infinity (-1).
- cell_size: cell size in meters (integer)
- refraction_coeff: refraction coefficient (0.0-1.0), not used yet
- alg_version: name of the algorithm to be used. Either ‘cython’ (default) or ‘python’.
Returns the visibility map for the DEM as a numpy array.
natcap.invest.scenic_quality.scenic_quality_core.execute(args)¶ Entry point for the scenic quality core computation.
Inputs:
Returns
natcap.invest.scenic_quality.scenic_quality_core.find_active_pixel(sweep_line, distance)¶ Find an active pixel based on distance. Return None if it can’t be found.
natcap.invest.scenic_quality.scenic_quality_core.find_active_pixel_fast(sweep_line, skip_nodes, distance)¶ Find an active pixel based on distance.
Inputs:
- sweep_line: a linked list of linked_cells as created by the linked_cell_factory
- skip_list: an array of linked lists that constitutes the hierarchy of skip pointers in the skip list. Each cell is defined as ???
- distance: the key used to search the sweep_line
Return the linked_cell associated with ‘distance’, or None if no such cell exists.
natcap.invest.scenic_quality.scenic_quality_core.find_pixel_before_fast(sweep_line, skip_nodes, distance)¶ Find the active pixel before the one with ‘distance’.
Inputs:
- sweep_line: a linked list of linked_cells as created by the linked_cell_factory
- skip_list: an array of linked lists that constitutes the hierarchy of skip pointers in the skip list. Each cell is defined as ???
- distance: the key used to search the sweep_line
Return a tuple (pixel, hierarchy) where:
- pixel is the linked_cell right before ‘distance’, or None if it doesn’t exist (either ‘distance’ is the first cell, or the sweep_line is empty)
- hierarchy is the list of intermediate skip nodes starting from the bottom node right above the active pixel up to the top node
natcap.invest.scenic_quality.scenic_quality_core.get_perimeter_cells(array_shape, viewpoint, max_dist=-1)¶ Compute the cells along the perimeter of an array.
Inputs:
- array_shape: tuple (row, col) as ndarray.shape containing the size of the array from which to compute the perimeter
- viewpoint: tuple (row, col) indicating the position of the observer
- max_dist: maximum distance in pixels from the center of the array. Negative values are ignored (same effect as infinite distance).
Returns a tuple (rows, cols) of the cell rows and columns following the convention of numpy.where(), where the first cell is immediately right of the viewpoint and the others are enumerated clockwise.
natcap.invest.scenic_quality.scenic_quality_core.hierarchy_is_consistent(pixel, hierarchy, skip_nodes)¶ Makes simple tests to ensure that the hierarchy is consistent.
natcap.invest.scenic_quality.scenic_quality_core.linked_cell_factory¶ alias of linked_cell
natcap.invest.scenic_quality.scenic_quality_core.list_extreme_cell_angles(array_shape, viewpoint_coords, max_dist)¶ List the minimum and maximum angles spanned by each cell of a rectangular raster if scanned by a sweep line centered on viewpoint_coords.
Inputs:
- array_shape: a shape tuple (rows, cols) as is created from calling numpy.ndarray.shape()
- viewpoint_coords: a 2-tuple of coordinates similar to array_shape where the sweep line originates
- max_dist: maximum viewing distance
Returns a tuple (min, center, max, I, J) with min, center, and max Nx1 numpy arrays of each raster cell’s minimum, center, and maximum angles, and the coords as two Nx1 numpy arrays of the row and column of each point.
natcap.invest.scenic_quality.scenic_quality_core.print_hierarchy(hierarchy)¶

natcap.invest.scenic_quality.scenic_quality_core.print_node(node)¶ Print a node by displaying its ‘distance’ and ‘next’ fields.

natcap.invest.scenic_quality.scenic_quality_core.print_skip_list(sweep_line, skip_nodes)¶

natcap.invest.scenic_quality.scenic_quality_core.print_sweep_line(sweep_line)¶
natcap.invest.scenic_quality.scenic_quality_core.remove_active_pixel(sweep_line, distance)¶ Remove a pixel based on distance. Do nothing if it can’t be found.
-
natcap.invest.scenic_quality.scenic_quality_core.
skip_list_is_consistent
(linked_list, skip_nodes)¶ Function that checks for skip list inconsistencies.
- Inputs:
  - sweep_line: the container proper, a dictionary implementing a linked list that contains the items ordered by increasing distance
  - skip_nodes: a python dict, the hierarchical structure sitting on top of the sweep_line that allows O(log n) operations

Returns a tuple (is_consistent, message) where is_consistent is True if the list is consistent, False otherwise. If is_consistent is False, the string 'message' explains the cause.
-
natcap.invest.scenic_quality.scenic_quality_core.
sweep_through_angles
(angles, add_events, center_events, remove_events, I, J, distances, visibility, visibility_map)¶ Update the active pixels as the algorithm consumes the sweep angles
-
natcap.invest.scenic_quality.scenic_quality_core.
update_visible_pixels
(active_pixels, I, J, visibility_map)¶ Update the array of visible pixels from the active pixel’s visibility
- Inputs:
  - active_pixels: a linked list of dictionaries containing the following fields:
    - distance: distance between the pixel center and the viewpoint
    - visibility: an elevation/distance ratio used by the algorithm to determine which pixels are obstructed
    - index: pixel index in the event stream, used to find the pixel's coordinates 'i' and 'j'
    - next: points to the next pixel, or is None at the end
    The linked list is implemented with a dictionary where the pixel's distance is the key. The closest pixel is also referenced by the key 'closest'.
  - I: the array of pixel rows, indexable by pixel['index']
  - J: the array of pixel columns, indexable by pixel['index']
  - visibility_map: a python array the same size as the DEM, with 1s for visible pixels and 0s otherwise. The viewpoint is always visible.

Returns nothing.
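The distance-keyed linked list described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the model's actual implementation; the field names follow the docstring, and the values are made up:

```python
# Sketch of the 'active_pixels' structure: a dict keyed by distance,
# with 'closest' referencing the head of a singly linked list.
active_pixels = {}

def add_pixel(distance, visibility, index):
    """Insert a pixel dict, keeping the list ordered by distance."""
    node = {'distance': distance, 'visibility': visibility,
            'index': index, 'next': None}
    closest = active_pixels.get('closest')
    if closest is None or distance < closest['distance']:
        node['next'] = closest
        active_pixels['closest'] = node
    else:
        # walk the list to find the insertion point
        current = closest
        while (current['next'] is not None and
               current['next']['distance'] < distance):
            current = current['next']
        node['next'] = current['next']
        current['next'] = node
    active_pixels[distance] = node

add_pixel(5.0, 0.2, 10)
add_pixel(2.0, 0.7, 4)

# traverse from 'closest' in order of increasing distance
order = []
node = active_pixels['closest']
while node is not None:
    order.append(node['distance'])
    node = node['next']
# order == [2.0, 5.0]
```

Traversal always starts from the 'closest' key, so the sweep can visit active pixels in order of increasing distance from the viewpoint.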
-
natcap.invest.scenic_quality.scenic_quality_core.
viewshed
(input_array, cell_size, array_shape, nodata, output_uri, coordinates, obs_elev=1.75, tgt_elev=0.0, max_dist=-1.0, refraction_coeff=None, alg_version='cython')¶ URI wrapper for the viewshed computation function
- Inputs:
  - input_array: numpy array of the elevation raster map
  - cell_size: raster cell size in meters
  - array_shape: input_array's shape as returned from ndarray.shape
  - nodata: input_array's raster nodata value
  - output_uri: output raster uri, compatible with input_array's size
  - coordinates: tuple (east, north) of coordinates of the viewing position
  - obs_elev: observer elevation above the raster map
  - tgt_elev: offset for target elevation above the ground, applied to every point on the raster
  - max_dist: maximum visibility radius, infinite (-1) by default
  - refraction_coeff: refraction coefficient (0.0-1.0), not used yet
  - alg_version: name of the algorithm to be used. Either 'cython' (default) or 'python'

Returns nothing.
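To illustrate the visibility test that the viewshed descriptions above rely on (the elevation/distance ratio), here is a toy one-dimensional line-of-sight sketch. This is a deliberate simplification, not the model's cython or python implementation: a cell is visible when its slope from the observer's eye exceeds every slope encountered before it.

```python
# Toy 1-D line-of-sight: elevations[0] holds the viewpoint cell.
def line_of_sight(elevations, obs_elev=1.75):
    """Return a list of 1/0 visibility flags, one per cell."""
    eye = elevations[0] + obs_elev
    visible = [1]  # the viewpoint is always visible
    max_slope = float('-inf')
    for dist, elev in enumerate(elevations[1:], start=1):
        slope = (elev - eye) / dist
        visible.append(1 if slope > max_slope else 0)
        max_slope = max(max_slope, slope)
    return visible

flags = line_of_sight([10.0, 12.0, 11.0, 20.0])
# flags == [1, 1, 0, 1]: the cell at elevation 11.0 is hidden
# behind the nearer cell at elevation 12.0
```

The 2-D sweep-line algorithm applies the same ratio test along each sweep angle while maintaining the active-pixel list.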
-
natcap.invest.scenic_quality.viewshed_grass.
execute
(args)¶
-
class
natcap.invest.scenic_quality.viewshed_grass.
grasswrapper
(dbBase='', location='/home/mlacayo/workspace/newLocation', mapset='PERMANENT')¶
-
natcap.invest.scenic_quality.viewshed_grass.
project_cleanup
()¶
-
natcap.invest.scenic_quality.viewshed_grass.
project_setup
(dataset_uri)¶
-
natcap.invest.scenic_quality.viewshed_grass.
viewshed
(dataset_uri, feature_set_uri, dataset_out_uri)¶
-
natcap.invest.scenic_quality.viewshed_sextante.
viewshed
(input_uri, output_uri, coordinates, obs_elev=1.75, tgt_elev=0.0, max_dist=-1, refraction_coeff=0.14286, memory=500, stream_dir=None, consider_curvature=False, consider_refraction=False, boolean_mode=False, elevation_mode=False, verbose=False, quiet=False)¶
InVEST Seasonal Water Yield Model.
-
natcap.invest.seasonal_water_yield.seasonal_water_yield.
execute
(args)¶ Seasonal Water Yield.
This function invokes the InVEST seasonal water yield model described in “Spatial attribution of baseflow generation at the parcel level for ecosystem-service valuation”, Guswa et al. (under review in “Water Resources Research”)
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['threshold_flow_accumulation'] (number) – used when classifying stream pixels from the DEM by thresholding the number of upstream cells that must flow into a cell before it’s considered part of a stream.
- args['et0_dir'] (string) – required if args[‘user_defined_local_recharge’] is False. Path to a directory that contains rasters of monthly reference evapotranspiration; units in mm.
- args['precip_dir'] (string) – required if args[‘user_defined_local_recharge’] is False. A path to a directory that contains rasters of monthly precipitation; units in mm.
- args['dem_raster_path'] (string) – a path to a digital elevation raster
- args['lulc_raster_path'] (string) – a path to a land cover raster used to classify biophysical properties of pixels.
- args['soil_group_path'] (string) – required if args['user_defined_local_recharge'] is False. A path to a raster indicating SCS soil groups, where integer values are mapped to soil types: 1: A, 2: B, 3: C, 4: D
- args['aoi_path'] (string) – path to a vector that indicates the area over which the model should be run, as well as the area in which to aggregate over when calculating the output Qb.
- args['biophysical_table_path'] (string) – path to a CSV table that maps landcover codes paired with soil group types to curve numbers as well as Kc values. Headers must include ‘lucode’, ‘CN_A’, ‘CN_B’, ‘CN_C’, ‘CN_D’, ‘Kc_1’, ‘Kc_2’, ‘Kc_3’, ‘Kc_4’, ‘Kc_5’, ‘Kc_6’, ‘Kc_7’, ‘Kc_8’, ‘Kc_9’, ‘Kc_10’, ‘Kc_11’, ‘Kc_12’.
- args['rain_events_table_path'] (string) – Not required if args[‘user_defined_local_recharge’] is True or args[‘user_defined_climate_zones’] is True. Path to a CSV table that has headers ‘month’ (1-12) and ‘events’ (int >= 0) that indicates the number of rain events per month
- args['alpha_m'] (float or string) – required if args['monthly_alpha'] is False. The proportion of upslope annual available local recharge that is available in month m.
- args['beta_i'] (float or string) – the fraction of the upgradient subsidy that is available for downgradient evapotranspiration.
- args['gamma'] (float or string) – the fraction of pixel local recharge that is available to downgradient pixels.
- args['user_defined_local_recharge'] (boolean) – if True, indicates user will provide pre-defined local recharge raster layer
- args['l_path'] (string) – required if args['user_defined_local_recharge'] is True. If provided, pixels indicate the amount of local recharge; units in mm.
- args['user_defined_climate_zones'] (boolean) – if True, user provides a climate zone rain events table and a climate zone raster map in lieu of a global rain events table.
- args['climate_zone_table_path'] (string) – required if args[‘user_defined_climate_zones’] is True. Contains monthly precipitation events per climate zone. Fields must be: “cz_id”, “jan”, “feb”, “mar”, “apr”, “may”, “jun”, “jul”, “aug”, “sep”, “oct”, “nov”, “dec”.
- args['climate_zone_raster_path'] (string) – required if args[‘user_defined_climate_zones’] is True, pixel values correspond to the “cz_id” values defined in args[‘climate_zone_table_path’]
- args['monthly_alpha'] (boolean) – if True, use the monthly alpha values provided in args['monthly_alpha_path'] instead of the single args['alpha_m'] value.
- args['monthly_alpha_path'] (string) – required if args[‘monthly_alpha’] is True.
Returns: None
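As an illustration of how these parameters fit together, here is a hypothetical args dictionary. Every path below is a placeholder and the numeric values are arbitrary, so treat this as a sketch rather than a working configuration:

```python
# Hypothetical example arguments for the Seasonal Water Yield model.
# All paths are placeholders; values are illustrative only.
args = {
    'workspace_dir': 'swy_workspace',
    'results_suffix': 'run1',
    'threshold_flow_accumulation': 1000,
    'et0_dir': 'data/et0_rasters',
    'precip_dir': 'data/precip_rasters',
    'dem_raster_path': 'data/dem.tif',
    'lulc_raster_path': 'data/lulc.tif',
    'soil_group_path': 'data/soil_groups.tif',
    'aoi_path': 'data/watershed.shp',
    'biophysical_table_path': 'data/biophysical.csv',
    'rain_events_table_path': 'data/rain_events.csv',
    'alpha_m': 1.0 / 12,
    'beta_i': 1.0,
    'gamma': 1.0,
    'user_defined_local_recharge': False,
    'user_defined_climate_zones': False,
    'monthly_alpha': False,
}

# With real data in place, the model would be invoked as:
# natcap.invest.seasonal_water_yield.seasonal_water_yield.execute(args)
```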
A module for InVEST test-related data storage.
-
natcap.invest.testing.data_storage.
archive_uri
(name=None)¶
-
natcap.invest.testing.data_storage.
collect_parameters
(parameters, archive_uri)¶ Collect an InVEST model’s arguments into a dictionary and archive all the input data.
parameters - a dictionary of arguments
archive_uri - a URI to the target archive
Returns nothing.
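The collect-and-archive idea can be sketched with the standard library. This is a hedged simplification, not the package's actual implementation: file-type arguments are copied into the archive and the parameter dict is recorded alongside them.

```python
import json
import os
import tarfile
import tempfile

def collect_parameters(parameters, archive_uri):
    """Bundle file arguments and a JSON parameter record into a .tar.gz."""
    with tarfile.open(archive_uri, 'w:gz') as archive:
        logged = {}
        for key, value in parameters.items():
            if isinstance(value, str) and os.path.isfile(value):
                arcname = os.path.basename(value)
                archive.add(value, arcname=arcname)
                logged[key] = arcname  # path rewritten relative to archive
            else:
                logged[key] = value
        # record the (rewritten) parameters inside the archive as JSON
        tmp = tempfile.NamedTemporaryFile('w', suffix='.json', delete=False)
        json.dump(logged, tmp)
        tmp.close()
        archive.add(tmp.name, arcname='parameters.json')
        os.remove(tmp.name)
```

A real implementation also has to handle multi-part GIS files and directory arguments, which this sketch ignores.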
-
natcap.invest.testing.data_storage.
extract_archive
(workspace_dir, archive_uri)¶ Extract a .tar.gzipped file to the given workspace.
workspace_dir - the folder to which the archive should be extracted archive_uri - the uri to the target archive
Returns nothing.
-
natcap.invest.testing.data_storage.
extract_parameters_archive
(workspace_dir, archive_uri, input_folder=None)¶ Extract the target archive to the target workspace folder.
workspace_dir - a uri to a folder on disk. Must be an empty folder.
archive_uri - a uri to an archive to be unzipped on disk. The archive must be in .tar.gz format.
input_folder=None - either a URI to a folder on disk or None. If None, a temporary folder will be created and then erased using the atexit register.

Returns a dictionary of the model's parameters for this run.
-
natcap.invest.testing.data_storage.
format_dictionary
(input_dict, types_lookup={})¶ Recurse through the input dictionary and return a formatted dictionary.
As each element is encountered, the correct function to use is looked up in the types_lookup input. If a type is not found, we assume that the element should be returned verbatim.
input_dict - a dictionary to process
types_lookup - a dictionary mapping types to functions. These functions must take a single parameter of the type that is the key, and must return a formatted version of the input parameter.

Returns a formatted dictionary.
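A minimal sketch of this type-dispatch recursion, illustrative only (the real function may handle more container types):

```python
# Recurse through a dict, applying a per-type formatter where one is
# registered; unknown types pass through verbatim.
def format_dictionary(input_dict, types_lookup=None):
    types_lookup = types_lookup or {}
    formatted = {}
    for key, value in input_dict.items():
        if isinstance(value, dict):
            formatted[key] = format_dictionary(value, types_lookup)
        elif type(value) in types_lookup:
            formatted[key] = types_lookup[type(value)](value)
        else:
            formatted[key] = value  # no formatter known: verbatim
    return formatted

result = format_dictionary(
    {'a': 1.5, 'b': {'c': 2.25}, 'd': 'text'},
    {float: lambda x: round(x)})
# result == {'a': 2, 'b': {'c': 2}, 'd': 'text'}
```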
-
natcap.invest.testing.data_storage.
is_multi_file
(filename)¶ Check if the filename given is a file with multiple parts to it, such as an ESRI shapefile or an ArcInfo Binary Grid.
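The idea behind such a check can be sketched as an extension test. The extension set below is an illustrative assumption, not the library's actual list: formats like ESRI shapefiles and ArcInfo grids are really bundles of sibling files that must be archived together.

```python
import os

# Hypothetical set of extensions that signal a multi-part GIS file.
MULTI_FILE_EXTENSIONS = {'.shp', '.shx', '.dbf', '.prj', '.adf'}

def looks_multi_file(filename):
    """Return True if the extension suggests a multi-part GIS file."""
    return os.path.splitext(filename)[1].lower() in MULTI_FILE_EXTENSIONS
```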
-
natcap.invest.testing.data_storage.
make_random_dir
(workspace, seed_string, prefix, make_dir=True)¶
-
natcap.invest.testing.data_storage.
make_raster_dir
(workspace, seed_string, make_dir=True)¶
-
natcap.invest.testing.data_storage.
make_vector_dir
(workspace, seed_string, make_dir=True)¶
-
natcap.invest.testing.test_writing.
add_test_to_class
(file_uri, test_class_name, test_func_name, in_archive_uri, out_archive_uri, module)¶ Add a test function to an existing test file. The test added is a regression test using the natcap.invest.testing.regression archive decorator.
file_uri - URI to the test file to modify.
test_class_name - string. The test class to modify. If the test class already exists, the test function will be added to it. If not, the new class will be created.
test_func_name - string. The name of the test function to write. If a test function by this name already exists in the target class, the function will not be written.
in_archive_uri - URI to the input archive.
out_archive_uri - URI to the output archive.
module - string module whose execute function will be run in the test (e.g. 'natcap.invest.pollination.pollination')

WARNING: The input test file is overwritten with the new test file.
Returns nothing.
-
natcap.invest.testing.test_writing.
class_has_test
(test_file_uri, test_class_name, test_func_name)¶ Check that a python test file contains the given class and function.
test_file_uri - a URI to a python file containing test classes.
test_class_name - a string, the class name we're looking for.
test_func_name - a string, the test function name we're looking for. This function should be located within the target test class.

Returns True if the function is found within the class, False otherwise.
-
natcap.invest.testing.test_writing.
file_has_class
(test_file_uri, test_class_name)¶ Check that a python test file contains a class.
test_file_uri - a URI to a python file containing test classes. test_class_name - a string, the class name we’re looking for.
Returns True if the class is found, False otherwise.
The natcap.invest.testing package defines core testing routines and functionality.
While the python standard library’s unittest
package provides valuable
resources for testing, GIS applications such as the various InVEST models
output GIS data that require more in-depth testing to verify equality. For
cases such as this, natcap.invest.testing
provides a GISTest
class that
provides assertions for common data formats.
natcap.invest.testing¶
The easiest way to take advantage of the functionality in natcap.invest.testing is to use the GISTest class whenever you write a TestCase class for your model. Doing so will grant you access to the GIS assertions provided by GISTest.
This example is relatively simplistic, since there will often be many more assertions you may need to make to be able to test your model effectively:
import natcap.invest.testing
import natcap.invest.example_model

class ExampleTest(natcap.invest.testing.GISTest):
    def test_some_model(self):
        example_args = {
            'workspace_dir': './workspace',
            'arg_1': 'foo',
            'arg_2': 'bar',
        }

        natcap.invest.example_model.execute(example_args)

        # example GISTest assertion
        self.assertRastersEqual('workspace/raster_1.tif',
                                'regression_data/raster_1.tif')
-
class
natcap.invest.testing.
GISTest
(methodName='runTest')¶ Bases:
unittest.case.TestCase
A test class with an emphasis on testing GIS outputs.
The GISTest class provides many functions for asserting the equality of various GIS files. This is particularly useful for GIS tool outputs, when we wish to assert the accuracy of very detailed outputs. GISTest is a subclass of unittest.TestCase, so all members that exist in unittest.TestCase also exist here. Read the python documentation on unittest for more information about these test fixtures and their usage. The important thing to note is that GISTest merely provides more assertions for the specialized testing that GIS outputs require.

Example usage of GISTest:

import natcap.invest.testing

class ModelTest(natcap.invest.testing.GISTest):
    def test_some_function(self):
        # perform your tests here
        pass
Note that to take advantage of these additional assertions, you need only create a subclass of GISTest in your test file to gain access to the GISTest assertions.
-
assertArchives
(archive_1_uri, archive_2_uri)¶ Compare the contents of two archived workspaces against each other.
Takes two archived workspaces, each generated from build_regression_archives(), unzips them and compares the resulting workspaces against each other.

Parameters: - archive_1_uri (string) – a URI to a .tar.gz workspace archive
- archive_2_uri (string) – a URI to a .tar.gz workspace archive
Raises: AssertionError – Raised when the two workspaces are found to be different.
Returns: Nothing.
-
assertCSVEqual
(aUri, bUri)¶ Tests if csv files a and b are ‘almost equal’ to each other on a per cell basis. Numeric cells are asserted to be equal out to 7 decimal places. Other cell types are asserted to be equal.
Parameters: - aUri (string) – a URI to a csv file
- bUri (string) – a URI to a csv file
Raises: AssertionError – Raised when the two CSV files are found to be different.
Returns: Nothing.
-
assertFiles
(file_1_uri, file_2_uri)¶ Assert two files are equal.
If the extension of the provided file is recognized, the relevant filetype-specific function is called and a more detailed check of the file can be done. If the extension is not recognized, the MD5sums of the two files are compared instead.
Known extensions: .json, .tif, .shp, .csv, .txt, .html
Parameters: - file_1_uri (string) – a string URI to a file on disk.
- file_2_uri (string) – a string URI to a file on disk.
Raises: AssertionError – Raised when one of the input files does not exist, when the extensions of the input files differ, or if the two files are found to differ.
Returns: Nothing.
-
assertJSON
(json_1_uri, json_2_uri)¶ Assert two JSON files against each other.
The two JSON files provided will be opened, read, and their contents will be asserted to be equal. If the two are found to be different, the diff of the two files will be printed.
Parameters: - json_1_uri (string) – a uri to a JSON file.
- json_2_uri (string) – a uri to a JSON file.
Raises: AssertionError – Raised when the two JSON objects differ.
Returns: Nothing.
-
assertMD5
(uri, regression_hash)¶ Assert the MD5sum of a file against a regression MD5sum.
This method is a convenience method that uses natcap.invest.testing.get_hash() to determine the MD5sum of the file located at uri. It is functionally equivalent to calling:

self.assertEqual(get_hash(uri), '<some md5sum>')

Regression MD5sums can be calculated for you by using natcap.invest.testing.get_hash() or a system-level md5sum program.
or a system-level md5sum program.Parameters: - uri (string) – a string URI to the file to be tested.
- regression_hash (string) – the expected MD5 hash of the file, as a string.
Raises: AssertionError – Raised when the MD5sum of the file at uri differs from the provided regression md5sum hash.
Returns: Nothing.
-
assertMatrixes
(matrix_a, matrix_b, decimal=6)¶ Tests if the input numpy matrices are equal up to the given number of decimal places.
This is a convenience function that wraps up required functionality in numpy.testing.
.Parameters: - matrix_a (numpy.ndarray) – a numpy matrix
- matrix_b (numpy.ndarray) – a numpy matrix
- decimal (int) – an integer of the desired precision.
Raises: AssertionError – Raised when the two matrices are determined to be different.
Returns: Nothing.
-
assertRastersEqual
(a_uri, b_uri)¶ Tests if datasets a and b are ‘almost equal’ to each other on a per pixel basis
This assertion method asserts the equality of these raster characteristics:
- Raster height and width
- The number of layers in the raster
- Each pixel value, out to a precision of 7 decimal places if the pixel value is a float.
Parameters: - a_uri (string) – a URI to a GDAL dataset
- b_uri (string) – a URI to a GDAL dataset
Returns: Nothing.
Raises:
IOError – Raised when one of the input files is not found on disk.
AssertionError – Raised when the two rasters are found to be not equal to each other.
-
assertTextEqual
(text_1_uri, text_2_uri)¶ Assert that two text files are equal
This comparison is done line-by-line.
Parameters: - text_1_uri (string) – a python string uri to a text file. Considered the file to be tested.
- text_2_uri (string) – a python string uri to a text file. Considered the regression file.
Raises: AssertionError – Raised when a line differs in the two files.
Returns: Nothing.
-
assertVectorsEqual
(aUri, bUri)¶ Tests if vector datasources are equal to each other.
This assertion method asserts the equality of these vector characteristics:
- Number of layers in the vector
- Number of features in each layer
- Feature geometry type
- Number of fields in each feature
- Name of each field
- Field values for each feature
Parameters: - aUri (string) – a URI to an OGR vector
- bUri (string) – a URI to an OGR vector
Raises:
IOError – Raised if one of the input files is not found on disk.
AssertionError – Raised if the vectors are not found to be equal to one another.
Returns: Nothing.
-
assertWorkspace
(archive_1_folder, archive_2_folder, glob_exclude='')¶ Check the contents of two folders against each other.
This method iterates through the contents of each workspace folder and verifies that all files exist in both folders. If this passes, then each file is compared against its counterpart using GISTest.assertFiles().

If one of these workspaces includes files that are known to be different between model runs (such as logs, or other files that include timestamps), you may wish to specify a glob pattern matching those filenames and pass it to glob_exclude.
Parameters: - archive_1_folder (string) – a uri to a folder on disk
- archive_2_folder (string) – a uri to a folder on disk
- glob_exclude (string) – a string in glob format representing files to ignore
Raises: AssertionError – Raised when the two folders are found to have different contents.
Returns: Nothing.
-
-
natcap.invest.testing.
build_regression_archives
(file_uri, input_archive_uri, output_archive_uri)¶ Build regression archives for a target model run.
With a properly formatted JSON configuration file at file_uri, all input files and parameters are collected and compressed into a single gzip archive. Then the target model is executed and the output workspace is zipped into another gzip archive. These can then be used for regression testing, such as with the natcap.invest.testing.regression decorator.
{
    "model": "natcap.invest.pollination.pollination",
    "arguments": {
        # the full set of model arguments here
    }
}
Example function usage:
import natcap.invest.testing

file_uri = "/path/to/config.json"
input_archive_uri = "/path/to/archived_inputs.tar.gz"
output_archive_uri = "/path/to/archived_outputs.tar.gz"

natcap.invest.testing.build_regression_archives(
    file_uri, input_archive_uri, output_archive_uri)
Parameters: - file_uri (string) – a URI to a json file on disk containing the configuration options described above.
- input_archive_uri (string) – the URI to where the gzip archive of inputs should be saved once it is created.
- output_archive_uri (string) – the URI to where the gzip archive of outputs should be saved once it is created.
Returns: Nothing.
-
natcap.invest.testing.
get_hash
(uri)¶ Get the MD5 hash for a single file. The file is read in a memory-efficient fashion.
Parameters: uri (string) – a string uri to the file to be tested. Returns: An md5sum of the input file
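Reading the file in fixed-size chunks is what makes a hash memory-efficient; this sketch shows the general approach (the chunk size is an arbitrary choice, not necessarily what the library uses):

```python
import hashlib

def get_hash(uri):
    """Compute the MD5 of a file without loading it into memory whole."""
    md5 = hashlib.md5()
    with open(uri, 'rb') as f:
        # read 64 KiB at a time until EOF
        for chunk in iter(lambda: f.read(65536), b''):
            md5.update(chunk)
    return md5.hexdigest()
```

The result can then be compared against a stored regression hash, as assertMD5() does.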
-
natcap.invest.testing.
regression
(input_archive, workspace_archive)¶ Decorator to unzip input data, run the regression test and compare the outputs against the outputs on file.
Example usage with a test case:
import natcap.invest.testing

@natcap.invest.testing.regression('/data/input.tar.gz',
                                  '/data/output.tar.gz')
def test_workspaces(self):
    model.execute(self.args)
Parameters: - input_archive (string) – The path to a .tar.gz archive with the input data.
- workspace_archive (string) – The path to a .tar.gz archive with the workspace to assert.
Returns: Composed function with regression testing.
-
natcap.invest.testing.
save_workspace
(new_workspace)¶ Decorator to save a workspace to a new location.
If new_workspace already exists on disk, it will be recursively removed.
Example usage with a test case:
import natcap.invest.testing

@natcap.invest.testing.save_workspace('/path/to/workspace')
def test_workspaces(self):
    model.execute(self.args)
Note
- The target workspace folder must be saved to self.workspace_dir. This decorator is only designed to work with test functions from subclasses of unittest.TestCase, such as natcap.invest.testing.GISTest.
- If new_workspace exists, it will be removed. So be careful where you save things.

Parameters: new_workspace (string) – a URI to where the workspace should be copied.
Returns: A composed test case function which will execute and then save your workspace to the specified location.
InVEST Timber model.
-
natcap.invest.timber.timber.
execute
(args)¶ Managed Timber Production.
Invoke the timber model given uri inputs specified by the user guide.
Parameters: - args['workspace_dir'] (string) – The file location where the outputs will be written (Required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['timber_shape_uri'] (string) – The shapefile describing timber parcels with fields as described in the user guide (Required)
- args['attr_table_uri'] (string) – The CSV attribute table location with fields that describe polygons in timber_shape_uri (Required)
- args['market_disc_rate'] (float) – The market discount rate
Returns: nothing
InVEST Wave Energy Model Core Code
-
exception
natcap.invest.wave_energy.wave_energy.
IntersectionError
¶ Bases:
exceptions.Exception
A custom exception for when the AOI does not intersect any wave data points.
-
natcap.invest.wave_energy.wave_energy.
build_point_shapefile
(driver_name, layer_name, path, data, prj, coord_trans)¶ This function creates and saves a point geometry shapefile to disk. It creates a single 'Id' field and as many features as specified in 'data'.
driver_name - A string specifying a valid ogr driver type
layer_name - A string representing the name of the layer
path - A string of the output path of the file
data - A dictionary whose keys are the Ids for the field and whose values are two-element arrays of latitude and longitude
prj - A spatial reference acting as the projection/datum
coord_trans - A coordinate transformation

returns - Nothing
-
natcap.invest.wave_energy.wave_energy.
calculate_distance
(xy_1, xy_2)¶ For each point in xy_1, this function calculates the distance to the points in xy_2 and stores the shortest distance found in a list, min_dist. The function also stores the index of whichever point in xy_2 was closest, as an id in a list that corresponds to min_dist.
xy_1 - A numpy array of points in the form [x,y]
xy_2 - A numpy array of points in the form [x,y]

returns - A numpy array of shortest distances and a numpy array of ids corresponding to the array of shortest distances
-
natcap.invest.wave_energy.wave_energy.
calculate_percentiles_from_raster
(raster_uri, percentiles)¶ Does a memory-efficient sort to determine the percentiles of a raster. The percentile algorithm currently used is the nearest-rank method.
raster_uri - a uri to a gdal raster on disk
percentiles - a list of desired percentiles to look up, ex: [25,50,75,90]

returns - a list of values corresponding to the percentiles from the percentiles list
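The nearest-rank method mentioned above picks the value at ordinal rank ceil(p/100 · n) in the sorted data. A small in-memory sketch (the plain sort here stands in for the model's memory-efficient one):

```python
import math

def nearest_rank_percentiles(values, percentiles):
    """Return the nearest-rank percentile values of an iterable."""
    ordered = sorted(values)
    n = len(ordered)
    # ordinal rank is 1-based and clamped to at least 1
    return [ordered[max(1, math.ceil(p / 100.0 * n)) - 1]
            for p in percentiles]

vals = list(range(1, 101))  # 1..100
# nearest_rank_percentiles(vals, [25, 50, 75, 90]) == [25, 50, 75, 90]
```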
-
natcap.invest.wave_energy.wave_energy.
captured_wave_energy_to_shape
(energy_cap, wave_shape_uri)¶ Adds each captured wave energy value from the dictionary energy_cap to a field of the shapefile wave_shape. The values are set according to the (I,J) values, which are the keys of the dictionary and are used as the unique identifier of each shape.
energy_cap - A dictionary with keys (I,J), representing the wave energy capacity values.
wave_shape_uri - A uri to a point geometry shapefile to write the new field/values to

returns - Nothing
-
natcap.invest.wave_energy.wave_energy.
clip_datasource_layer
(shape_to_clip_path, binding_shape_path, output_path)¶ Clip Shapefile Layer by second Shapefile Layer.
Uses ogr.Layer.Clip() to clip a Shapefile, where the output Layer inherits the projection and fields from the original Shapefile.
Parameters: - shape_to_clip_path (string) – a path to a Shapefile on disk. This is the Layer to clip. Must have same spatial reference as ‘binding_shape_path’.
- binding_shape_path (string) – a path to a Shapefile on disk. This is the Layer to clip to. Must have same spatial reference as ‘shape_to_clip_path’
- output_path (string) – a path on disk to write the clipped Shapefile to. Should end with a ‘.shp’ extension.
Returns: Nothing
-
natcap.invest.wave_energy.wave_energy.
compute_wave_energy_capacity
(wave_data, interp_z, machine_param)¶ Computes the wave energy capacity for each point and generates a dictionary whose keys are the points (I,J) and whose values are the wave energy capacity.
wave_data - A dictionary containing wave watch data with the following structure:
    {'periods': [1,2,3,4,...],
     'heights': [.5,1.0,1.5,...],
     'bin_matrix': {(i0,j0): [[2,5,3,2,...], [6,3,4,1,...],...],
                    (i1,j1): [[2,5,3,2,...], [6,3,4,1,...],...],
                    ...
                    (in,jn): [[2,5,3,2,...], [6,3,4,1,...],...]}
    }
interp_z - A 2D array of the interpolated values for the machine performance table
machine_param - A dictionary containing the restrictions for the machines (CapMax, TpMax, HsMax)

returns - A dictionary representing the wave energy capacity at each wave point
-
natcap.invest.wave_energy.wave_energy.
count_pixels_groups
(raster_uri, group_values)¶ Does a pixel count for each value in 'group_values' over the raster provided by 'raster_uri'.
raster_uri - a uri path to a gdal raster on disk
group_values - a list of unique numbers for which to get a pixel count

returns - A list of integers, where the integer at each index corresponds to the pixel count of the value at the same index of 'group_values'
-
natcap.invest.wave_energy.wave_energy.
create_attribute_csv_table
(attribute_table_uri, fields, data)¶ Create a new csv table from a dictionary.
attribute_table_uri - a URI path for the new table to be written to disk
fields - a python list of the column names. The order of the fields in the list is the order in which they are written. ex: ['id', 'precip', 'total']
data - a python dictionary representing the table. The dictionary should be constructed with unique numerical keys that each point to a dictionary representing a row in the table:
    data = {0: {'id': 1, 'precip': 43, 'total': 65},
            1: {'id': 2, 'precip': 65, 'total': 94}}

returns - nothing
-
natcap.invest.wave_energy.wave_energy.
create_percentile_ranges
(percentiles, units_short, units_long, start_value)¶ Constructs the percentile ranges as strings, with the first range starting at 1 and the last range being greater than the last percentile mark. Each string range is stored in a list that gets returned.
percentiles - A list of the percentile marks in ascending order
units_short - A string that represents the shorthand for the units of the raster values (ex: kW/m)
units_long - A string that represents the description of the units of the raster values (ex: wave power per unit width of wave crest length (kW/m))
start_value - A string representing the first value that goes into the first percentile range (start_value - percentile_one)

returns - A list of strings representing the ranges of the percentiles
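The range-string construction might be sketched like this; the exact formatting is an assumption, not the model's verbatim output:

```python
def create_percentile_ranges(percentiles, units_short, start_value):
    """Build human-readable range labels from percentile marks."""
    ranges = []
    lower = start_value
    for p in percentiles:
        ranges.append('%s - %s %s' % (lower, p, units_short))
        lower = p
    # the last range is open-ended: greater than the final mark
    ranges.append('Greater than %s %s' % (percentiles[-1], units_short))
    return ranges

ranges = create_percentile_ranges([25, 50, 75, 90], 'kW/m', 1)
# ranges[0] == '1 - 25 kW/m'; ranges[-1] == 'Greater than 90 kW/m'
```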
-
natcap.invest.wave_energy.wave_energy.
create_percentile_rasters
(raster_path, output_path, units_short, units_long, start_value, percentile_list, aoi_shape_path)¶ Creates a percentile (quartile) raster based on the raster_dataset. An attribute table is also constructed for the raster_dataset that displays the ranges provided by taking the quartile of values. The following inputs are required:
raster_path - A uri to a gdal raster dataset with data of type integer
output_path - A string for the destination of the new raster
units_short - A string that represents the shorthand for the units of the raster values (ex: kW/m)
units_long - A string that represents the description of the units of the raster values (ex: wave power per unit width of wave crest length (kW/m))
start_value - A string representing the first value that goes into the first percentile range (start_value - percentile_one)
percentile_list - a python list of the percentile ranges, ex: [25, 50, 75, 90]
aoi_shape_path - a uri to an OGR polygon shapefile to clip the rasters to

return - Nothing
-
natcap.invest.wave_energy.wave_energy.
execute
(args)¶ Wave Energy.
Executes both the biophysical and valuation parts of the wave energy model (WEM). Files will be written on disk to the intermediate and output directories. The outputs computed for biophysical and valuation include: wave energy capacity raster, wave power raster, net present value raster, percentile rasters for the previous three, and a point shapefile of the wave points with attributes.
Parameters: - workspace_dir (string) – Where the intermediate and output folder/files will be saved. (required)
- wave_base_data_uri (string) – Directory location of wave base data including WW3 data and analysis area shapefile. (required)
- analysis_area_uri (string) – A string identifying the analysis area of interest. Used to determine wave data shapefile, wave data text file, and analysis area boundary shape. (required)
- aoi_uri (string) – A polygon shapefile outlining a more detailed area within the analysis area. This shapefile should be projected with linear units being in meters. (required to run Valuation model)
- machine_perf_uri (string) – The path of a CSV file that holds the machine performance table. (required)
- machine_param_uri (string) – The path of a CSV file that holds the machine parameter table. (required)
- dem_uri (string) – The path of the Global Digital Elevation Model (DEM). (required)
- suffix (string) – A python string of characters to append to each output filename (optional)
- valuation_container (boolean) – Indicates whether the model includes valuation
- land_gridPts_uri (string) – A CSV file path containing the Landing and Power Grid Connection Points table. (required for Valuation)
- machine_econ_uri (string) – A CSV file path for the machine economic parameters table. (required for Valuation)
- number_of_machines (int) – An integer specifying the number of machines for a wave farm site. (required for Valuation)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wave_base_data_uri': 'path/to/base_data_dir',
    'analysis_area_uri': 'West Coast of North America and Hawaii',
    'aoi_uri': 'path/to/shapefile',
    'machine_perf_uri': 'path/to/csv',
    'machine_param_uri': 'path/to/csv',
    'dem_uri': 'path/to/raster',
    'suffix': '_results',
    'valuation_container': True,
    'land_gridPts_uri': 'path/to/csv',
    'machine_econ_uri': 'path/to/csv',
    'number_of_machines': 28,
}
-
natcap.invest.wave_energy.wave_energy.
get_coordinate_transformation
(source_sr, target_sr)¶ This function takes a source and target spatial reference and creates a coordinate transformation from source to target, and one from target to source.
source_sr - A spatial reference
target_sr - A spatial reference
returns - A tuple of coord_trans (source to target) and coord_trans_opposite (target to source)
-
natcap.invest.wave_energy.wave_energy.
get_points_geometries
(shape_uri)¶ This function takes a shapefile and, for each feature, retrieves the X and Y values from its geometry. Each X and Y value is stored in a numpy array as a point [x_location, y_location], which is returned when all the features have been iterated through.
shape_uri - A uri to an OGR shapefile datasource
returns - A numpy array of points representing the geometries of the shape's features.
-
natcap.invest.wave_energy.wave_energy.
load_binary_wave_data
(wave_file_uri)¶ The load_binary_wave_data function converts a pickled WW3 text file into a dictionary whose keys are the corresponding (I,J) values and whose values are two-dimensional arrays representing a matrix of the number of hours a seastate occurs over a 5-year period. The row and column headers are extracted once and stored in the dictionary as well.
wave_file_uri - The path to a pickled binary WW3 file.
returns - A dictionary of matrices representing hours of specific seastates, as well as the period and height ranges. It has the following structure:

    {'periods': [1, 2, 3, 4, ...],
     'heights': [.5, 1.0, 1.5, ...],
     'bin_matrix': {
         (i0, j0): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...],
         (i1, j1): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...],
         ...
         (in, jn): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...]
     }
    }
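To make the structure concrete, here is a small hand-built dictionary in that shape and one way to total the observed hours at a single wave point. The values are illustrative, not real WW3 data:

```python
# Illustrative stand-in for the dictionary returned by
# load_binary_wave_data; the numbers are made up, not real WW3 data.
wave_data = {
    'periods': [1, 2, 3],
    'heights': [0.5, 1.0],
    'bin_matrix': {
        (0, 0): [[2, 5, 3], [6, 3, 4]],
        (0, 1): [[1, 0, 2], [4, 4, 1]],
    },
}

# Each row of a bin matrix corresponds to a wave height and each column to
# a period; summing every cell gives the total hours observed at that point.
total_hours = sum(sum(row) for row in wave_data['bin_matrix'][(0, 0)])
```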
-
natcap.invest.wave_energy.wave_energy.
pixel_size_based_on_coordinate_transform
(dataset_uri, coord_trans, point)¶ Get width and height of cell in meters.
Calculates the pixel width and height in meters given a coordinate transform and a reference point on the dataset that is close to the transform's projected coordinate system. This is only necessary if the dataset is not already in a meter coordinate system; for example, the dataset may be in lat/long (WGS84).
Parameters: - dataset_uri (string) – a String for a GDAL path on disk, projected in the form of lat/long decimal degrees
- coord_trans (osr.CoordinateTransformation) – an OSR coordinate transformation from dataset coordinate system to meters
- point (tuple) – a reference point close to the coordinate transform coordinate system. must be in the same coordinate system as dataset.
Returns: pixel_diff – a 2-tuple containing (pixel width in meters, pixel
height in meters)
Return type: tuple
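The underlying idea is simple: project the reference point and a point one pixel away, then difference the projected coordinates. A minimal sketch, using a plain callable in place of the real osr.CoordinateTransformation (the `transform` argument and `pixel_size_degrees` parameter are illustrative, not part of the actual signature):

```python
def pixel_size_in_meters(transform, point, pixel_size_degrees):
    """Sketch of the projected-pixel-size idea.

    transform - any callable mapping (lon, lat) to projected (x, y);
        stands in for an osr coordinate transformation.
    point - (lon, lat) reference point near the area of interest.
    pixel_size_degrees - (x_size, y_size) of one pixel in degrees.
    """
    x0, y0 = transform(point[0], point[1])
    x1, y1 = transform(point[0] + pixel_size_degrees[0],
                       point[1] + pixel_size_degrees[1])
    # The difference between the two projected points is the pixel size.
    return (x1 - x0, y1 - y0)

# Toy equirectangular transform: roughly 111,320 meters per degree.
meters_per_degree = 111320.0
toy_transform = lambda lon, lat: (lon * meters_per_degree,
                                  lat * meters_per_degree)
width, height = pixel_size_in_meters(
    toy_transform, (-124.0, 44.0), (0.001, -0.001))
```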
-
natcap.invest.wave_energy.wave_energy.
pixel_size_helper
(shape_path, coord_trans, coord_trans_opposite, ds_uri)¶ This function helps retrieve the pixel sizes of the global DEM when given an area of interest that has a certain projection.
shape_path - A uri to a point shapefile datasource indicating where in the world we are interested in
coord_trans - A coordinate transformation
coord_trans_opposite - A coordinate transformation that transforms in the opposite direction of 'coord_trans'
ds_uri - A uri to a gdal dataset to get the pixel size from
returns - A tuple of the x and y pixel sizes of the global DEM, given in the units of what 'shape' is projected in
-
natcap.invest.wave_energy.wave_energy.
wave_energy_interp
(wave_data, machine_perf)¶ Generates a matrix representing the interpolation of the machine performance table using new ranges from wave watch data.
wave_data - A dictionary holding the new x range (period) and y range (height) values for the interpolation. The dictionary has the following structure:

    {'periods': [1, 2, 3, 4, ...],
     'heights': [.5, 1.0, 1.5, ...],
     'bin_matrix': {
         (i0, j0): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...],
         (i1, j1): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...],
         ...
         (in, jn): [[2, 5, 3, 2, ...], [6, 3, 4, 1, ...], ...]
     }
    }

machine_perf - a dictionary that holds the machine performance information with the following keys and structure:

    machine_perf['periods'] - [1, 2, 3, ...]
    machine_perf['heights'] - [.5, 1, 1.5, ...]
    machine_perf['bin_matrix'] - [[1, 2, 3, ...], [5, 6, 7, ...], ...]

returns - The interpolated matrix
-
natcap.invest.wave_energy.wave_energy.
wave_power
(shape_uri)¶ Calculates the wave power from the fields in the shapefile and writes the wave power value to a field for the corresponding feature.
shape_uri - A uri to a Shapefile that has all the attributes represented in fields to calculate wave power at a specific wave farm
returns - Nothing
InVEST Wind Energy model
-
exception
natcap.invest.wind_energy.wind_energy.
FieldError
¶ Bases:
exceptions.Exception
A custom error message for fields that are missing
-
exception
natcap.invest.wind_energy.wind_energy.
TimePeriodError
¶ Bases:
exceptions.Exception
A custom error message for when the number of years does not match the number of years given in the price table
-
natcap.invest.wind_energy.wind_energy.
add_field_to_shape_given_list
(shape_ds_uri, value_list, field_name)¶ Adds a field to a given shapefile and fills it with values from a list. The list of values must be the same length as the number of features in the shapefile.
shape_ds_uri - a URI to an OGR datasource
value_list - a list of values that is the same length as there are features in 'shape_ds'
field_name - a String for the name of the new field
returns - nothing
-
natcap.invest.wind_energy.wind_energy.
calculate_distances_grid
(land_shape_uri, harvested_masked_uri, tmp_dist_final_uri)¶ Creates a distance transform raster from an OGR shapefile. The function first burns the features from ‘land_shape_uri’ onto a raster using ‘harvested_masked_uri’ as the base for that raster. It then does a distance transform from those locations and converts from pixel distances to distance in meters.
land_shape_uri - a URI to an OGR shapefile that has the desired features to get the distance from (required)
harvested_masked_uri - a URI to a GDAL raster that is used to get the proper extents and configuration for new rasters
tmp_dist_final_uri - a URI to a GDAL raster for the final distance transform raster output
returns - Nothing
-
natcap.invest.wind_energy.wind_energy.
calculate_distances_land_grid
(land_shape_uri, harvested_masked_uri, tmp_dist_final_uri)¶ Creates a distance transform raster based on the shortest distances of each point feature in 'land_shape_uri' and each feature's 'L2G' field.
land_shape_uri - a URI to an OGR shapefile that has the desired features to get the distance from (required)
harvested_masked_uri - a URI to a GDAL raster that is used to get the proper extents and configuration for new rasters
tmp_dist_final_uri - a URI to a GDAL raster for the final distance transform raster output
returns - Nothing
-
natcap.invest.wind_energy.wind_energy.
clip_and_reproject_raster
(raster_uri, aoi_uri, projected_uri)¶ Clip and project a Dataset to an area of interest
raster_uri - a URI to a gdal Dataset
aoi_uri - a URI to an ogr DataSource of geometry type polygon
projected_uri - a URI string for the output dataset to be written to disk
returns - nothing
-
natcap.invest.wind_energy.wind_energy.
clip_and_reproject_shapefile
(shapefile_uri, aoi_uri, projected_uri)¶ Clip and project a DataSource to an area of interest
shapefile_uri - a URI to an ogr Datasource
aoi_uri - a URI to an ogr DataSource of geometry type polygon
projected_uri - a URI string for the output shapefile to be written to disk
returns - nothing
-
natcap.invest.wind_energy.wind_energy.
clip_datasource
(aoi_uri, orig_ds_uri, output_uri)¶ Clip an OGR Datasource of geometry type polygon by another OGR Datasource geometry type polygon. The aoi should be a shapefile with a layer that has only one polygon feature
aoi_uri - a URI to an OGR Datasource that is the clipping bounding box
orig_ds_uri - a URI to an OGR Datasource to clip
output_uri - output uri path for the clipped datasource
returns - Nothing
-
natcap.invest.wind_energy.wind_energy.
combine_dictionaries
(dict_1, dict_2)¶ Add dict_2 to dict_1 and return the result as a new dictionary. Both dictionaries should be single level, with each key pointing to a value. If a key in 'dict_2' already exists in 'dict_1', it will be ignored.
dict_1 - a python dictionary, ex: {'ws_id': 1, 'vol': 65}
dict_2 - a python dictionary, ex: {'size': 11, 'area': 5}
returns - a python dictionary that is the combination of 'dict_1' and 'dict_2', ex: {'ws_id': 1, 'vol': 65, 'area': 5, 'size': 11}
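The merge behavior described above (keys already present in 'dict_1' win) can be sketched in a few lines of plain Python. This is an illustration of the documented semantics, not the package's implementation:

```python
def combine_dictionaries(dict_1, dict_2):
    # Start from a copy so neither input dictionary is mutated.
    combined = dict(dict_1)
    for key, value in dict_2.items():
        # A key already present in dict_1 keeps its value; the
        # dict_2 value for that key is ignored.
        if key not in combined:
            combined[key] = value
    return combined
```

For example, combining {'ws_id': 1, 'vol': 65} with {'size': 11, 'area': 5} yields {'ws_id': 1, 'vol': 65, 'size': 11, 'area': 5}.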
-
natcap.invest.wind_energy.wind_energy.
create_wind_farm_box
(spat_ref, start_point, x_len, y_len, out_uri)¶ Create an OGR shapefile where the geometry is a set of lines
spat_ref - a SpatialReference to use in creating the output shapefile (required)
start_point - a tuple of floats indicating the first vertex of the line (required)
x_len - an integer value for the length of the line segment in the X direction (required)
y_len - an integer value for the length of the line segment in the Y direction (required)
out_uri - a string representing the file path to disk for the new shapefile (required)
returns - nothing
-
natcap.invest.wind_energy.wind_energy.
execute
(args)¶ Wind Energy.
This module handles the execution of the wind energy model given the following dictionary:
Parameters: - workspace_dir (string) – a python string which is the uri path to where the outputs will be saved (required)
- wind_data_uri (string) – path to a CSV file with the following header: [‘LONG’,’LATI’,’LAM’, ‘K’, ‘REF’]. Each following row is a location with at least the Longitude, Latitude, Scale (‘LAM’), Shape (‘K’), and reference height (‘REF’) at which the data was collected (required)
- aoi_uri (string) – a uri to an OGR datasource that is of type polygon and projected in linear units of meters. The polygon specifies the area of interest for the wind data points. If limiting the wind farm bins by distance, then the aoi should also cover a portion of the land polygon that is of interest (optional for biophysical and no distance masking, required for biophysical and distance masking, required for valuation)
- bathymetry_uri (string) – a uri to a GDAL dataset that has the depth values of the area of interest (required)
- land_polygon_uri (string) – a uri to an OGR datasource of type polygon that provides a coastline for determining distances from wind farm bins. Enabled by AOI and required if wanting to mask by distances or run valuation
- global_wind_parameters_uri (string) – a uri to a CSV file that holds the global parameter values for both the biophysical and valuation modules (required)
- suffix (string) – a String to append to the end of the output files (optional)
- turbine_parameters_uri (string) – a uri to a CSV file that holds the turbines biophysical parameters as well as valuation parameters (required)
- number_of_turbines (int) – an integer value for the number of machines for the wind farm (required for valuation)
- min_depth (float) – a float value for the minimum depth for offshore wind farm installation (meters) (required)
- max_depth (float) – a float value for the maximum depth for offshore wind farm installation (meters) (required)
- min_distance (float) – a float value for the minimum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- max_distance (float) – a float value for the maximum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- valuation_container (boolean) – Indicates whether model includes valuation
- foundation_cost (float) – a float representing how much the foundation will cost for the specific type of turbine (required for valuation)
- discount_rate (float) – a float value for the discount rate (required for valuation)
- grid_points_uri (string) – a uri to a CSV file that specifies the landing and grid point locations (optional)
- avg_grid_distance (float) – a float for the average distance in kilometers from a grid connection point to a land connection point (required for valuation if grid connection points are not provided)
- price_table (boolean) – a bool indicating whether to use the wind energy price table or not (required)
- wind_schedule (string) – a URI to a CSV file for the yearly prices of wind energy for the lifespan of the farm (required if ‘price_table’ is true)
- wind_price (float) – a float for the wind energy price at year 0 (required if price_table is false)
- rate_change (float) – a float as a percent for the annual rate of change in the price of wind energy. (required if price_table is false)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wind_data_uri': 'path/to/file',
    'aoi_uri': 'path/to/shapefile',
    'bathymetry_uri': 'path/to/raster',
    'land_polygon_uri': 'path/to/shapefile',
    'global_wind_parameters_uri': 'path/to/csv',
    'suffix': '_results',
    'turbine_parameters_uri': 'path/to/csv',
    'number_of_turbines': 10,
    'min_depth': 3,
    'max_depth': 60,
    'min_distance': 0,
    'max_distance': 200000,
    'valuation_container': True,
    'foundation_cost': 3.4,
    'discount_rate': 7.0,
    'grid_points_uri': 'path/to/csv',
    'avg_grid_distance': 4,
    'price_table': True,
    'wind_schedule': 'path/to/csv',
    'wind_price': 0.4,
    'rate_change': 0.0,
}
Returns: None
-
natcap.invest.wind_energy.wind_energy.
get_highest_harvested_geom
(wind_points_uri)¶ Find the point with the highest harvested value for wind energy and return its geometry
wind_points_uri - a URI to an OGR Datasource of a point geometry shapefile for wind energy
returns - the geometry of the point with the highest harvested value
-
natcap.invest.wind_energy.wind_energy.
mask_by_distance
(dataset_uri, min_dist, max_dist, out_nodata, dist_uri, mask_uri)¶ Given a raster whose pixels are distances, bound them by a minimum and maximum distance
dataset_uri - a URI to a GDAL raster with distance values
min_dist - an integer of the minimum distance allowed in meters
max_dist - an integer of the maximum distance allowed in meters
mask_uri - the URI output of the raster masked by distance values
dist_uri - the URI output of the raster converted from distance transform ranks to distance values in meters
out_nodata - the nodata value of the raster
returns - nothing
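The core masking rule is a clamp to nodata. A sketch of that rule on a flat list of pixel values (the real function operates on GDAL rasters and also writes the converted distance raster to dist_uri):

```python
def mask_values_by_distance(distances, min_dist, max_dist, out_nodata):
    # Keep distances inside [min_dist, max_dist]; everything outside the
    # bounds becomes the nodata value.
    return [d if min_dist <= d <= max_dist else out_nodata
            for d in distances]
```

For example, with min_dist=10, max_dist=100 and out_nodata=-1.0, the values [0, 50, 150, 99] become [-1.0, 50, -1.0, 99].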
-
natcap.invest.wind_energy.wind_energy.
pixel_size_based_on_coordinate_transform_uri
(dataset_uri, coord_trans, point)¶ Get width and height of cell in meters.
A wrapper for pixel_size_based_on_coordinate_transform that takes a dataset uri as an input and opens it before sending it along.
Parameters: - dataset_uri (string) – a URI to a gdal dataset
- other parameters pass along (All) –
Returns: result – (pixel_width_meters, pixel_height_meters)
Return type: tuple
-
natcap.invest.wind_energy.wind_energy.
point_to_polygon_distance
(poly_ds_uri, point_ds_uri)¶ Calculates the distances from points in a point geometry shapefile to the nearest polygon from a polygon shapefile. Both datasources must be projected in meters
poly_ds_uri - a URI to an OGR polygon geometry datasource projected in meters
point_ds_uri - a URI to an OGR point geometry datasource projected in meters
returns - a list of the distances from each point
-
natcap.invest.wind_energy.wind_energy.
read_csv_wind_data
(wind_data_uri, hub_height)¶ Unpack the csv wind data into a dictionary.
Parameters: - wind_data_uri (string) – a path for the csv wind data file with header of: “LONG”,”LATI”,”LAM”,”K”,”REF”
- hub_height (int) – the hub height to use for calculating Weibull parameters and wind energy values
Returns: A dictionary where the keys are lat/long tuples which point to dictionaries that hold wind data at that location.
-
natcap.invest.wind_energy.wind_energy.
read_csv_wind_parameters
(csv_uri, parameter_list)¶ Construct a dictionary from a csv file given a list of keys in ‘parameter_list’. The list of keys corresponds to the parameters names in ‘csv_uri’ which are represented in the first column of the file.
csv_uri - a URI to a CSV file where every row is a parameter with the parameter name in the first column followed by the value in the second column
parameter_list - a List of Strings that represent the parameter names to be found in 'csv_uri'. These Strings will be the keys in the returned dictionary
returns - a Dictionary where the 'parameter_list' Strings are the keys that have values pulled from 'csv_uri'
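A sketch of that first-column-is-key convention using the stdlib csv module, reading from a string for the example. The real function takes a file uri, and details such as case handling or numeric conversion may differ; note this sketch leaves the values as strings:

```python
import csv
import io

def read_wind_parameters(csv_text, parameter_list):
    # Each row: parameter name in column one, value in column two.
    # Only names listed in parameter_list are kept; values stay strings.
    wanted = set(parameter_list)
    result = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2 and row[0] in wanted:
            result[row[0]] = row[1]
    return result

params = read_wind_parameters(
    'air_density,1.225\nexponent_power_curve,2\nunused,9\n',
    ['air_density', 'exponent_power_curve'])
```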
-
natcap.invest.wind_energy.wind_energy.
wind_data_to_point_shape
(dict_data, layer_name, output_uri)¶ Given a dictionary of the wind data create a point shapefile that represents this data
dict_data - a python dictionary with the wind data, where the keys are tuples of the lat/long coordinates:

    {
        (97, 43): {'LATI': 97, 'LONG': 43, 'LAM': 6.3, 'K': 2.7, 'REF': 10},
        (55, 51): {'LATI': 55, 'LONG': 51, 'LAM': 6.2, 'K': 2.4, 'REF': 10},
        (73, 47): {'LATI': 73, 'LONG': 47, 'LAM': 6.5, 'K': 2.3, 'REF': 10}
    }

layer_name - a python string for the name of the layer
output_uri - a uri for the output destination of the shapefile
returns - nothing
-
class
natcap.invest.fileio.
CSVDriver
(uri, fieldnames=None)¶ Bases:
natcap.invest.fileio.TableDriverTemplate
The CSVDriver class is a subclass of TableDriverTemplate.
-
get_fieldnames
()¶
-
get_file_object
(uri=None)¶
-
read_table
()¶
-
write_table
(table_list, uri=None, fieldnames=None)¶
-
-
exception
natcap.invest.fileio.
ColumnMissingFromTable
¶ Bases:
exceptions.KeyError
A custom exception for when a key is missing from a table. More descriptive than just throwing a KeyError. This class inherits the KeyError exception, so any existing exception handling should still work properly.
-
class
natcap.invest.fileio.
DBFDriver
(uri, fieldnames=None)¶ Bases:
natcap.invest.fileio.TableDriverTemplate
The DBFDriver class is a subclass of TableDriverTemplate.
-
get_fieldnames
()¶ Return a list of strings containing the fieldnames.
-
get_file_object
(uri=None, read_only=True)¶ Return the library-specific file object by using the input uri. If uri is None, use self.uri.
-
read_table
()¶ Return the table object with data built from the table using the file-specific package as necessary. Should return a list of dictionaries.
-
write_table
(table_list, uri=None, fieldnames=None)¶ Take the table_list input and write its contents to the appropriate URI. If uri == None, write the file to self.uri. Otherwise, write the table to uri (which may be a new file). If fieldnames == None, assume that the default fieldnames order will be used.
-
-
class
natcap.invest.fileio.
TableDriverTemplate
(uri, fieldnames=None)¶ Bases:
object
The TableDriverTemplate classes provide a uniform, simple way to interact with specific tabular libraries. This allows us to interact with multiple filetypes in exactly the same way and in a uniform syntax. By extension, this also allows us to read and write to and from any desired table format as long as the appropriate TableDriver class has been implemented.
These driver classes exist for convenience, and though they can be accessed directly by the user, these classes provide only the most basic functionality. Other classes, such as the TableHandler class, use these drivers to provide a convenient layer of functionality to the end-user.
This class is merely a template to be subclassed for use with appropriate table filetype drivers. Instantiating this object will yield a functional object, but it won’t actually get you any relevant results.
-
get_fieldnames
()¶ Return a list of strings containing the fieldnames.
-
get_file_object
(uri=None)¶ Return the library-specific file object by using the input uri. If uri is None, use self.uri.
-
read_table
()¶ Return the table object with data built from the table using the file-specific package as necessary. Should return a list of dictionaries.
-
write_table
(table_list, uri=None, fieldnames=None)¶ Take the table_list input and write its contents to the appropriate URI. If uri == None, write the file to self.uri. Otherwise, write the table to uri (which may be a new file). If fieldnames == None, assume that the default fieldnames order will be used.
-
-
class
natcap.invest.fileio.
TableHandler
(uri, fieldnames=None)¶ Bases:
object
-
__iter__
()¶ Allow this handler object’s table to be iterated through. Returns an iterable version of self.table.
-
create_column
(column_name, position=None, default_value=0)¶ Create a new column in the internal table object with the name column_name. If position == None, it will be appended to the end of the fieldnames. Otherwise, the column will be inserted at index position. This function will also loop through the entire table object and create an entry with the default value of default_value.
Note that it’s up to the driver to actually add the field to the file on disk.
Returns nothing
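A list-of-dictionaries sketch of that behavior (the real handler also tracks its driver and the file on disk; this standalone version takes the table and fieldnames explicitly):

```python
def create_column(table, fieldnames, column_name, position=None,
                  default_value=0):
    # Insert the new name into the fieldnames list: append when no
    # position is given, otherwise insert at that index.
    if position is None:
        fieldnames.append(column_name)
    else:
        fieldnames.insert(position, column_name)
    # Give every existing row the default value for the new column.
    for row in table:
        row[column_name] = default_value

rows = [{'id': 1}, {'id': 2}]
names = ['id']
create_column(rows, names, 'score', default_value=0)
```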
-
find_driver
(uri, fieldnames=None)¶ Locate the driver needed for uri. Returns a driver object as documented by self.driver_types.
-
get_fieldnames
(case='lower')¶ Returns a python list of the original fieldnames, true to their original case.
- case=’lower’ - a python string representing the desired status of the
- fieldnames. ‘lower’ for lower case, ‘orig’ for original case.
returns a python list of strings.
-
get_map
(key_field, value_field)¶ Returns a python dictionary mapping values contained in key_field to values contained in value_field. If duplicate keys are found, they are overwritten in the output dictionary.
This is implemented as a dictionary comprehension on top of self.get_table_list(), so there shouldn’t be a need to reimplement this for each subclass of AbstractTableHandler.
If the table list has not been retrieved, it is retrieved before generating the map.
key_field - a python string.
value_field - a python string.
returns a python dictionary mapping key_fields to value_fields.
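Since the table is a list of row dictionaries, the mapping really is a one-line dictionary comprehension. A standalone sketch that takes the table explicitly rather than reading it from the handler:

```python
def get_map(table, key_field, value_field):
    # Later rows overwrite earlier ones, matching the documented
    # "duplicate keys are overwritten" behavior.
    return {row[key_field]: row[value_field] for row in table}

table = [
    {'lucode': 0, 'c_above': 32.8},
    {'lucode': 16, 'c_above': 28.1},
    {'lucode': 16, 'c_above': 30.0},  # duplicate key: this row wins
]
carbon_map = get_map(table, 'lucode', 'c_above')
```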
-
get_table
()¶ Return the table list object.
-
get_table_dictionary
(key_field, include_key=True)¶ Returns a python dictionary mapping a key value to all values in that particular row dictionary (including the key field). If duplicate keys are found, they are overwritten in the output dictionary.
key_field - a python string of the desired field value to be used as the key for the returned dictionary.
include_key=True - a python boolean indicating whether the key_field provided should be included in each row_dictionary.
returns a python dictionary of dictionaries.
-
get_table_row
(key_field, key_value)¶ Return the first full row where the value of key_field is equivalent to key_value. Raises a KeyError if key_field does not exist.
key_field - a python string.
key_value - a value of appropriate type for this field.
returns a python dictionary of the row, or None if the row does not exist.
-
set_field_mask
(regexp=None, trim=0, trim_place='front')¶ Set a mask for the table’s self.fieldnames. Any fieldnames that match regexp will have trim number of characters stripped off the front.
regexp=None - a python string or None. If a python string, this will be a regular expression. If None, this represents no regular expression.
trim - a python int.
trim_place - a string, either 'front' or 'back'. Indicates where the trim should take place.

Returns nothing.
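A sketch of the masking rule applied to a plain list of fieldnames (the handler applies this to its own self.fieldnames in place; edge cases such as trim exceeding the name length may differ):

```python
import re

def mask_fieldnames(fieldnames, regexp=None, trim=0, trim_place='front'):
    masked = []
    for name in fieldnames:
        # Only names matching the regular expression are trimmed.
        if regexp is not None and re.match(regexp, name):
            if trim_place == 'front':
                name = name[trim:]
            else:  # 'back'
                name = name[:len(name) - trim]
        masked.append(name)
    return masked

masked = mask_fieldnames(['hab_area', 'hab_depth', 'id'],
                         regexp='hab_.*', trim=4)
```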
-
write_table
(table=None, uri=None)¶ Invoke the driver to save the table to disk. If table == None, self.table will be written, otherwise, the list of dictionaries passed in to table will be written. If uri is None, the table will be written to the table’s original uri, otherwise, the table object will be written to uri.
-
-
natcap.invest.fileio.
get_free_space
(folder='/', unit='auto')¶ Get the free space on the drive/folder marked by folder. Returns a float of unit unit.
- folder - (optional) a string uri to a folder or drive on disk. Defaults
- to ‘/’ (‘C:’ on Windows’)
- unit - (optional) a string, one of [‘B’, ‘MB’, ‘GB’, ‘TB’, ‘auto’]. If
- ‘auto’, the unit returned will be automatically calculated based on available space. Defaults to ‘auto’.
returns a string marking the space free and the selected unit. Number is rounded to two decimal places.’
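On modern Python the same idea can be sketched with shutil.disk_usage (the shipped implementation predates that API, and its unit-selection details may differ):

```python
import shutil

def get_free_space(folder='/', unit='auto'):
    # Binary unit factors; 'auto' picks the largest unit with a value >= 1.
    factors = {'B': 1, 'MB': 1024 ** 2, 'GB': 1024 ** 3, 'TB': 1024 ** 4}
    free_bytes = shutil.disk_usage(folder).free
    if unit == 'auto':
        for unit in ('TB', 'GB', 'MB', 'B'):
            if free_bytes >= factors[unit]:
                break
    # Round to two decimal places and append the unit, per the docstring.
    return '%.2f %s' % (free_bytes / float(factors[unit]), unit)

free_readout = get_free_space('/', 'auto')
```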
InVEST Carbon Edge Effect Model: an implementation of the model described in 'Degradation in carbon stocks near tropical forest edges' by Chaplin-Kramer et al. (in review)
-
natcap.invest.forest_carbon_edge_effect.
execute
(args)¶ Forest Carbon Edge Effect.
InVEST Carbon Edge Model calculates the carbon due to edge effects in tropical forest pixels.
Parameters: - args['workspace_dir'] (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['n_nearest_model_points'] (int) – number of nearest neighbor model points to search for
- args['aoi_uri'] (string) – (optional) if present, a path to a shapefile that will be used to aggregate carbon stock results at the end of the run.
- args['biophysical_table_uri'] (string) – a path to a CSV table that has at least the fields 'lucode' and 'c_above'. If args['compute_forest_edge_effects'] == True, the table must also contain an 'is_tropical_forest' field. If args['pools_to_calculate'] == 'all', this table must contain the fields 'c_below', 'c_dead', and 'c_soil'.
  - lucode: an integer that corresponds to landcover codes in the raster args['lulc_uri']
  - is_tropical_forest: either 0 or 1 indicating whether the landcover type is forest (1) or not (0). If 1, the value in c_above is ignored and instead calculated from the edge regression model.
  - c_above: floating point number indicating tons of above ground carbon per hectare for that landcover type
  - c_below, c_dead, c_soil: three other optional carbon pools that will statically map landcover types to the carbon densities in the table.
Example:
lucode,is_tropical_forest,c_above,c_soil,c_dead,c_below
0,0,32.8,5,5.2,2.1
1,1,n/a,2.5,0.0,0.0
2,1,n/a,1.8,1.0,0.0
16,0,28.1,4.3,0.0,2.0
Note the "n/a" values in c_above are optional since that field is ignored when is_tropical_forest==1.
- args['lulc_uri'] (string) – path to an integer landcover code raster
- args['pools_to_calculate'] (string) – one of “all” or “above_ground”. If “all” model expects ‘c_above’, ‘c_below’, ‘c_dead’, ‘c_soil’ in header of biophysical_table and will make a translated carbon map for each based off the landcover map. If “above_ground”, this is only done with ‘c_above’.
- args['compute_forest_edge_effects'] (boolean) – if True, requires the biophysical table to have an 'is_tropical_forest' field, and any landcover codes that have a 1 in this column calculate carbon stocks using the Chaplin-Kramer et al. method and ignore 'c_above'.
- args['tropical_forest_edge_carbon_model_shape_uri'] (string) –
path to a shapefile that defines the regions for the local carbon edge models. Has at least the fields ‘method’, ‘theta1’, ‘theta2’, ‘theta3’. Where ‘method’ is an int between 1..3 describing the biomass regression model, and the thetas are floating point numbers that have different meanings depending on the ‘method’ parameter. Specifically,
- method 1 (asymptotic model):
  biomass = theta1 - theta2 * exp(-theta3 * edge_dist_km)
- method 2 (logarithmic model):
  # NOTE: theta3 is ignored for this method
  biomass = theta1 + theta2 * numpy.log(edge_dist_km)
- method 3 (linear regression):
  biomass = theta1 + theta2 * edge_dist_km
- args['biomass_to_carbon_conversion_factor'] (string/float) – Number by which to multiply forest biomass to convert to carbon in the edge effect calculation.
Returns: None
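The three regression forms above can be written out directly. A sketch, where the `edge_biomass` helper name is hypothetical (the model applies these per pixel, with the thetas taken from the regression shapefile):

```python
import math

def edge_biomass(method, theta1, theta2, theta3, edge_dist_km):
    # Dispatch on the 'method' code carried by the regression shapefile.
    if method == 1:    # asymptotic model
        return theta1 - theta2 * math.exp(-theta3 * edge_dist_km)
    elif method == 2:  # logarithmic model; theta3 is ignored
        return theta1 + theta2 * math.log(edge_dist_km)
    elif method == 3:  # linear regression
        return theta1 + theta2 * edge_dist_km
    raise ValueError('unknown regression method: %r' % method)
```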
GLOBIO InVEST Model
-
natcap.invest.globio.
execute
(args)¶ GLOBIO.
The model operates in two modes. Mode (a) generates a landcover map based on a base landcover map and information about crop yields, infrastructure, and more. Mode (b) assumes the globio landcover map is generated. These modes are used below to describe input parameters.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['predefined_globio'] (boolean) – if True then “mode (b)” else “mode (a)”
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['lulc_uri'] (string) – used in “mode (a)” path to a base landcover map with integer codes
- args['lulc_to_globio_table_uri'] (string) –
used in “mode (a)” path to table that translates the land-cover args[‘lulc_uri’] to intermediate GLOBIO classes, from which they will be further differentiated using the additional data in the model. Contains at least the following fields:
- ‘lucode’: Land use and land cover class code of the dataset used. LULC codes match the ‘values’ column in the LULC raster of mode (b) and must be numeric and unique.
- ‘globio_lucode’: The LULC code corresponding to the GLOBIO class to which it should be converted, using intermediate codes described in the example below.
- args['infrastructure_dir'] (string) – used in "mode (a) and (b)"; a path to a folder containing maps of either gdal compatible rasters or OGR compatible shapefiles. These data will be used in the infrastructure calculation of MSA.
- args['pasture_uri'] (string) – used in “mode (a)” path to pasture raster
- args['potential_vegetation_uri'] (string) – used in “mode (a)” path to potential vegetation raster
- args['pasture_threshold'] (float) – used in “mode (a)”
- args['intensification_fraction'] (float) – used in “mode (a)”; a value between 0 and 1 denoting proportion of total agriculture that should be classified as ‘high input’
- args['primary_threshold'] (float) – used in “mode (a)”
- args['msa_parameters_uri'] (string) – path to MSA classification parameters
- args['aoi_uri'] (string) – (optional) if it exists then final MSA raster is summarized by AOI
- args['globio_lulc_uri'] (string) – used in “mode (b)” path to predefined globio raster.
Returns: None
-
natcap.invest.globio.
load_msa_parameter_table
(msa_parameter_table_filename, intensification_fraction)¶ Loads a specifically formatted parameter table into a dictionary that can be used to dynamically define the MSA ranges.
Parameters: - msa_parameter_table_filename (string) – path to msa csv table
- intensification_fraction (float) – a number between 0 and 1 indicating what level between msa_lu 8 and 9 to define the general GLOBIO code “12” to.
Returns: a dictionary of the form:

    {
        'msa_f': {
            valuea: msa_f_value, ...
            valueb: ...
            '<': (bound, msa_f_value),
            '>': (bound, msa_f_value)},
        'msa_i_other_table': {
            valuea: msa_i_value, ...
            valueb: ...
            '<': (bound, msa_i_other_value),
            '>': (bound, msa_i_other_value)},
        'msa_i_primary': {
            valuea: msa_i_primary_value, ...
            valueb: ...
            '<': (bound, msa_i_primary_value),
            '>': (bound, msa_i_primary_value)},
        'msa_lu': {
            valuea: msa_lu_value, ...
            valueb: ...
            '<': (bound, msa_lu_value),
            '>': (bound, msa_lu_value),
            12: msa_lu_8 * (1.0 - intensification_fraction) +
                msa_lu_9 * intensification_fraction}
    }
-
natcap.invest.globio.
make_gaussian_kernel_uri
(sigma, kernel_uri)¶ Create a Gaussian kernel raster.
Habitat suitability model.
-
natcap.invest.habitat_suitability.
execute
(args)¶ Habitat Suitability.
Calculate habitat suitability indexes given biophysical parameters.
The objective of a habitat suitability index (HSI) is to help users identify areas within their AOI that would be most suitable for habitat restoration. The output is a gridded map of the user’s AOI in which each grid cell is assigned a suitability rank between 0 (not suitable) and 1 (most suitable). The suitability rank is generally calculated as the weighted geometric mean of several individual input criteria, which have also been ranked by suitability from 0-1. Habitat types (e.g. marsh, mangrove, coral, etc.) are treated separately, and each habitat type will have a unique set of relevant input criteria and a resultant habitat suitability map.
Parameters: - args['workspace_dir'] (string) – directory path to workspace directory for output files.
- args['results_suffix'] (string) – (optional) string to append to any output file names.
- args['aoi_path'] (string) – file path to an area of interest shapefile.
- args['exclusion_path_list'] (list) – (optional) a list of file paths to shapefiles defining areas in which the HSI should be masked out in the final output.
- args['output_cell_size'] (float) – (optional) size of output cells. If not present, the output size will snap to the smallest cell size in the HSI range rasters.
- args['habitat_threshold'] (float) – a value to threshold the habitat score values to 0 and 1.
- args['hsi_ranges'] (dict) –
a dictionary that describes the habitat biophysical base rasters as well as the ranges for optimal and tolerable values. Each biophysical value has a unique key in the dictionary that is used to name the mapping of biophysical to local HSI value. Each value is a dictionary with keys:
- ‘raster_path’: path to disk for biophysical raster.
- ‘range’: a 4-tuple in non-decreasing order describing the “tolerable” to “optimal” ranges for those biophysical values. The endpoints non-inclusively define where the suitability score is 0.0, the two midpoints inclusively define the range where the suitability is 1.0, and the ranges above and below are linearly interpolated between 0.0 and 1.0.
Example:
{
    'depth': {
        'raster_path': r'C:/path/to/depth.tif',
        'range': (-50, -30, -10, -10),
    },
    'temperature': {
        'temperature_path': (
            r'C:/path/to/temperature.tif'),
        'range': (5, 7, 12.5, 16),
    }
}
Returns: None
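The interpolation rule for the ‘range’ 4-tuple described above can be sketched as a small piecewise-linear function (suitability is a hypothetical helper written for illustration, not part of the natcap.invest API):

```python
def suitability(value, range_tuple):
    """Piecewise-linear suitability from a non-decreasing 4-tuple
    (a, b, c, d): 0.0 at or outside the endpoints a and d, 1.0 on
    [b, c], and linear ramps on (a, b) and (c, d)."""
    a, b, c, d = range_tuple
    if value <= a or value >= d:
        return 0.0
    if b <= value <= c:
        return 1.0
    if value < b:
        return float(value - a) / (b - a)  # rising ramp from tolerable to optimal
    return float(d - value) / (d - c)      # falling ramp from optimal to tolerable

# For the 'depth' example above, a depth of -40 sits halfway up the
# tolerable-to-optimal ramp.
score = suitability(-40, (-50, -30, -10, -10))
```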
This is a collection of postprocessing functions that are useful for some of the InVEST models.
-
natcap.invest.postprocessing.
plot_flow_direction
(flow_dataset_uri, output_uri)¶ Generates a quiver plot (arrows on a grid) of a flow matrix
- Inputs:
- flow_dataset_uri: a uri to a GDAL-compatible raster whose values are radians indicating the direction of outward flow.
- output_uri: the location on disk to save the resulting plot PNG file.
Returns nothing.
Scenario Generation: Proximity Based
-
natcap.invest.scenario_gen_proximity.
execute
(args)¶ Scenario Generator: Proximity-Based.
Main entry point for proximity based scenario generator model.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output files
- args['base_lulc_uri'] (string) – path to the base landcover map
- args['replacment_lucode'] (string or int) – code to replace when converting pixels
- args['area_to_convert'] (string or float) – max area (Ha) to convert
- args['focal_landcover_codes'] (string) – a space separated string of landcover codes that are used to determine the proximity when referring to “towards” or “away” from the base landcover codes
- args['convertible_landcover_codes'] (string) – a space separated string of landcover codes that can be converted in the generation phase found in args[‘base_lulc_uri’].
- args['n_fragmentation_steps'] (string) – an int as a string indicating the number of steps to take for the fragmentation conversion
- args['aoi_uri'] (string) – (optional) path to a shapefile that indicates an area of interest. If present, the expansion scenario operates only under that AOI and the output raster is clipped to that shape.
- args['convert_farthest_from_edge'] (boolean) – if True will run the conversion simulation starting from the farthest pixel from the edge and work inwards. Workspace will contain output files named ‘toward_base{suffix}.{tif,csv}’.
- args['convert_nearest_to_edge'] (boolean) – if True will run the conversion simulation starting from the nearest pixel on the edge and work inwards. Workspace will contain output files named ‘toward_base{suffix}.{tif,csv}’.
Returns: None.
InVEST Sediment Delivery Ratio (SDR) module.
- The SDR method in this model is based on:
- Winchell, M. F., et al. “Extension and validation of a geographic information system-based method for calculating the Revised Universal Soil Loss Equation length-slope factor for erosion risk assessments in large watersheds.” Journal of Soil and Water Conservation 63.3 (2008): 105-111.
-
natcap.invest.sdr.
execute
(args)¶ Sediment Delivery Ratio.
This function calculates the sediment export and retention of a landscape using the sediment delivery ratio model described in the InVEST user’s guide.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output file names
- args['dem_path'] (string) – path to a digital elevation raster
- args['erosivity_path'] (string) – path to rainfall erosivity index raster
- args['erodibility_path'] (string) – a path to soil erodibility raster
- args['lulc_path'] (string) – path to land use/land cover raster
- args['watersheds_path'] (string) – path to vector of the watersheds
- args['biophysical_table_path'] (string) – path to a CSV file with biophysical information for each land use class; must contain the fields ‘usle_c’ and ‘usle_p’
- args['threshold_flow_accumulation'] (number) – number of upstream pixels on the dem to threshold to a stream.
- args['k_param'] (number) – k calibration parameter
- args['sdr_max'] (number) – maximum value of the SDR
- args['ic_0_param'] (number) – ic_0 calibration parameter
- args['drainage_path'] (string) – (optional) path to drainage raster that is used to add additional drainage areas to the internally calculated stream layer
Returns: None.
InVEST specific code utils.
-
natcap.invest.utils.
build_file_registry
(base_file_path_list, file_suffix)¶ Combine file suffixes with key names, base filenames, and directories.
Parameters: - base_file_tuple_list (list) – a list of (dict, path) tuples where the dictionaries have a ‘file_key’: ‘basefilename’ pair, or ‘file_key’: list of ‘basefilename’s. ‘path’ indicates the file directory path to prepend to the basefile name.
- file_suffix (string) – a string to append to every filename, can be empty string
Returns: dictionary of ‘file_keys’ from the dictionaries in base_file_tuple_list mapping to full file paths with suffixes or lists of file paths with suffixes depending on the original type of the ‘basefilename’ pair.
Raises: ValueError if there are duplicate file keys or duplicate file paths.
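The registry construction described above can be sketched as follows. This is a simplified re-implementation for illustration; in particular, inserting the suffix before the file extension is an assumption about the real function's behavior:

```python
import os

def build_file_registry(base_file_tuple_list, file_suffix):
    """Sketch of the registry construction described above: join each
    base filename with its directory and insert file_suffix before the
    extension.  Raises ValueError on duplicate keys or paths."""
    registry = {}
    paths_seen = set()

    def _full_path(directory, basename):
        root, ext = os.path.splitext(basename)
        return os.path.join(directory, root + file_suffix + ext)

    for base_dict, directory in base_file_tuple_list:
        for key, basename in base_dict.items():
            if key in registry:
                raise ValueError('duplicate file key: %s' % key)
            if isinstance(basename, list):
                value = [_full_path(directory, name) for name in basename]
            else:
                value = _full_path(directory, basename)
            flat = value if isinstance(value, list) else [value]
            if any(p in paths_seen for p in flat):
                raise ValueError('duplicate file path')
            paths_seen.update(flat)
            registry[key] = value
    return registry

# Maps 'dem' to the suffixed path under the 'out' directory.
registry = build_file_registry([({'dem': 'dem.tif'}, 'out')], '_v1')
```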
-
natcap.invest.utils.
exponential_decay_kernel_raster
(expected_distance, kernel_filepath)¶ Create a raster-based exponential decay kernel.
The raster created will be a tiled GeoTiff, with 256x256 memory blocks.
Parameters: - expected_distance (int or float) – The distance (in pixels) of the kernel’s radius, the distance at which the value of the decay function is equal to 1/e.
- kernel_filepath (string) – The path to the file on disk where this kernel should be stored. If this file exists, it will be overwritten.
Returns: None
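The decay rule can be sketched in NumPy; the real function writes the kernel to a tiled GeoTiff, while this hypothetical helper just returns the array:

```python
import numpy as np

def exponential_decay_kernel(expected_distance):
    """Build a square kernel whose value at distance d (in pixels) from
    the center is exp(-d / expected_distance), so the value one
    expected_distance away from the center is exactly 1/e."""
    size = int(expected_distance) * 2 + 1
    center = size // 2
    rows, cols = np.indices((size, size))
    dist = np.hypot(rows - center, cols - center)
    return np.exp(-dist / expected_distance)
```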
-
natcap.invest.utils.
make_suffix_string
(args, suffix_key)¶ Make an InVEST appropriate suffix string.
Creates an InVEST appropriate suffix string given the args dictionary and suffix key. In general, prepends an ‘_’ when necessary and generates an empty string when necessary.
Parameters: - args (dict) – the classic InVEST model parameter dictionary that is passed to execute.
- suffix_key (string) – the key used to index the base suffix.
Returns: - If suffix_key is not in args, or args[suffix_key] is "", return "".
- If args[suffix_key] starts with ‘_’, return args[suffix_key]; else return ‘_’ + args[suffix_key].
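The suffix rule above amounts to a few lines; a sketch of equivalent behavior:

```python
def make_suffix_string(args, suffix_key):
    """Return '' when suffix_key is missing or empty, otherwise the
    suffix value with exactly one leading underscore."""
    suffix = args.get(suffix_key, '')
    if suffix and not suffix.startswith('_'):
        suffix = '_' + suffix
    return suffix
```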
init module for natcap.invest.
-
natcap.invest.
local_dir
(source_file)¶ Return the path to where source_file would be on disk.
If the application is frozen (as with PyInstaller), this will be the folder containing the executable. If not, it will be the directory name of the source_file passed in.
Ecosystem Service Analysis Tools¶
Coastal Protection Package¶
Coastal Vulnerability¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability.
execute
(args)¶ Coastal Vulnerability.
Parameters: - workspace_dir (string) – The path to the workspace directory on disk (required)
- aoi_uri (string) – Path to an OGR vector on disk representing the area of interest. (required)
- landmass_uri (string) – Path to an OGR vector on disk representing the global landmass. (required)
- bathymetry_uri (string) – Path to a GDAL raster on disk representing the bathymetry. Must overlap with the Area of Interest if provided. (optional)
- bathymetry_constant (int) – An int between 1 and 5 (inclusive). (optional)
- relief_uri (string) – Path to a GDAL raster on disk representing the elevation within the land polygon provided. (optional)
- relief_constant (int) – An int between 1 and 5 (inclusive). (optional)
- elevation_averaging_radius (int) – a positive int. The radius around which to compute the average elevation for relief. Must be in meters. (required)
- mean_sea_level_datum (int) – a positive int. This input is the elevation of the Mean Sea Level (MSL) datum relative to the datum of the bathymetry layer provided. The model transforms all depths to the MSL datum by subtracting the value provided from the bathymetry. This input can be used to run the model for a future sea-level rise scenario. Must be in meters. (required)
- cell_size (int) – Cell size in meters. The higher the value, the faster the computation, but the coarser the output rasters produced by the model. (required)
- depth_threshold (int) – Depth in meters (integer) cutoff to determine if fetch rays project over deep areas. (optional)
- exposure_proportion (float) – Minimum proportion of rays that project over exposed and/or deep areas needed to classify a shore segment as exposed. (required)
- geomorphology_uri (string) – An OGR-supported polygon vector file that has a field called “RANK” with values between 1 and 5 in the attribute table. (optional)
- geomorphology_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- habitats_directory_uri (string) – Directory containing OGR-supported polygon vectors associated with natural habitats. The name of these shapefiles should be suffixed with the ID that is specified in the natural habitats CSV file provided along with the habitats (optional)
- habitats_csv_uri (string) – A CSV file listing the attributes for each habitat. For more information, see the ‘Habitat Data Layer’ section in the model’s documentation. (required if args['habitat_directory_uri'] is provided)
- habitat_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- area_computed (string) – Determines whether the output data covers the entire coast or sheltered segments only. Either 'sheltered' or 'both'. (required)
- suffix (string) – A string that will be added to the end of the output file. (optional)
- climatic_forcing_uri (string) – An OGR-supported vector containing both wind and wave information across the region of interest. (optional)
- climatic_forcing_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- continental_shelf_uri (string) – An OGR-supported polygon vector delineating edges of the continental shelf. Default is global continental shelf shapefile. If omitted, the user can specify depth contour. See entry below. (optional)
- depth_contour (int) – Used to delineate shallow and deep areas. Continental limit is at about 150 meters. (optional)
- sea_level_rise_uri (string) – An OGR-supported point or polygon vector file whose features have a “Trend” field in the attribute table. (optional)
- sea_level_rise_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- structures_uri (string) – An OGR-supported vector file containing rigid structures, used to identify the portions of the coast that are armored. (optional)
- structures_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- population_uri (string) – A GDAL-supported raster file representing the population. (required)
- urban_center_threshold (int) – Minimum population required to consider shore segment a population center. (required)
- additional_layer_uri (string) – An OGR-supported vector file representing sea level rise; it will be used in the computation of coastal vulnerability and coastal vulnerability without habitat. (optional)
- additional_layer_constant (int) – Integer value between 1 and 5. If the file for this layer is omitted, all shore points for this layer are replaced with this constant rank value in the computation of the coastal vulnerability index. If both the file and the constant for the layer are omitted, the layer is skipped altogether. (optional)
- rays_per_sector (int) – Number of rays used to subsample the fetch distance within each of the 16 sectors. (required)
- max_fetch (int) – Maximum fetch distance computed by the model (>=60,000m). (optional)
- spread_radius (int) – Integer multiple of ‘cell size’. The coast from geomorphology layer could be of a better resolution than the global landmass, so the shores do not necessarily overlap. To make them coincide, the shore from the geomorphology layer is widened by 1 or more pixels. The value should be a multiple of ‘cell size’ that indicates how many pixels the coast from the geomorphology layer is widened. The widening happens on each side of the coast (n pixels landward, and n pixels seaward). (required)
- population_radius (int) – Radius length in meters used to count the number of people living close to the coast. (optional)
Note
If neither args['bathymetry_uri'] nor args['bathymetry_constant'] is provided, bathymetry is ignored altogether.
If neither args['relief_uri'] nor args['relief_constant'] is provided, relief is ignored altogether.
If neither args['geomorphology_uri'] nor args['geomorphology_constant'] is provided, geomorphology is ignored altogether.
If neither args['climatic_forcing_uri'] nor args['climatic_forcing_constant'] is provided, climatic forcing is ignored altogether.
If neither args['sea_level_rise_uri'] nor args['sea_level_rise_constant'] is provided, sea level rise is ignored altogether.
If neither args['structures_uri'] nor args['structures_constant'] is provided, structures are ignored altogether.
If neither args['additional_layer_uri'] nor args['additional_layer_constant'] is provided, the additional layer option is ignored altogether.
Example args:
args = {
    u'additional_layer_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'aoi_uri': u'CoastalProtection/Input/AOI_BarkClay.shp',
    u'area_computed': u'both',
    u'bathymetry_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'cell_size': 1000,
    u'climatic_forcing_uri': u'CoastalProtection/Input/WaveWatchIII.shp',
    u'continental_shelf_uri': u'CoastalProtection/Input/continentalShelf.shp',
    u'depth_contour': 150,
    u'depth_threshold': 0,
    u'elevation_averaging_radius': 5000,
    u'exposure_proportion': 0.8,
    u'geomorphology_uri': u'CoastalProtection/Input/Geomorphology_BarkClay.shp',
    u'habitats_csv_uri': u'CoastalProtection/Input/NaturalHabitat_WCVI.csv',
    u'habitats_directory_uri': u'CoastalProtection/Input/NaturalHabitat',
    u'landmass_uri': u'Base_Data/Marine/Land/global_polygon.shp',
    u'max_fetch': 12000,
    u'mean_sea_level_datum': 0,
    u'population_radius': 1000,
    u'population_uri': u'Base_Data/Marine/Population/global_pop/w001001.adf',
    u'rays_per_sector': 1,
    u'relief_uri': u'Base_Data/Marine/DEMs/claybark_dem/hdr.adf',
    u'sea_level_rise_uri': u'CoastalProtection/Input/SeaLevRise_WCVI.shp',
    u'spread_radius': 250,
    u'structures_uri': u'CoastalProtection/Input/Structures_BarkClay.shp',
    u'urban_center_threshold': 5000,
    u'workspace_dir': u'coastal_vulnerability_workspace',
}
Returns: None
Coastal Vulnerability Core¶
Coastal vulnerability model core functions
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_dataset_ranks
(input_uri, output_uri)¶ Adjust the rank of a dataset’s first band using ‘adjust_layer_ranks’.
- Inputs:
- input_uri: dataset uri where values are 1, 2, 3, 4, or 5
- output_uri: new dataset with values adjusted by ‘adjust_layer_ranks’.
Returns output_uri.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_layer_ranks
(layer)¶ Adjust the rank of a layer in case there are fewer than 5 values.
- Inputs:
- layer: a float or int numpy array as extracted by ReadAsArray
that encodes the layer ranks (valued 1, 2, 3, 4, or 5).
- Output:
adjusted_layer: a numpy array of the same dimensions as the input array with rank values reassigned as follows:
- non-shore segments have a (no-data) value of zero (0)
- all segments have the same value: all are set to a rank of 3
- 2 different values: lower values are set to 3, 4 for the rest
- 3 values: 2, 3, and 4 by ascending level of vulnerability
- 4 values: 2, 3, 4, and 5 by ascending level of vulnerability
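The reassignment rules above can be sketched in NumPy (treating 5 distinct values as left unchanged, which is an assumption, as is the empty-shore guard):

```python
import numpy as np

def adjust_layer_ranks(layer):
    """Sketch of the rank reassignment described above: non-shore cells
    (value 0) keep the no-data value 0; shore ranks are remapped based
    on how many distinct values are present."""
    new_rank_table = {
        1: [3],
        2: [3, 4],
        3: [2, 3, 4],
        4: [2, 3, 4, 5],
        5: [1, 2, 3, 4, 5],  # assumed unchanged when all 5 ranks occur
    }
    adjusted = np.zeros_like(layer)
    values = list(np.unique(layer[layer > 0]))  # distinct shore ranks, ascending
    if values:
        for old, new in zip(values, new_rank_table[len(values)]):
            adjusted[layer == old] = new
    return adjusted
```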
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_raster_to_aoi
(in_dataset_uri, aoi_datasource_uri, cell_size, out_dataset_uri)¶ Adjust in_dataset_uri to match aoi_dataset_uri’s extents, cell size and projection.
- Inputs:
- in_dataset_uri: the uri of the dataset to adjust
- aoi_datasource_uri: uri to the aoi used to adjust in_dataset_uri
- cell_size: the output cell size
- out_dataset_uri: uri to the adjusted dataset
- Returns:
- out_dataset_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
adjust_shapefile_to_aoi
(data_uri, aoi_uri, output_uri, empty_raster_allowed=False)¶ Adjust the shapefile’s data to the aoi, i.e. reproject & clip data points.
- Inputs:
- data_uri: uri to the shapefile to adjust
- aoi_uri: uri to a single polygon shapefile
- base_path: directory where the intermediate files will be saved
- output_uri: dataset that is clipped and/or reprojected to the aoi if necessary.
- empty_raster_allowed: boolean flag that, if False (default), causes the function to break if output_uri is empty, or return an empty raster otherwise.
Returns: output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_csvs
(csv_list, out_uri)¶ Concatenate 3-row csv files created with tif2csv
- Inputs:
- csv_list: list of csv_uri strings
- Outputs:
- uri_output: the output uri of the concatenated csv
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_tifs_from_directory
(path='.', mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
aggregate_tifs_from_list
(uri_list, path, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
assign_sheltered_segments
(exposure_raster_uri, raster_uri, output_raster_uri)¶ Propagate values from ‘sources’ across a surface defined by ‘mask’ in a breadth-first-search manner.
- Inputs:
- exposure_raster_uri: URI to the GDAL dataset that we want to process
- mask: a numpy array where 1s define the area across which we want to propagate the values defined in ‘sources’.
- sources: a tuple as is returned by numpy.where(...) of coordinates of where to pick values in ‘raster_uri’ (a source). They are the values we want to propagate across the area defined by ‘mask’.
- output_raster_uri: URI to the GDAL dataset where we want to save the array once the values from source are propagated.
Returns: nothing.
The algorithm tries to spread the values pointed to by ‘sources’ to each of the 8 immediately adjacent pixels where mask == 1. Each source point is processed in sequence to ensure that values are propagated from the closest source point. If a connected component of 1s in ‘mask’ does not contain any source, its value remains unchanged in the output raster.
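The propagation described above is a standard multi-source breadth-first search. A simplified sketch on plain Python lists (the interface here is hypothetical; the real function reads and writes GDAL datasets):

```python
from collections import deque

def propagate_from_sources(mask, sources, values):
    """Spread each source value across the 8-connected region of 1s in
    `mask`.  `sources` is a list of (row, col) seeds and `values` the
    value carried by each seed.  Cells never reached stay None."""
    rows, cols = len(mask), len(mask[0])
    out = [[None] * cols for _ in range(rows)]
    queue = deque()
    for (r, c), v in zip(sources, values):
        out[r][c] = v
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and mask[nr][nc] == 1 and out[nr][nc] is None):
                    out[nr][nc] = out[r][c]  # inherit from nearest source
                    queue.append((nr, nc))
    return out
```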
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
cast_ray_fast
(direction, d_max)¶ March from the origin towards a direction until either land or a maximum distance is met.
- Inputs:
- origin: algorithm’s starting point – has to be on sea
- direction: marching direction
- d_max: maximum distance to traverse
- raster: land mass raster
Returns the distance to the origin.
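The marching loop can be sketched as follows (a simplified, hypothetical interface: the real function works on raster data and precomputed offsets):

```python
import math

def cast_ray(origin, direction, d_max, land, cell_size):
    """Step from `origin` (row, col) along `direction` (radians,
    0 = east, positive = counterclockwise) until land (value 1), the
    raster edge, or the maximum distance d_max (meters) is reached.
    Returns the traversed distance in meters."""
    rows, cols = len(land), len(land[0])
    row, col = origin
    # Row indices grow downward, so north is a negative row step.
    dr, dc = -math.sin(direction), math.cos(direction)
    max_steps = int(d_max / cell_size)
    step = 0
    while step < max_steps:
        step += 1
        r = int(round(row + step * dr))
        c = int(round(col + step * dc))
        if not (0 <= r < rows and 0 <= c < cols) or land[r][c] == 1:
            break
    return step * cell_size
```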
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
clip_datasource
(aoi_ds, orig_ds, output_uri)¶ Clip a polygon OGR Datasource with another polygon OGR Datasource. The aoi_ds should be a shapefile with a layer that has only one polygon feature.
- aoi_ds: an OGR Datasource that is the clipping bounding box
- orig_ds: an OGR Datasource to clip
- out_uri: output uri path for the clipped datasource
Returns a clipped OGR Datasource.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
combined_rank
(R_k)¶ Compute the combined habitats ranks as described in equation (3)
- Inputs:
- R_k: the list of ranks
- Output:
- R_hab as described in the user guide’s equation 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_additional_layer
(args)¶ Compute the additional layer in the same way as the sea level rise index.
- Inputs:
- args[‘additional_layer_uri’]: uri to the additional layer data.
- args[‘aoi_uri’]: uri to a datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer of the cell size in meters
- args[‘intermediate_directory’]: uri to the intermediate file directory
- Output:
- Return a dictionary of all the intermediate file URIs.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_FIELD_NAME.tif: raw value along the shore.
- FIELD_NAME.tif: index along the shore. If all the shore has the same value, assign the moderate index value 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_exposure
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_exposure_no_habitats
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_coastal_population
(args)¶ Compute population living along the shore within a given radius.
- Inputs:
- args[‘intermediate_directory’]: uri to a directory where intermediate files are stored
- args[‘subdirectory’]: string URI of an existing subdirectory
- args[‘prefix’]: string prefix appended to every file generated
- args[‘population_uri’]: uri to the population density dataset.
- args[‘population_radius’]: used to compute the population density.
- args[‘aoi_uri’]: uri to a polygon shapefile
- args[‘cell_size’]: size of a pixel in meters
- Outputs:
- Return a uri dictionary of all the files created to generate the population density along the coastline.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_continental_shelf_distance
(args)¶ Copy the continental shelf distance data to the outputs/ directory.
- Inputs:
- args[‘shore_shelf_distance’]: uri to the continental shelf distance
- args[‘prefix’]:
- Outputs:
- data_uri: a dictionary containing the uri where the data is saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_erodible_shoreline
(args)¶ Compute the erodible shoreline as described in Greg’s notes. The erodible shoreline is the shoreline segments of rank 5.
- Inputs:
- args[geomorphology]: the geomorphology data.
- args[‘prefix’]: prefix to be added to the new filename.
- args[‘aoi_uri’]: URI to the area of interest shapefile
- args[‘cell_size’]: size of a cell on the raster
- Outputs:
- data_uri: a dictionary containing the uri where the data is saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_erosion_exposure
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_fetch
(land_array, rays_per_sector, d_max, cell_size, shore_points, bathymetry, bathymetry_nodata, GT, shore_raster)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_fetch_uri
(landmass_raster_uri, rays_per_sector, d_max, cell_size, shore_uri, bathymetry_uri)¶ Given a land raster, return the fetch distance from a point in given directions
- land_raster: raster where land is encoded as 1s, sea as 0s, and cells outside the area of interest as anything different from 0s or 1s.
- directions: tuple of angles (in radians) from which the fetch will be computed for each pixel.
- d_max: maximum distance in meters over which to compute the fetch
- cell_size: size of a cell in meters
- shore_uri: URI to the raster where the shoreline is encoded as 1s, the rest as 0s.
- returns: a tuple (distances, depths) where:
- distances is a dictionary of fetch data where the key is a shore point (tuple of integer coordinates), and the value is a 1*sectors numpy array containing fetch distances (float) from that point for each sector. The first sector (0) points eastward.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_geomorphology
(args)¶ Translate geomorphology RANKS to shore pixels.
Create a raster identical to the shore pixel raster that has geomorphology RANK values. The values are gathered by finding the closest geomorphology feature to the center of the pixel cell.
Parameters: - args['geomorphology_uri'] (string) – a path on disk to a shapefile of the geomorphology ranking along the coastline.
- args['shore_raster_uri'] (string) – a path on disk to the shoreline dataset (land = 1, sea = 0).
- args['intermediate_directory'] (string) – a path to the directory where intermediate files are stored.
- args['subdirectory'] (string) – a path for a directory to store the specific geomorphology intermediate steps.
Returns: data_uri – a dictionary with the path for the geomorphology raster.
Return type: dict
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_habitat_role
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_natural_habitats_vulnerability
(args)¶ Compute the natural habitat rank as described in the user manual.
- Inputs:
- args[‘habitats_csv_uri’]: uri to a comma-separated text file containing the list of habitats.
- args[‘habitats_directory_uri’]: uri to the directory where to find the habitat shapefiles.
- args[‘aoi_uri’]: uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer cell size in meters
- args[‘intermediate_directory’]: uri to the directory where intermediate files are stored
- Output:
- data_uri: a dictionary of all the intermediate file URIs.
- Intermediate outputs, for each habitat shapefile (habitat name ‘ABCD’, with id ‘X’):
- ABCD_X_raster.tif: rasterized shapefile data.
- ABCD_influence.tif: habitat area of influence. Convolution between the rasterized shape data and a circular kernel whose radius is the habitat’s area of influence, truncated to cell_size.
- ABCD_influence_on_shore.tif: habitat influence along the shore
- habitats_available_data.tif: combined habitat rank along the shore using equation 4.4 in the user guide.
- habitats_missing_data.tif: shore sections without habitat data.
- habitats.tif: shore ranking using habitat and default ranks.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_relief_rank
(args)¶ Compute the relief index as is described in InVEST’s user guide.
- Inputs:
- args[‘relief_uri’]: uri to an elevation dataset.
- args[‘aoi_uri’]: uri to the datasource of the region of interest.
- args[‘landmass_uri’]: uri to the landmass datasource where land is 1 and sea is 0.
- args[‘spread_radius’]: how much the shore coast is spread to match the relief’s coast. If the coastline doesn’t match the land polygon’s shoreline, the overlap can be increased by ‘spreading’ the data over a wider area. The wider the spread, the more ranking data overlaps with the coast. The spread is a convolution between the ranking data and a 2D gaussian kernel of area (2*spread_radius+1)^2. A radius of zero reduces the kernel to the scalar 1, which means no spread at all.
- args[‘shore_raster_uri’]: URI to the shore tiff dataset.
- args[‘cell_size’]: granularity of the rasterization.
- args[‘intermediate_directory’]: where intermediate files are stored
- Output:
- Return R_relief as described in the user manual.
- A raster file called relief.tif
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_sea_level_rise
(args)¶ Compute the sea level rise index as described in the user manual.
- Inputs:
- args[‘sea_level_rise’]: shapefile with the sea level rise data.
- args[‘aoi_uri’]: uri to a datasource of the area of interest
- args[‘shore_raster_uri’]: uri to the shoreline dataset (land = 1, sea = 0)
- args[‘cell_size’]: integer of the cell size in meters
- args[‘intermediate_directory’]: uri to the intermediate file directory
- Output:
- Return a dictionary of all the intermediate file URIs.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_level_rise.tif: sea level rise along the shore.
- sea_level_rise.tif: sea level rise index along the shore. If all the shore has the same value, assign the moderate index value 3.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_segment_exposure
(args)¶ Compute exposed and sheltered shoreline segment map.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_structure_protection
(args)¶ Compute the structure influence on the shore to later include it in the computation of the layers’ final rankings, as specified in Gregg’s additional notes (decrement ranks around structure edges).
- Inputs:
- args[‘aoi_uri’]: string uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: dataset uri of the coastline within the AOI
- args[‘structures_uri’]: string of the structure datasource uri
- args[‘cell_size’]: integer of the size of a pixel in meters
- args[‘intermediate_directory’]: string of the uri where intermediate files are stored
- args[‘prefix’]: string prefix appended to every intermediate file
- Outputs:
- data_uri: a dictionary of the file uris generated in the intermediate directory.
- data_uri[‘adjusted_structures’]: string of the dataset uri obtained from reprojecting args[‘structures_uri’] and burning it onto the aoi. Contains the structure information across the whole aoi.
- data_uri[‘shore_structures’]: string uri pointing to the structure information along the coast only.
- data_uri[‘structure_influence’]: string uri pointing to a datasource of the spatial influence of the structures.
- data_uri[‘structure_edge’]: string uri pointing to the datasource of the edges of the structures.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_surge_potential
(args)¶ Compute surge potential index as described in the user manual.
- Inputs:
- args[‘bathymetry’]: bathymetry DEM file.
- args[‘landmass’]: shapefile containing land coverage data (land = 1, sea = 0)
- args[‘aoi_uri’]: uri to the datasource of the area of interest
- args[‘shore_raster_uri’]: uri to a shore raster where the shoreline is 1, and everything else is 0.
- args[‘cell_size’]: integer number for the cell size in meters
- args[‘intermediate_directory’]: uri to the directory where intermediate files are stored
- Output:
- Return R_surge as described in the user guide.
- Intermediate outputs:
- rasterized_sea_level_rise.tif: rasterized version of the shapefile
- shore_level_rise.tif: sea level rise along the shore.
- sea_level_rise.tif: sea level rise index along the shore.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_wave_exposure
(args)¶ Compute the wave exposure for every shore segment
- Inputs:
- args[‘climatic_forcing_uri’]: uri to wave datasource
- args[‘aoi_uri’]: uri to area of interest datasource
- args[‘fetch_distances’]: a dictionary of (point, list) pairs where point is a tuple of integer (row, col) coordinates and list is a maximal fetch distance in meters for each fetch sector.
- args[‘fetch_depths’]: same dictionary as fetch_distances, but list is a maximal fetch depth in meters for each fetch sector.
- args[‘cell_size’]: cell size in meters (integer)
- args[‘H_threshold’]: threshold (double) for the H function (eq. 7)
- args[‘intermediate_directory’]: uri to the directory that contains the intermediate files
- Outputs:
- data_uri: dictionary of the uri of all the files created in the function execution
- Detail of files:
A file called wave.tif that contains the wave exposure index along the shore.
- For each equiangular fetch sector k:
- F_k.tif: per-sector fetch value (see eq. 6).
- H_k.tif: per-sector H value (see eq. 7)
- E_o_k.tif: per-sector average oceanic wave power (eq. 6)
- E_l_k.tif: per-sector average wind-generated wave power (eq.9)
- E_w_k.tif: per-sector wave power (eq.5)
- E_w.tif: combined wave power.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
compute_wind_exposure
(args)¶ Compute the wind exposure for every shore segment as in equation 4.5
- Inputs:
- args[‘climatic_forcing_uri’]: uri to the wind information datasource
- args[‘aoi_uri’]: uri to the area of interest datasource
- args[‘fetch_distances’]: a dictionary of (point, list) pairs where point is a tuple of integer (row, col) coordinates and list is a maximal fetch distance in meters for each fetch sector.
- args[‘fetch_depths’]: same dictionary as fetch_distances, but list is a maximal fetch depth in meters for each fetch sector.
- args[‘cell_size’]: granularity of the rasterization.
- args[‘intermediate_directory’]: where intermediate files are stored
- args[‘prefix’]: string
- Outputs:
- data_uri: dictionary of the uri of all the files created in the function execution
- File description:
REI.tif: combined REI value of the wind exposure index for all sectors along the shore.
- For each equiangular fetch sector n:
- REI_n.tif: per-sector REI value (U_n * P_n * F_n).
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
convert_tif_to_csv
(tif_uri, csv_uri=None, mask=None)¶ Converts a single-band GeoTIFF file to a CSV text file.
- Inputs:
- tif_uri: the uri to the file to be converted
- csv_uri: uri to the output file. The file should not exist.
- Outputs:
- returns the output file uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
convert_tifs_to_csv
(tif_list, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
detect_shore
(land_sea_array, aoi_array, aoi_nodata)¶ Extract the boundary between land and sea from a raster.
- raster: numpy array with sea, land and nodata values.
returns a numpy array the same size as the input raster with the shore encoded as ones, and zeros everywhere else.
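The boundary extraction described above can be sketched with plain numpy array shifts. This is an illustrative helper only (the function name is hypothetical, nodata/AOI handling is omitted, and 4-connectivity is assumed), not the InVEST implementation:

```python
import numpy as np

def detect_shore_sketch(land_sea):
    """Mark land pixels that touch at least one sea pixel (4-connectivity).

    Hypothetical sketch: land is encoded as 1, sea as 0; nodata handling
    from the real function is omitted.
    """
    land = (land_sea == 1)
    sea = (land_sea == 0)
    # Pad with "no sea" so border pixels are not marked spuriously.
    sea_pad = np.pad(sea, 1, mode='constant', constant_values=False)
    # A land pixel is shore if any of its 4 neighbors is sea.
    neighbor_sea = (sea_pad[:-2, 1:-1] | sea_pad[2:, 1:-1] |
                    sea_pad[1:-1, :-2] | sea_pad[1:-1, 2:])
    return (land & neighbor_sea).astype(np.uint8)
```

For a 3x3 land grid with a single sea pixel, only the land pixels bordering that sea pixel are marked as shore.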
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
detect_shore_uri
(landmass_raster_uri, aoi_raster_uri, output_uri)¶ Extract the boundary between land and sea from a raster.
- raster: numpy array with sea, land and nodata values.
returns a numpy array the same size as the input raster with the shore encoded as ones, and zeros everywhere else.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
dict_to_point_shapefile
(dict_data, out_path, spat_ref, columns, row_order)¶ Create a point shapefile from a dictionary.
Parameters: - dict_data (dict) – a dictionary where keys point to a sub dictionary that has at least keys ‘x’, ‘y’. Each sub dictionary will be added as a point feature using ‘x’, ‘y’ as the geometry for the point. All other key, value pairs in the sub dictionary will be added as fields and values to the point feature.
- out_path (string) – a path on disk for the point shapefile.
- spat_ref (osr spatial reference) – an osr spatial reference to use when creating the layer.
- columns (list) – a list of strings representing the order in which the field names should be written, so that the attribute table reflects this order.
- row_order (list) – a list of tuples that match the keys of ‘dict_data’. This is so we can add the points in a specific order and hopefully populate the attribute table in that order.
Returns: Nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
disc_kernel
(r)¶ Create a (2r+1)x(2r+1) disc-shaped array filled with 1s where d((i,j), (r,r)) <= r
Input: r, the kernel radius. r=0 is a single scalar of value 1.
- Output: a (2r+1)x(2r+1) array with:
- 1 if the cell is no farther than r units from the kernel center (r,r),
- 0 otherwise.
Distances are Euclidean.
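The kernel construction can be sketched directly in numpy. This is an illustrative version (assuming a (2r+1)x(2r+1) output centered at (r,r)), not necessarily the exact InVEST code:

```python
import numpy as np

def disc_kernel(r):
    """Disc-shaped kernel of Euclidean radius r, centered at (r, r).

    Sketch of the behaviour described above; r=0 yields a single 1.
    """
    if r == 0:
        return np.ones((1, 1))
    i, j = np.indices((2 * r + 1, 2 * r + 1))
    dist = np.sqrt((i - r) ** 2 + (j - r) ** 2)
    return (dist <= r).astype(np.float64)
```

For r=1 this produces a 3x3 "plus sign": the corners are farther than 1 unit from the center, so they are 0.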
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
enumerate_shapefile_fields
(shapefile_uri)¶ Enumerate all the fields in a shapefile.
- Inputs:
- shapefile_uri: uri to the shapefile whose fields are to be enumerated
Returns a nested list of the field names in the order they are stored in the layer, grouped per layer in the order the layers appear.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
execute
(args)¶ Entry point for coastal vulnerability core
args: a dictionary of input arguments for the coastal vulnerability core model. (The individual entries are not documented here.)
returns nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
fetch_vectors
(angles)¶ Convert the angles passed as arguments to raster vector directions.
- Input:
- angles: list of angles in radians
- Outputs:
- directions: a numpy array of direction vectors, of shape (len(angles), 2)
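The angle-to-vector conversion can be sketched in a few lines of numpy. The sign conventions below (row index increasing downward, 0 radians pointing east) are assumptions for illustration, not necessarily those of the InVEST implementation:

```python
import numpy as np

def fetch_vectors_sketch(angles):
    """Convert angles (radians) to unit (row, col) direction vectors.

    Hypothetical sketch; assumes row increases downward and angle 0
    points east, which may differ from the model's convention.
    """
    angles = np.asarray(angles, dtype=float)
    directions = np.empty((angles.size, 2))
    directions[:, 0] = -np.sin(angles)  # row component (north is "up")
    directions[:, 1] = np.cos(angles)   # col component (east)
    return directions
```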
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
find_attribute_field
(field_name, shapefile_uri)¶ Look for a field name in the shapefile attribute table. Search is case insensitive.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
get_field
(field_name, shapefile, case_sensitive=True)¶ Return the field in shapefile that corresponds to field_name, None otherwise.
- Inputs:
- field_name: string to look for.
- shapefile: where to look for the field.
- case_sensitive: indicates whether the case is relevant when
comparing field names
- Output:
- the field name in the shapefile that corresponds to field_name,
None otherwise.
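The case-handling logic can be sketched in pure Python. To keep the example self-contained it operates on a plain list of field names rather than an OGR shapefile (that substitution, and the helper name, are assumptions):

```python
def find_field_name(field_name, field_names, case_sensitive=True):
    """Return the stored field name matching field_name, or None.

    Pure-Python sketch of the lookup described above, over a list of
    field names instead of a shapefile attribute table.
    """
    for candidate in field_names:
        if case_sensitive:
            if candidate == field_name:
                return candidate
        elif candidate.lower() == field_name.lower():
            return candidate
    return None
```

A case-insensitive search lets users refer to shapefile fields such as 'DEPTH' as 'depth'.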
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
get_layer_and_index_from_field_name
(field_name, shapefile)¶ Given a field name, return its layer and field index. Inputs:
- field_name: string to look for.
- shapefile: where to look for the field.
- Output:
- A tuple (layer, field_index) if the field exist in ‘shapefile’.
- (None, None) otherwise.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
has_field
(field_name, shapefile, case_sensitive=True)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
is_point_datasource
(uri)¶ Returns True if the datasource is a point shapefile
- Inputs:
- uri: uri to a datasource
- Outputs:
- True if uri points to a point datasource, False otherwise
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
is_polygon_datasource
(uri)¶ Returns True if the datasource is a polygon shapefile
- Inputs:
- uri: uri to a datasource
- Outputs:
- True if uri points to a polygon datasource, False otherwise
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
nearest_vector_neighbor
(neighbors_path, point_path, inherit_field)¶ Inherit a field value from the closest shapefile feature.
Each point in ‘point_path’ will inherit field ‘inherit_field’ from the closest feature in ‘neighbors_path’. Uses an rtree to build up a spatial index of ‘neighbor_path’ bounding boxes to find nearest points.
Parameters: - neighbors_path (string) – a filepath on disk to a shapefile that has at least one field called ‘inherit_field’
- point_path (string) – a filepath on disk to a shapefile. A field ‘inherit_field’ will be added to the point features. The value of that field will come from the closest feature’s field in ‘neighbors_path’
- inherit_field (string) – the name of the field in ‘neighbors_path’ to pass along to ‘point_path’.
Returns: Nothing
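The field-inheritance step can be sketched with a brute-force nearest-neighbor search. To stay self-contained this operates on plain dictionaries with 'x'/'y' keys instead of shapefile features (an assumption); the real function accelerates the search with an rtree spatial index:

```python
def inherit_from_nearest(points, neighbors, inherit_field):
    """Give each point the inherit_field value of its nearest neighbor.

    Hypothetical brute-force sketch of the behaviour described above;
    points and neighbors are lists of dicts with 'x' and 'y' keys.
    """
    for pt in points:
        # Squared Euclidean distance is enough for comparing distances.
        nearest = min(
            neighbors,
            key=lambda nb: (nb['x'] - pt['x']) ** 2 + (nb['y'] - pt['y']) ** 2)
        pt[inherit_field] = nearest[inherit_field]
    return points
```

An rtree index replaces the linear `min` scan with a logarithmic bounding-box query, which matters when both layers hold many features.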
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_dataset
(dataset_uri, aoi_uri, cell_size, output_uri)¶ Function that preprocesses an input dataset (clip, reproject, resample) so that it is ready to be used in the model
- Inputs:
- dataset_uri: uri to the input dataset to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_inputs
(args)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_point_datasource
(datasource_uri, aoi_uri, cell_size, output_uri, field_list, nodata=0.0)¶ Function that converts a point shapefile to a dataset by clipping, reprojecting, resampling, burning, and extrapolating burnt values.
- Inputs:
- datasource_uri: uri to the datasource to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset
- field_name: name of the field in the attribute table to get the values from. If a number, use it as a constant. If Null, use 1.
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
preprocess_polygon_datasource
(datasource_uri, aoi_uri, cell_size, output_uri, field_name=None, all_touched=False, nodata=0.0, empty_raster_allowed=False)¶ Function that converts a polygon shapefile to a dataset by clipping, reprojecting, resampling, burning, and extrapolating burnt values.
- Inputs:
- datasource_uri: uri to the datasource to be pre-processed
- aoi_uri: uri to an aoi polygon datasource that is used for clipping and reprojection
- cell_size: output dataset cell size in meters (integer)
- output_uri: uri to the pre-processed output dataset
- field_name: name of the field in the attribute table to get the values from. If a number, use it as a constant. If Null, use 1.
- all_touched: boolean flag used in gdal's vectorize_rasters options flag
- nodata: float used as nodata in the output raster
- empty_raster_allowed: flag that allows the function to return an empty raster if set to True, or fail if set to False. False is the default.
Returns output_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
projections_match
(projection_list, silent_mode=True)¶ Check that the given gdal datasets are projected identically. Functionality adapted from Doug’s biodiversity_biophysical.check_projections
- Inputs:
- projection_list: list of wkt projections to compare
- silent_mode: if True (default), don’t output anything; otherwise report if and why some projections are not the same.
- Output:
- False if the datasets are not projected identically, True otherwise.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rank_by_quantiles
(X, bin_count)¶ Tries to evenly distribute elements in X among ‘bin_count’ bins. If the boundary of a bin falls within a group of elements with the same value, all these elements will be included in that bin. Inputs:
- X: a 1D numpy array of the elements to bin
- bin_count: the number of bins
Returns the bin boundaries, ready to be used by numpy.digitize
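The quantile binning described above can be approximated with numpy percentiles and numpy.digitize. This is an illustrative sketch (the helper name is hypothetical, and the exact tie-handling of the InVEST function may differ):

```python
import numpy as np

def quantile_bin_boundaries(X, bin_count):
    """Interior boundaries for roughly equal-population bins.

    Sketch of the binning described above; returns bin_count - 1
    boundaries suitable for numpy.digitize.
    """
    X = np.asarray(X, dtype=float)
    # Interior quantiles only: e.g. 25%, 50%, 75% for four bins.
    quantiles = np.linspace(0, 100, bin_count + 1)[1:-1]
    return np.percentile(X, quantiles)
```

Usage: `ranks = np.digitize(X, quantile_bin_boundaries(X, 4))` assigns each element a bin index in [0, 3].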
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rank_shore
(X, bin_count)¶ Assign a rank based on natural breaks (Jenks natural breaks for now).
- Inputs:
- X: a numpy array with the elements to be ranked
- bin_count: the number of ranks (integer)
- Outputs:
- output: a numpy array with rankings in the interval
[0, bin_count-1] that correspond to the elements of X (rank of X[i] == outputs[i]).
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_from_shapefile_uri
(shapefile_uri, aoi_uri, cell_size, output_uri, field=None, all_touched=False, nodata=0.0, datatype=<Mock id='140294660256336'>)¶ Burn default or user-defined data from a shapefile on a raster.
- Inputs:
shapefile: the dataset to be discretized
aoi_uri: URI to an AOI shapefile
cell_size: coarseness of the discretization (in meters)
output_uri: uri where the raster will be saved
- field: optional field name (string) where to extract the data
from.
all_touched: optional boolean that indicates if we use GDAL’s ALL_TOUCHED parameter when rasterizing.
- Output: A raster where:
If field is specified, the field data is used as burn value. If field is not specified, then:
- shapes on the first layer are encoded as 1s
- the rest is encoded as 0
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_to_point_vector
(raster_path, point_vector_path)¶ Create a point shapefile from raster pixels.
Creates a point feature from each non-nodata raster pixel, where the geometry for the point is the center of the pixel. A field ‘Value’ is added to each point feature with the value from the pixel. The created point shapefile will use a spatial reference taken from the raster’s projection.
Parameters: - raster_path (string) – a filepath on disk of the raster to convert into a point shapefile.
- point_vector_path (string) – a filepath on disk for where to save the shapefile. Must have a ‘.shp’ extension.
Returns: Nothing
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
raster_wkt
(raster)¶ Return the projection of a raster in the OpenGIS WKT format.
- Input:
- raster: raster file
- Output:
- a projection encoded as a WKT-compliant string.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
read_habitat_info
(habitats_csv_uri, habitats_directory_uri)¶ Extract the habitats information from the csv file and directory.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rowcol_to_xy
(rows, cols, raster)¶ non-uri version of rowcol_to_xy_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
rowcol_to_xy_uri
(rows, cols, raster_uri)¶ Converts row/col coordinates into x/y coordinates using raster_uri’s geotransform.
- Inputs:
- rows: integer scalar or numpy array of row coordinates
- cols: integer scalar or numpy array of column coordinates
- raster_uri: uri of the raster from which the geotransform is extracted
Returns a tuple (X, Y) of scalars or numpy arrays of the projected coordinates
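The conversion is the standard GDAL six-element geotransform applied to pixel coordinates. The sketch below takes the geotransform directly as a parameter (an assumption for self-containment; the real function reads it from raster_uri):

```python
import numpy as np

def rowcol_to_xy(rows, cols, geotransform):
    """Apply a GDAL-style geotransform to row/col coordinates.

    geotransform = (origin_x, pixel_width, row_rotation,
                    origin_y, col_rotation, pixel_height);
    pixel_height is typically negative for north-up rasters.
    """
    gt = geotransform
    rows = np.asarray(rows, dtype=float)
    cols = np.asarray(cols, dtype=float)
    x = gt[0] + cols * gt[1] + rows * gt[2]
    y = gt[3] + cols * gt[4] + rows * gt[5]
    return x, y
```

xy_to_rowcol_uri performs the inverse mapping, solving the same two linear equations for row and col.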
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_array_to_raster
(array, out_uri, base_uri, cell_size, no_data=None, default_nodata=0.0, gdal_type=<Mock id='140294660256208'>)¶ Save an array to a raster constructed from an AOI.
- Inputs:
- array: numpy array to be saved
- out_uri: output raster file URI.
- base_uri: URI to the AOI from which to construct the template raster
- cell_size: granularity of the rasterization in meters
- recompute_nodata: if True, recompute nodata to avoid interference with existing raster data
- no_data: value of nodata used in the function. If None, revert to default_nodata.
- default_nodata: nodata value used if no_data is set to None.
- Output:
- save the array in a raster file constructed from the AOI of granularity specified by cell_size
- Return the array uri.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_depths
(fetch, aoi_uri, cell_size, base_path, prefix)¶ Create dictionary of raster filenames of fetch depths for each sector n.
- Inputs:
- wind_data: wind data points adjusted to the aoi
- aoi: used to create the rasters for each sector
- cell_size: raster granularity in meters
- base_path: base path where the generated raster will be saved
- Output:
- A dictionary where keys are sector angles in degrees and values are raster filenames where F(n) is defined on each cell
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_distances
(fetch, aoi_uri, cell_size, base_path, prefix='')¶ Create dictionary of raster filenames of fetch F(n) for each sector n.
- Inputs:
- wind_data: wind data points adjusted to the aoi
- aoi: used to create the rasters for each sector
- cell_size: raster granularity in meters
- base_path: base path where the generated raster will be saved
Output: A list of raster URIs corresponding to sectors of increasing angles where data points encode the sector’s fetch distance for that point
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_fetch_to_outputs
(args)¶ Function that copies the fetch information (depth and distances) in the outputs directory.
- Inputs:
- args[‘fetch_distance_uris’]: A dictionary of (‘string’:string)
- entries where the first string is the sector in degrees, and the second string is a uri pointing to the file that contains the fetch distances for this sector.
- args[‘fetch_depths_uris’]: A dictionary similar to the distances one,
- but the second string is pointing to the file that contains fetch depths, not distances.
- args[‘prefix’]: String appended before the filenames. Currently
- used to follow Greg’s output labelling scheme.
- Outputs:
- data_uri that contains the uri of the new files in the outputs
directory, one for fetch distance and one for fetch depths for each fetch direction ‘n’, for a total of 2n.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_local_wave_exposure_to_subdirectory
(args)¶ Copy local wave exposure to the outputs/ directory.
- Inputs:
- args[‘E_l’]: uri to the local wave exposure data
- args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_oceanic_wave_exposure_to_subdirectory
(args)¶ Copy oceanic wave exposure to the outputs/ directory.
- Inputs:
- args[‘E_o’]: uri to the oceanic wave exposure data
- args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_structure_to_subdirectory
(args)¶ Save structure data to its intermediate subdirectory, under a custom prefix.
- Inputs:
args[‘structure_edges’]: the data’s uri to save to /outputs
args[‘prefix’]: prefix to add to the new filename. Currently used to mirror the labeling of outputs in Greg’s notes.
- Outputs:
- data_uri: a dictionary of the uri where the data has been saved.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
save_wind_generated_waves_to_subdirectory
(args)¶ Copy the wave height and wave period to the outputs/ directory.
- Inputs:
- args[‘wave_height’][sector]: uri to the wave height data for sector
- args[‘wave_period’][sector]: uri to the wave period data for sector
- args[‘prefix’]: prefix to be appended to the new filename
- Outputs:
- data_uri: dictionary containing the uri where the data is saved
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
set_H_threshold
(threshold)¶ Return 0 if fetch is strictly below a threshold in km, 1 otherwise.
- Inputs:
- fetch: fetch distance in meters.
Returns: 1 if fetch >= threshold, 0 if fetch < threshold (the threshold is given in km, the fetch in meters). Note: conforms to equation 4.8 in the InVEST documentation.
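The step function of equation 4.8 is a one-line comparison once the units are reconciled. A minimal sketch, assuming the fetch is in meters and the threshold in kilometers as described above:

```python
def h_step(fetch_m, threshold_km):
    """Eq. 4.8 step function: 1 when fetch reaches the threshold.

    Hypothetical helper name; the unit convention (meters vs. km)
    follows the description above.
    """
    return 1.0 if fetch_m >= threshold_km * 1000.0 else 0.0
```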
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
shapefile_wkt
(shapefile)¶ Return the projection of a shapefile in the OpenGIS WKT format.
- Input:
- shapefile: the shapefile whose projection to return
- Output:
- a projection encoded as a WKT-compliant string.
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
xy_to_rowcol
(x, y, raster)¶ non-uri version of xy_to_rowcol_uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_core.
xy_to_rowcol_uri
(x, y, raster_uri)¶ Converts x/y coordinates into row/col coordinates; the inverse of rowcol_to_xy_uri.
Coastal Vulnerability Cython Core¶
Coastal Vulnerability Post Processing¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
aggregate_csvs
(csv_list, out_uri)¶ Concatenate 3-row csv files created with tif2csv
- Inputs:
- csv_list: list of csv_uri strings
- Outputs:
- uri_output: the output uri of the concatenated csv
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
aggregate_tifs_from_directory
(path='.', mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
convert_tif_to_csv
(tif_uri, csv_uri=None, mask=None)¶ Converts a single-band GeoTIFF file to a CSV text file.
- Inputs:
- tif_uri: the uri to the file to be converted
- csv_uri: uri to the output file. The file should not exist.
- Outputs:
- returns the output file uri
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
convert_tifs_to_csv
(tif_list, mask=None)¶
-
natcap.invest.coastal_vulnerability.coastal_vulnerability_post_processing.
execute
(args)¶
Module contents¶
Overlap Analysis Package¶
Overlap Analysis¶
InVEST overlap analysis file handler for data passed in through the UI.
-
natcap.invest.overlap_analysis.overlap_analysis.
create_hubs_raster
(hubs_shape_uri, decay, aoi_raster_uri, hubs_out_uri)¶ This will create a rasterized version of the hubs shapefile where each pixel on the raster will be set according to the decay function from the point values themselves. We will rasterize the shapefile so that all land is 0, and nodata is the distance from the closest point.
- Input:
- hubs_shape_uri - Open point shapefile containing the hub locations
- as points.
- decay - Double representing the rate at which the hub importance
- depreciates relative to the distance from the location.
- aoi_raster_uri - The URI to the area interest raster on which we
- want to base our new hubs raster.
- hubs_out_uri - The URI location at which the new hubs raster should
- be placed.
- Output:
- This creates a raster within hubs_out_uri whose data will be a function of the decay around points provided from hubs shape.
Returns nothing.
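The hub influence surface described above can be sketched as a per-pixel function of the distance to the nearest hub. This is an illustration only: the brute-force distance loop and the exponential decay form exp(-decay * distance) are assumptions; the model's actual decay function and rasterization path may differ:

```python
import numpy as np

def hub_influence_sketch(shape, hub_rowcols, decay):
    """Per-pixel influence that decays with distance to the nearest hub.

    Hypothetical sketch: hub_rowcols is a list of (row, col) hub pixel
    positions, and the decay form is assumed exponential.
    """
    rows, cols = np.indices(shape)
    dist = np.full(shape, np.inf)
    for hub_row, hub_col in hub_rowcols:
        d = np.hypot(rows - hub_row, cols - hub_col)
        dist = np.minimum(dist, d)  # distance to the closest hub
    return np.exp(-decay * dist)
```

Influence is 1 at a hub pixel and shrinks toward 0 as distance grows, with `decay` controlling how fast.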
-
natcap.invest.overlap_analysis.overlap_analysis.
create_unweighted_raster
(output_dir, aoi_raster_uri, raster_files_uri)¶ This will create the set of unweighted rasters- both the AOI and individual rasterizations of the activity layers. These will all be combined to output a final raster displaying unweighted activity frequency within the area of interest.
- Input:
- output_dir- This is the directory in which the final frequency raster
- will be placed. That file will be named ‘hu_freq.tif’.
- aoi_raster_uri- The uri to the rasterized version of the AOI file
- passed in with args[‘zone_layer_file’]. We will use this within the combination function to determine where to place nodata values.
- raster_files_uri - The uris to the rasterized version of the files
- passed in through args[‘over_layer_dict’]. Each raster file shows the presence or absence of the activity that it represents.
- Output:
- A raster file named [‘workspace_dir’]/output/hu_freq.tif. This depicts the unweighted frequency of activity within a gridded area or management zone.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.
create_weighted_raster
(out_dir, intermediate_dir, aoi_raster_uri, inter_weights_dict, layers_dict, intra_name, do_inter, do_intra, do_hubs, hubs_raster_uri, raster_uris, raster_names)¶ This function will create an output raster that takes into account both inter-activity weighting and intra-activity weighting. This will produce a map that looks both at where activities are occurring, and how much people value those activities and areas.
- Input:
- out_dir- This is the directory into which our completed raster file
- should be placed when completed.
- intermediate_dir- The directory in which the weighted raster files can
- be stored.
- inter_weights_dict- The dictionary that holds the mappings from layer
- names to the inter-activity weights passed in by CSV. The dictionary key is the string name of each shapefile, minus the .shp extension. This ID maps to a double representing the inter-activity weight of each activity layer.
- layers_dict- This dictionary contains all the activity layers that are
- included in the particular model run. This maps the name of the shapefile (excluding the .shp extension) to the open datasource itself.
- intra_name- A string which represents the desired field name in our
- shapefiles. This field should contain the intra-activity weight for that particular shape.
- do_inter- A boolean that indicates whether inter-activity weighting is
- desired.
- do_intra- A boolean that indicates whether intra-activity weighting is
- desired.
- aoi_raster_uri - The uri to the dataset for our Area Of Interest.
- This will be the base map for all following datasets.
- raster_uris - A list of uris to the open unweighted raster files
- created by make_indiv_rasters that begins with the AOI raster. This will be used when intra-activity weighting is not desired.
- raster_names- A list of file names that goes along with the unweighted
- raster files. These strings can be used as keys to the other ID-based dictionaries, and will be in the same order as the ‘raster_files’ list.
- Output:
- weighted_raster- A raster file output that takes into account both
- inter-activity weights and intra-activity weights.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.
execute
(args)¶ Overlap Analysis.
This function will take care of preparing files passed into the overlap analysis model. It will handle all files/inputs associated with calculations and manipulations. It may write log, warning, or error messages to stdout.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_uri'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['grid_size'] (int) – This is an int specifying how large the gridded squares over the shapefile should be. (required)
- args['overlap_data_dir_uri'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
- args['do_inter'] (bool) – Boolean that indicates whether or not inter-activity weighting is desired. This will decide if the overlap table will be created. (required)
- args['do_intra'] (bool) – Boolean which indicates whether or not intra-activity weighting is desired. This will pull attributes from shapefiles passed in in ‘zone_layer_uri’. (required)
- args['do_hubs'] (bool) – Boolean which indicates if human use hubs are desired. (required)
- args['overlap_layer_tbl'] (string) – URI to a CSV file that holds relational data and identifier data for all layers being passed in within the overlap analysis directory. (optional)
- args['intra_name'] (string) – string which corresponds to a field within the layers being passed in within overlap analysis directory. This is the intra-activity importance for each activity. (optional)
- args['hubs_uri'] (string) – The location of the shapefile containing points for human use hub calculations. (optional)
- args['decay_amt'] (float) – A double representing the decay rate of value from the human use hubs. (optional)
Returns: None
-
natcap.invest.overlap_analysis.overlap_analysis.
format_over_table
(over_tbl)¶ Each row of this CSV file begins with a string that uniquely identifies the .shp file to which the values in that row correspond. This string, therefore, is used as the key for the overlap_analysis dictionary, so that we can get all corresponding values for a shapefile at once by knowing its name.
- Input:
- over_tbl- A CSV that contains a list of each interest shapefile,
- and the inter activity weights corresponding to those layers.
- Returns:
- over_dict- The analysis layer dictionary that maps the unique name
- of each layer to the optional parameter of inter-activity weight. For each entry, the key will be the string name of the layer that it represents, and the value will be the inter-activity weight for that layer.
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_rasters
(out_dir, overlap_shape_uris, aoi_raster_uri)¶ This will pluck each of the files out of the dictionary and create a new raster file out of them. The new file will be named the same as the original shapefile, but with a .tif extension, and will be placed in the intermediate directory that is being passed in as a parameter.
- Input:
- out_dir- This is the directory into which our completed raster files
- should be placed when completed.
- overlap_shape_uris- This is a dictionary containing all of the open
- shapefiles which need to be rasterized. The key for this dictionary is the name of the file itself, minus the .shp extension. This key maps to the open shapefile of that name.
- aoi_raster_uri- The dataset for our AOI. This will be the base map for
- all following datasets.
Returns: - raster_files- This is a list of the datasets that we want to sum. The
- first will ALWAYS be the AOI dataset, and the rest will be the variable number of other datasets that we want to sum.
- raster_names- This is a list of layer names that corresponds to the
- files in ‘raster_files’. The first layer is guaranteed to be the AOI, but all names after that will be in the same order as the files so that it can be used for indexing later.
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_weight_rasters
(input_dir, aoi_raster_uri, layers_dict, intra_name)¶ This is a helper function for create_weighted_raster, which abstracts some of the work for getting the intra-activity weights per pixel to a separate function. This function will take in a list of the activity layers, and using the aoi_raster as a base for the transformation, will rasterize the shapefile layers into rasters where the burn value is based on a per-pixel intra-activity weight (specified in each polygon on the layer). This function will return a tuple of two lists: the first is a list of the rasterized shapefiles, starting with the aoi. The second is a list of the shapefile names (minus the extension) in the same order as they were added to the first list. This will be used to reference the dictionaries containing the rest of the weighting information for the final weighted raster calculation.
- Input:
- input_dir: The directory into which the weighted rasters should be
- placed.
- aoi_raster_uri: The uri to the rasterized version of the area of
- interest. This will be used as a basis for all following rasterizations.
- layers_dict: A dictionary of all shapefiles to be rasterized. The key
- is the name of the original file, minus the file extension. The value is an open shapefile datasource.
- intra_name: The string corresponding to the value we wish to pull out
- of the shapefile layer. This is an attribute of all polygons corresponding to the intra-activity weight of a given shape.
Returns: - A list of raster versions of the original
- activity shapefiles. The first file will ALWAYS be the AOI, followed by the rasterized layers.
- weighted_names: A list of the filenames minus extensions, of the
- rasterized files in weighted_raster_files. These can be used to reference properties of the raster files that are located in other dictionaries.
Return type: weighted_raster_files
Overlap Analysis Core¶
Core module for both overlap analysis and management zones. This function can be used by either of the secondary modules within the OA model.
-
natcap.invest.overlap_analysis.overlap_core.
get_files_dict
(folder)¶ Returns a dictionary of all .shp files in the folder.
- Input:
- folder- The location of all layer files. Among these, there should be files with the extension .shp. These will be used for all activity calculations.
Returns: file_dict- A dictionary mapping the name of each shapefile (not including file path or extension) to its open datasource.
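The file-discovery part of get_files_dict can be sketched without GDAL/OGR. This is a simplified, hypothetical stand-in: the real function maps each name to an open OGR datasource, while this sketch maps it to the file path only.

```python
import os

def list_shapefile_names(folder):
    # Map each .shp file's base name to its full path (the real
    # function would open each file as an OGR datasource instead).
    return {
        os.path.splitext(f)[0]: os.path.join(folder, f)
        for f in os.listdir(folder)
        if f.lower().endswith('.shp')
    }
```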
-
natcap.invest.overlap_analysis.overlap_core.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, includes the entire path for each entry. It uses os.listdir as a base and transforms each entry to a full path.
- Input:
- path- The directory from which to gather all files.
Returns: A list of full URIs contained within ‘path’.
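The behavior described above fits in a couple of lines; this is an illustrative reimplementation (the helper name is made up), not the module's actual code:

```python
import os

def listdir_full(path):
    # Like os.listdir, but each returned entry carries the full path.
    return [os.path.join(path, entry) for entry in os.listdir(path)]
```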
Overlap Analysis Management Zone¶
This is the preparatory class for the management zone portion of overlap analysis.
-
natcap.invest.overlap_analysis.overlap_analysis_mz.
execute
(args)¶ Overlap Analysis: Management Zones.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_loc'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['overlap_data_dir_loc'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
Returns: None
Overlap Analysis Management Zone Core¶
This is the core module for the management zone analysis portion of the Overlap Analysis model.
-
natcap.invest.overlap_analysis.overlap_analysis_mz_core.
execute
(args)¶ This is the core module for the management zone model, which was extracted from the overlap analysis model. It takes a shapefile containing a series of AOIs and a folder containing activity layers, and returns a modified shapefile of AOIs, each of which has an attribute stating how many activities take place within that polygon.
- Input:
- args[‘workspace_dir’]- The folder location into which an Output or Intermediate folder can be written as necessary, and where the final shapefile will be placed.
- args[‘zone_layer_file’]- An open shapefile which contains the management zone polygons. Note that this file should not be edited directly; instead, a copy should be made in order to add the attribute field.
- args[‘over_layer_dict’]- A dictionary which maps the name of each shapefile (excluding the .shp extension) to the open datasource itself. These files are each an activity layer that will be counted within the totals per management zone.
- Output:
- A file named [workspace_dir]/Output/mz_frequency.shp, which is a copy of args[‘zone_layer_file’] with the added attribute “ACTIV_CNT” totaling the number of activities taking place in each polygon.
Returns nothing.
Module contents¶
Scenario Generator Package¶
Scenario Generator¶
Scenario Generator Module.
-
natcap.invest.scenario_generator.scenario_generator.
calculate_distance_raster_uri
(dataset_in_uri, dataset_out_uri)¶ Calculate the distance to the nearest non-zero cell for every zero-valued cell in the input.
Parameters: - dataset_in_uri (str) – the input mask raster. Distances calculated from the non-zero cells in raster.
- dataset_out_uri (str) – the output raster where all zero values are equal to the euclidean distance of the closest non-zero pixel.
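The per-pixel computation can be illustrated on a plain numpy array. This brute-force sketch is for illustration only: the helper name is hypothetical, and the real model operates on rasters (typically with an efficient distance transform rather than a nested loop).

```python
import numpy as np

def distance_to_nonzero(mask):
    """Euclidean distance from each zero cell to the nearest
    non-zero cell (brute force, illustrative only)."""
    nonzero = np.argwhere(mask != 0)       # coordinates of source cells
    out = np.zeros(mask.shape, dtype=float)
    for rc in np.argwhere(mask == 0):      # every zero-valued cell
        out[tuple(rc)] = np.sqrt(((nonzero - rc) ** 2).sum(axis=1)).min()
    return out

mask = np.array([[1, 0, 0],
                 [0, 0, 0]])
# Non-zero cells keep distance 0; the cell at (0, 2) is two pixels
# from the non-zero cell at (0, 0).
```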
-
natcap.invest.scenario_generator.scenario_generator.
calculate_priority
(priority_table_uri)¶ Create dictionary mapping each land-cover class to their priority weight.
Parameters: priority_table_uri (str) – path to priority csv table Returns: priority_dict – maps each land-cover class to its priority weight Return type: dict
-
natcap.invest.scenario_generator.scenario_generator.
calculate_weights
(array, rounding=4)¶ Create list of priority weights by land-cover class.
Parameters: - array (np.array) – input array
- rounding (int) – number of decimal places to include
Returns: weights_list – list of priority weights
Return type: list
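Priority weights are often derived from a pairwise-comparison matrix. This sketch assumes the standard AHP column-normalization approximation (normalize each column, then average across rows); the actual implementation may use a different method, so treat this only as an illustration of the idea.

```python
import numpy as np

def calculate_weights(array, rounding=4):
    # Normalize each column of the comparison matrix, then average
    # across rows to approximate each class's priority weight.
    normalized = array / array.sum(axis=0)
    return [round(float(w), rounding) for w in normalized.mean(axis=1)]
```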
-
natcap.invest.scenario_generator.scenario_generator.
execute
(args)¶ Scenario Generator: Rule-Based.
Model entry-point.
Parameters: - workspace_dir (str) – path to workspace directory
- suffix (str) – string to append to output files
- landcover (str) – path to land-cover raster
- transition (str) – path to land-cover attributes table
- calculate_priorities (bool) – whether to calculate priorities
- priorities_csv_uri (str) – path to priority csv table
- calculate_proximity (bool) – whether to calculate proximity
- proximity_weight (float) – weight given to proximity
- calculate_transition (bool) – whether to specify transitions
- calculate_factors (bool) – whether to use suitability factors
- suitability_folder (str) – path to suitability folder
- suitability (str) – path to suitability factors table
- weight (float) – suitability factor weight
- factor_inclusion (int) – the rasterization method – all touched or center points
- factors_field_container (bool) – whether to use suitability factor inputs
- calculate_constraints (bool) – whether to use constraint inputs
- constraints (str) – filepath to constraints shapefile layer
- constraints_field (str) – shapefile field containing the constraint values
- override_layer (bool) – whether to use override layer
- override (str) – path to override shapefile
- override_field (str) – shapefile field containing override value
- override_inclusion (int) – the rasterization method
Example Args:
args = {
    'workspace_dir': 'path/to/dir',
    'suffix': '',
    'landcover': 'path/to/raster',
    'transition': 'path/to/csv',
    'calculate_priorities': True,
    'priorities_csv_uri': 'path/to/csv',
    'calculate_proximity': True,
    'calculate_transition': True,
    'calculate_factors': True,
    'suitability_folder': 'path/to/dir',
    'suitability': 'path/to/csv',
    'weight': 0.5,
    'factor_inclusion': 0,
    'factors_field_container': True,
    'calculate_constraints': True,
    'constraints': 'path/to/shapefile',
    'constraints_field': '',
    'override_layer': True,
    'override': 'path/to/shapefile',
    'override_field': '',
    'override_inclusion': 0,
}
Added Afterwards:
d = {
    'proximity_weight': 0.3,
    'distance_field': '',
    'transition_id': 'ID',
    'percent_field': 'Percent Change',
    'area_field': 'Area Change',
    'priority_field': 'Priority',
    'proximity_field': 'Proximity',
    'suitability_id': '',
    'suitability_layer': '',
    'suitability_field': '',
}
-
natcap.invest.scenario_generator.scenario_generator.
filter_fragments
(input_uri, size, output_uri)¶ Filter fragments.
Parameters: - input_uri (str) – path to input raster
- size (float) – patch/fragment size threshold
- output_uri (str) – path to output raster
-
natcap.invest.scenario_generator.scenario_generator.
generate_chart_html
(cover_dict, cover_names_dict, workspace_dir)¶ Create HTML page showing statistics about land-cover change.
- Initial land-cover cell count
- Scenario land-cover cell count
- Land-cover percent change
- Land-cover percent total: initial, final, change
- Transition matrix
- Unconverted pixels list
Parameters: - cover_dict (dict) – land cover {‘cover_id’: [before, after]}
- cover_names_dict (dict) – land cover names {‘cover_id’: ‘cover_name’}
- workspace_dir (str) – path to workspace directory
Returns: chart_html – html chart
Return type: str
-
natcap.invest.scenario_generator.scenario_generator.
get_geometry_type_from_uri
(datasource_uri)¶ Get geometry type from a shapefile.
Parameters: datasource_uri (str) – path to shapefile Returns: shape_type – OGR geometry type Return type: int
-
natcap.invest.scenario_generator.scenario_generator.
get_transition_pairs_count_from_uri
(dataset_uri_list)¶ Find transition summary statistics between lulc rasters.
Parameters: dataset_uri_list (list) – list of paths to rasters Returns: - unique_raster_values_count (dict) – count of cells of each type for each raster value
- transitions (dict) – count of cells transitioning between values
Return type: dict
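The core of the transition statistics is tallying, for every pair of consecutive snapshots, how many cells move from one class to another. This sketch works on plain numpy arrays; count_transitions is a hypothetical helper, simplified from the raster-based original.

```python
import numpy as np

def count_transitions(lulc_prev, lulc_next):
    # Tally how many cells move from class a to class b between snapshots.
    counts = {}
    for a, b in zip(lulc_prev.ravel().tolist(), lulc_next.ravel().tolist()):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

prev = np.array([[1, 1], [2, 2]])
nxt = np.array([[1, 2], [2, 2]])
# One cell stays class 1, one converts 1 -> 2, and two stay class 2.
```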
Scenario Generator Summary¶
Despeckle¶
Disk Sort¶
Module contents¶
Final Ecosystem Services¶
Coastal Blue Carbon Package¶
Model Entry Point¶
Coastal Blue Carbon¶
Coastal Blue Carbon Model.
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
execute
(args)¶ Coastal Blue Carbon.
Parameters: - workspace_dir (str) – location into which all intermediate and output files should be placed.
- results_suffix (str) – a string to append to output filenames.
- lulc_lookup_uri (str) – filepath to a CSV table used to convert the lulc code to a name. Also used to determine if a given lulc type is a coastal blue carbon habitat.
- lulc_transition_matrix_uri (str) – generated by the preprocessor. This file must be edited before it can be used by the main model. The left-most column represents the source lulc class, and the top row represents the destination lulc class.
- carbon_pool_initial_uri (str) – the provided CSV table contains information related to the initial conditions of the carbon stock within each of the three pools of a habitat. Biomass includes carbon stored above and below ground. All non-coastal blue carbon habitat lulc classes are assumed to contain no carbon. The values for ‘biomass’, ‘soil’, and ‘litter’ should be given in terms of Megatonnes of CO2e/ha.
- carbon_pool_transient_uri (str) – the provided CSV table contains information related to the transition of carbon into and out of coastal blue carbon pools. All non-coastal blue carbon habitat lulc classes are assumed to neither sequester nor emit carbon as a result of change. The ‘yearly_accumulation’ values should be given in terms of Megatonnes of CO2e/ha-yr. The ‘half-life’ values must be given in terms of years. The ‘disturbance’ values must be given as a decimal (e.g. 0.5 for 50%) of stock disturbed given a transition occurs away from a lulc-class.
- lulc_baseline_map_uri (str) – a GDAL-supported raster representing the baseline landscape/seascape.
- lulc_transition_maps_list (list) – a list of GDAL-supported rasters representing the landscape/seascape at particular points in time. Provided in chronological order.
- lulc_transition_years_list (list) – a list of years that respectively correspond to transition years of the rasters. Provided in chronological order.
- analysis_year (int) – optional. A year that extends the transient analysis beyond the last transition year. If provided, it must come chronologically after the last transition year; otherwise, the final timestep of the model is set to the last transition year.
- do_economic_analysis (bool) – boolean value indicating whether model should run economic analysis.
- do_price_table (bool) – boolean value indicating whether a price table is included in the arguments and to be used or a price and interest rate is provided and to be used instead.
- price (float) – the price per Megatonne CO2 e at the base year.
- interest_rate (float) – the interest rate on the price per Megatonne CO2e, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
- price_table_uri (bool) – if args[‘do_price_table’] is set to True the provided CSV table is used in place of the initial price and interest rate inputs. The table contains the price per Megatonne CO2e sequestered for a given year, for all years from the original snapshot to the analysis year, if provided.
- discount_rate (float) – the discount rate on future valuations of sequestered carbon, compounded yearly. Provided as a percentage (e.g. 3.0 for 3%).
Example Args:
args = {
    'workspace_dir': 'path/to/workspace/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lulc_lookup_uri',
    'lulc_transition_matrix_uri': 'path/to/lulc_transition_uri',
    'carbon_pool_initial_uri': 'path/to/carbon_pool_initial_uri',
    'carbon_pool_transient_uri': 'path/to/carbon_pool_transient_uri',
    'lulc_baseline_map_uri': 'path/to/baseline_map.tif',
    'lulc_transition_maps_list': [raster1_uri, raster2_uri, ...],
    'lulc_transition_years_list': [2000, 2005, ...],
    'analysis_year': 2100,
    'do_economic_analysis': '<boolean>',
    'do_price_table': '<boolean>',
    'price': '<float>',
    'interest_rate': '<float>',
    'price_table_uri': 'path/to/price_table',
    'discount_rate': '<float>',
}
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
get_inputs
(args)¶ Get Inputs.
Parameters: - workspace_dir (str) – workspace directory
- results_suffix (str) – optional suffix appended to results
- lulc_lookup_uri (str) – lulc lookup table filepath
- lulc_transition_matrix_uri (str) – lulc transition table filepath
- carbon_pool_initial_uri (str) – initial conditions table filepath
- carbon_pool_transient_uri (str) – transient conditions table filepath
- lulc_baseline_map_uri (str) – baseline map filepath
- lulc_transition_maps_list (list) – ordered list of transition map filepaths
- lulc_transition_years_list (list) – ordered list of transition years
- analysis_year (int) – optional final year to extend the analysis beyond the last transition year
- do_economic_analysis (bool) – whether to run economic component of the analysis
- do_price_table (bool) – whether to use the price table for the economic component of the analysis
- price (float) – the price of net sequestered carbon
- interest_rate (float) – the interest rate on the price of carbon
- price_table_uri (str) – price table filepath
- discount_rate (float) – the discount rate on future valuations of carbon
Returns: d – data dictionary.
Return type: dict
- Example Returns:
- d = {
    'do_economic_analysis': <bool>,
    'lulc_to_Sb': <dict>,
    'lulc_to_Ss': <dict>,
    'lulc_to_L': <dict>,
    'lulc_to_Yb': <dict>,
    'lulc_to_Ys': <dict>,
    'lulc_to_Hb': <dict>,
    'lulc_to_Hs': <dict>,
    'lulc_trans_to_Db': <dict>,
    'lulc_trans_to_Ds': <dict>,
    'C_r_rasters': <list>,
    'transition_years': <list>,
    'snapshot_years': <list>,
    'timesteps': <int>,
    'transitions': <list>,
    'price_t': <list>,
    'File_Registry': <dict>,
}
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
get_num_blocks
(raster_uri)¶ Get the number of blocks in a raster file.
Parameters: raster_uri (str) – filepath to raster Returns: num_blocks – number of blocks in raster Return type: int
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
is_transition_year
(snapshot_years, transitions, timestep)¶ Check whether given timestep is a transition year.
Parameters: - snapshot_years (list) – list of snapshot years.
- transitions (int) – number of transitions.
- timestep (int) – current timestep.
Returns: is_transition_year – whether the year corresponding to the timestep is a transition year.
Return type: bool
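Assuming timesteps are counted as years elapsed since the baseline (first snapshot) year, and that transition years are the snapshot years after the baseline, the check might look like this sketch. Both assumptions are inferred from the surrounding docstrings, not confirmed from the source.

```python
def is_transition_year(snapshot_years, transitions, timestep):
    # Convert the timestep back to a calendar year (assumption:
    # timesteps count from the first snapshot year).
    current_year = snapshot_years[0] + timestep
    # Transition years are the post-baseline snapshot years, limited
    # to the declared number of transitions (assumption).
    return current_year in snapshot_years[1:transitions + 1]
```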
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
read_from_raster
(input_raster, offset_block)¶ Read numpy array from raster block.
Parameters: - input_raster (str) – filepath to input raster
- offset_block (dict) – dictionary of offset information
Returns: array – a blocked array of the input raster
Return type: np.array
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
reclass
(array, d, out_dtype=None, nodata_mask=None)¶ Reclassify values in array.
If a nodata value is not provided, the function returns an array with NaN in cells that could not be reclassified.
Parameters: - array (np.array) – input data
- d (dict) – reclassification map
- out_dtype (np.dtype) – a numpy datatype for the reclass_array
- nodata_mask (number) – for floats, a nodata value that is set to np.nan if provided to make reclass_array nodata values consistent
Returns: reclass_array – reclassified array
Return type: np.array
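The reclassification described above can be sketched with numpy boolean masks. This is an illustrative reimplementation, not the module's exact code:

```python
import numpy as np

def reclass(array, d, out_dtype=np.float64, nodata_mask=None):
    # Start from NaN so cells with no mapping stay marked as unmapped.
    out = np.full(array.shape, np.nan, dtype=out_dtype)
    for old_value, new_value in d.items():
        out[array == old_value] = new_value
    if nodata_mask is not None:
        # Normalize nodata cells to NaN for a consistent output.
        out[array == nodata_mask] = np.nan
    return out
```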
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
reclass_transition
(a_prev, a_next, trans_dict, out_dtype=None, nodata_mask=None)¶ Reclass arrays based on element-wise combinations between two arrays.
Parameters: - a_prev (np.array) – previous lulc array
- a_next (np.array) – next lulc array
- trans_dict (dict) – reclassification map
- out_dtype (np.dtype) – a numpy datatype for the reclass_array
- nodata_mask (number) – for floats, a nodata value that is set to np.nan if provided to make reclass_array nodata values consistent
Returns: reclass_array – reclassified array
Return type: np.array
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
s_to_timestep
(snapshot_years, snapshot_idx)¶ Convert snapshot index position to timestep.
Parameters: - snapshot_years (list) – list of snapshot years.
- snapshot_idx (int) – index of snapshot
Returns: snapshot_timestep – timestep of the snapshot
Return type: int
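Assuming timesteps are years elapsed since the first snapshot, the conversion reduces to a subtraction. A sketch under that assumption, not the verified implementation:

```python
def s_to_timestep(snapshot_years, snapshot_idx):
    # Timestep = years between this snapshot and the baseline year.
    return snapshot_years[snapshot_idx] - snapshot_years[0]
```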
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
timestep_to_transition_idx
(snapshot_years, transitions, timestep)¶ Convert timestep to transition index.
Parameters: - snapshot_years (list) – a list of years corresponding to the provided rasters
- transitions (int) – the number of transitions in the scenario
- timestep (int) – the current timestep
Returns: transition_idx – the current transition
Return type: int
-
natcap.invest.coastal_blue_carbon.coastal_blue_carbon.
write_to_raster
(output_raster, array, xoff, yoff)¶ Write numpy array to raster block.
Parameters: - output_raster (str) – filepath to output raster
- array (np.array) – block to save to raster
- xoff (int) – offset index for x-dimension
- yoff (int) – offset index for y-dimension
Preprocessor¶
Coastal Blue Carbon Preprocessor.
-
natcap.invest.coastal_blue_carbon.preprocessor.
execute
(args)¶ Coastal Blue Carbon Preprocessor.
The preprocessor accepts a list of rasters and checks for cell-transitions across the rasters. The preprocessor outputs a CSV file representing a matrix of land cover transitions, each cell prefilled with a string indicating whether carbon accumulates or is disturbed as a result of the transition, if a transition occurs.
Parameters: - workspace_dir (string) – directory path to workspace
- results_suffix (string) – append to outputs directory name if provided
- lulc_lookup_uri (string) – filepath of lulc lookup table
- lulc_snapshot_list (list) – a list of filepaths to lulc rasters
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': '',
    'lulc_lookup_uri': 'path/to/lookup.csv',
    'lulc_snapshot_list': ['path/to/raster1', 'path/to/raster2', ...],
}
-
natcap.invest.coastal_blue_carbon.preprocessor.
read_from_raster
(input_raster, offset_block)¶ Read block from raster.
Parameters: - input_raster (str) – filepath to raster.
- offset_block (dict) – where the block is indexed.
Returns: a – the raster block.
Return type: np.array
Module contents¶
Coastal Blue Carbon package.
Carbon Package¶
Model Entry Point¶
-
natcap.invest.carbon.carbon_combined.
execute_30
(**args)¶ Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
Carbon Combined¶
Integrated carbon model with biophysical and valuation components.
-
natcap.invest.carbon.carbon_combined.
execute
(args)¶ Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
-
natcap.invest.carbon.carbon_combined.
execute_30
(**args) Carbon Storage and Sequestration.
This can include the biophysical model, the valuation model, or both.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- suffix (string) – a string to append to any output file name (optional)
- do_biophysical (boolean) – whether to run the biophysical model
- lulc_cur_uri (string) – a uri to a GDAL raster dataset (required)
- lulc_cur_year (int) – An integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’, or ‘hwp_fut_shape_uri’ key)
- lulc_fut_uri (string) – a uri to a GDAL raster dataset (optional if calculating sequestration)
- lulc_redd_uri (string) – a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional).
- lulc_fut_year (int) – An integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- carbon_pools_uri (string) – a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- hwp_cur_shape_uri (String) – Current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- hwp_fut_shape_uri (String) – Future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
- do_uncertainty (boolean) – a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- carbon_pools_uncertain_uri (string) – as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- confidence_threshold (float) – a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- sequest_uri (string) – uri to a GDAL raster dataset describing the amount of carbon sequestered.
- yr_cur (int) – the year at which the sequestration measurement started
- yr_fut (int) – the year at which the sequestration measurement ended
- do_valuation (boolean) – whether to run the valuation model
- carbon_price_units (string) – indicates whether the price is in terms of carbon or carbon dioxide. Can value either as ‘Carbon (C)’ or ‘Carbon Dioxide (CO2)’.
- V (string) – value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- r (int) – the market discount rate in terms of a percentage
- c (float) – the annual rate of change in the price of carbon
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir/',
    'suffix': '_results',
    'do_biophysical': True,
    'lulc_cur_uri': 'path/to/lulc_cur',
    'lulc_cur_year': 2014,
    'lulc_fut_uri': 'path/to/lulc_fut',
    'lulc_redd_uri': 'path/to/lulc_redd',
    'lulc_fut_year': 2025,
    'carbon_pools_uri': 'path/to/carbon_pools',
    'hwp_cur_shape_uri': 'path/to/hwp_cur_shape',
    'hwp_fut_shape_uri': 'path/to/hwp_fut_shape',
    'do_uncertainty': True,
    'carbon_pools_uncertain_uri': 'path/to/carbon_pools_uncertain',
    'confidence_threshold': 50.0,
    'sequest_uri': 'path/to/sequest_uri',
    'yr_cur': 2014,
    'yr_fut': 2025,
    'do_valuation': True,
    'carbon_price_units': 'Carbon (C)',
    'V': 43.0,
    'r': 7,
    'c': 0,
}
Returns: outputs – contains names of all output files Return type: dictionary
Carbon Biophysical¶
InVEST Carbon biophysical module at the “uri” level
-
exception
natcap.invest.carbon.carbon_biophysical.
MapCarbonPoolError
¶ Bases:
exceptions.Exception
A custom error raised when lulc codes from a raster do not match the carbon pools data file.
-
natcap.invest.carbon.carbon_biophysical.
execute
(args)¶
-
natcap.invest.carbon.carbon_biophysical.
execute_30
(**args)¶ This function invokes the carbon model given URI inputs of files. It handles file I/O and opens/creates the appropriate objects to pass to the core carbon biophysical processing function. It may write log, warning, or error messages to stdout.
args - a python dictionary with the following possible entries:
- args[‘workspace_dir’] - a uri to the directory that will write output and other temporary files during calculation. (required)
- args[‘suffix’] - a string to append to any output file name (optional)
- args[‘lulc_cur_uri’] - a uri to a GDAL raster dataset (required)
- args[‘carbon_pools_uri’] - a uri to a CSV or DBF dataset mapping carbon storage density to the lulc classifications specified in the lulc rasters. (required if ‘do_uncertainty’ is false)
- args[‘carbon_pools_uncertain_uri’] - as above, but has probability distribution data for each lulc type rather than point estimates. (required if ‘do_uncertainty’ is true)
- args[‘do_uncertainty’] - a boolean that indicates whether we should do uncertainty analysis. Defaults to False if not present.
- args[‘confidence_threshold’] - a number between 0 and 100 that indicates the minimum threshold for which we should highlight regions in the output raster. (required if ‘do_uncertainty’ is True)
- args[‘lulc_fut_uri’] - a uri to a GDAL raster dataset (optional if calculating sequestration)
- args[‘lulc_cur_year’] - an integer representing the year of lulc_cur used in HWP calculation (required if args contains a ‘hwp_cur_shape_uri’ or ‘hwp_fut_shape_uri’ key)
- args[‘lulc_fut_year’] - an integer representing the year of lulc_fut used in HWP calculation (required if args contains a ‘hwp_fut_shape_uri’ key)
- args[‘lulc_redd_uri’] - a uri to a GDAL raster dataset that represents land cover data for the REDD policy scenario (optional)
- args[‘hwp_cur_shape_uri’] - current shapefile uri for harvested wood calculation (optional, include if calculating current lulc hwp)
- args[‘hwp_fut_shape_uri’] - future shapefile uri for harvested wood calculation (optional, include if calculating future lulc hwp)
Returns a dict with the names of all output files.
Carbon Valuation¶
InVEST valuation interface module. Informally known as the URI level.
-
natcap.invest.carbon.carbon_valuation.
execute
(args)¶
-
natcap.invest.carbon.carbon_valuation.
execute_30
(**args)¶ This function calculates carbon sequestration valuation.
args - a python dictionary with at least the following entries:
- args['workspace_dir'] - a uri to the directory that will hold output and other temporary files during calculation. (required)
- args['suffix'] - a string to append to any output file name (optional)
- args['sequest_uri'] - a uri to a GDAL raster dataset describing the amount of carbon sequestered (baseline scenario, if this is REDD)
- args['sequest_redd_uri'] (optional) - uri to the raster dataset for sequestration under the REDD policy scenario
- args['conf_uri'] (optional) - uri to the raster dataset indicating confident pixels for sequestration or emission
- args['conf_redd_uri'] (optional) - as above, but for the REDD scenario
- args['carbon_price_units'] - a string indicating whether the price is in terms of carbon or carbon dioxide. Can be either 'Carbon (C)' or 'Carbon Dioxide (CO2)'.
- args['V'] - value of a sequestered ton of carbon or carbon dioxide in dollars per metric ton
- args['r'] - the market discount rate in terms of a percentage
- args['c'] - the annual rate of change in the price of carbon
- args['yr_cur'] - the year at which the sequestration measurement started
- args['yr_fut'] - the year at which the sequestration measurement ended

Returns a dict with output file URIs.
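The interaction of args['V'], args['r'], and args['c'] can be sketched as an annualized, discounted valuation of sequestered carbon. This is a hedged illustration only; the function name and exact formula are assumptions, not the model's published equation:

```python
def sequest_value(sequest_tons, V, r, c, yr_cur, yr_fut):
    # Hypothetical sketch: spread sequestration evenly over the years
    # between yr_cur and yr_fut, value each year's share at V dollars
    # per metric ton, and discount by the market rate r (%) and the
    # annual price change c (%).
    years = yr_fut - yr_cur
    annual = sequest_tons / float(years)
    total = 0.0
    for t in range(years):
        total += annual * V / (((1.0 + r / 100.0) ** t) *
                               ((1.0 + c / 100.0) ** t))
    return total
```

With zero discounting (r = 0, c = 0) this reduces to the undiscounted value of the full sequestered amount.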
Carbon Utilities¶
Useful functions for the carbon biophysical and valuation models.
-
natcap.invest.carbon.carbon_utils.
make_suffix
(model_args)¶ Return the suffix from the args (prepending ‘_’ if necessary).
-
natcap.invest.carbon.carbon_utils.
setup_dirs
(workspace_dir, *dirnames)¶ Create the requested directories, and return the pathnames.
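A minimal sketch of the documented behavior (the actual implementation may differ):

```python
import os

def setup_dirs(workspace_dir, *dirnames):
    # Create each requested directory under workspace_dir if it does
    # not already exist, and return the full pathnames in order.
    paths = []
    for name in dirnames:
        path = os.path.join(workspace_dir, name)
        if not os.path.exists(path):
            os.makedirs(path)
        paths.append(path)
    return paths
```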
-
natcap.invest.carbon.carbon_utils.
sum_pixel_values_from_uri
(uri)¶ Return the sum of the values of all pixels in the given file.
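The actual function reads the raster through GDAL; the core operation — summing valid pixels while ignoring a nodata value — can be sketched with numpy (the function name and nodata handling here are assumptions):

```python
import numpy as np

def sum_pixel_values(array, nodata=None):
    # Sum all pixel values, treating the nodata value (if given) as zero.
    arr = np.asarray(array, dtype=float)
    if nodata is not None:
        arr = np.where(arr == nodata, 0.0, arr)
    return float(arr.sum())
```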
Module contents¶
Crop Production Package¶
Table of Contents¶
Model Entry Point¶
-
natcap.invest.crop_production.crop_production.
execute
(args)¶ Crop Production.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['lookup_table'] (str) – filepath to a CSV table used to convert the crop code provided in the Crop Map to the crop name that can be used for searching through inputs and formatting outputs.
- args['aoi_raster'] (str) – a GDAL-supported raster representing a crop management scenario.
- args['dataset_dir'] (str) – the provided folder should contain a set of folders and data specified in the ‘Running the Model’ section of the model’s User Guide.
- args['yield_function'] (str) – the method used to compute crop yield. Can be one of three: ‘observed’, ‘percentile’, and ‘regression’.
- args['percentile_column'] (str) – for percentile yield function, the table column name must be provided so that the program can fetch the correct yield values for each climate bin.
- args['fertilizer_dir'] (str) – path to folder that contains a set of GDAL-supported rasters representing the amount of Nitrogen (N), Phosphorous (P2O5), and Potash (K2O) applied to each area of land (kg/ha).
- args['irrigation_raster'] (str) – filepath to a GDAL-supported raster representing whether irrigation occurs or not. A zero value indicates that no irrigation occurs. A one value indicates that irrigation occurs. If any other values are provided, irrigation is assumed to occur within that cell area.
- args['compute_nutritional_contents'] (boolean) – if true, calculates nutrition from crop production and creates associated outputs.
- args['nutrient_table'] (str) – filepath to a CSV table containing information about the nutrient contents of each crop.
- args['compute_financial_analysis'] (boolean) – if true, calculates economic returns from crop production and creates associated outputs.
- args['economics_table'] (str) – filepath to a CSV table containing information related to market price of a given crop and the costs involved with producing that crop.
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': 'scenario_name',
    'lookup_table': 'path/to/lookup_table',
    'aoi_raster': 'path/to/aoi_raster',
    'dataset_dir': 'path/to/dataset_dir/',
    'yield_function': 'regression',
    'percentile_column': 'yield_95th',
    'fertilizer_dir': 'path/to/fertilizer_rasters_dir/',
    'irrigation_raster': 'path/to/is_irrigated_raster',
    'compute_nutritional_contents': True,
    'nutrient_table': 'path/to/nutrition_table',
    'compute_financial_analysis': True,
    'economics_table': 'path/to/economics_table'
}
Crop Production IO Module¶
Crop Production Model Module¶
Module contents¶
Finfish Aquaculture Package¶
Model Entry Point¶
-
natcap.invest.finfish_aquaculture.finfish_aquaculture.
execute
(args)¶ Finfish Aquaculture.
This function will take care of preparing files passed into the finfish aquaculture model. It will handle all files/inputs associated with biophysical and valuation calculations and manipulations. It will create objects to be passed to the aquaculture_core.py module. It may write log, warning, or error messages to stdout.
Parameters: - workspace_dir (string) – The directory in which to place all result files.
- ff_farm_loc (string) – URI that points to a shape file of fishery locations
- farm_ID (string) – column heading used to describe individual farms. Used to link GIS location data to later inputs.
- g_param_a (float) – Growth parameter alpha, used in modeling fish growth, should be an int or float.
- g_param_b (float) – Growth parameter beta, used in modeling fish growth, should be an int or float.
- g_param_tau (float) – Growth parameter tau, used in modeling fish growth, should be an int or float
- use_uncertainty (boolean) –
- g_param_a_sd (float) – (description)
- g_param_b_sd (float) – (description)
- num_monte_carlo_runs (int) –
- water_temp_tbl (string) – URI to a CSV table where daily water temperature values are stored from one year
- farm_op_tbl (string) – URI to CSV table of static variables for calculations
- outplant_buffer (int) – This value will allow the outplanting start day to be flexible plus or minus the number of days specified here.
- do_valuation (boolean) – Boolean that indicates whether or not valuation should be performed on the aquaculture model
- p_per_kg (float) – Market price per kilogram of processed fish
- frac_p (float) – Fraction of market price that accounts for costs rather than profit
- discount (float) – Daily market discount rate
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'ff_farm_loc': 'path/to/shapefile',
    'farm_ID': 'FarmID',
    'g_param_a': 0.038,
    'g_param_b': 0.6667,
    'g_param_tau': 0.08,
    'use_uncertainty': True,
    'g_param_a_sd': 0.005,
    'g_param_b_sd': 0.05,
    'num_monte_carlo_runs': 1000,
    'water_temp_tbl': 'path/to/water_temp_tbl',
    'farm_op_tbl': 'path/to/farm_op_tbl',
    'outplant_buffer': 3,
    'do_valuation': True,
    'p_per_kg': 2.25,
    'frac_p': 0.3,
    'discount': 0.000192,
}
Finfish Aquaculture¶
InVEST finfish aquaculture file handler for biophysical and valuation data
-
natcap.invest.finfish_aquaculture.finfish_aquaculture.
execute
(args) Finfish Aquaculture.
(Docstring, parameters, and example args are identical to natcap.invest.finfish_aquaculture.finfish_aquaculture.execute() as documented under Model Entry Point above.)
-
natcap.invest.finfish_aquaculture.finfish_aquaculture.
format_ops_table
(op_path, farm_ID, ff_aqua_args)¶ Takes the path to the operating parameters table, plus the keyword to look for to identify the farm number that goes with each set of parameters, and outputs a 2D dictionary that contains all parameters by farm and description. The outer key is farm number, and the inner key is a string description of the parameter.
Input:
- op_path: URI to CSV table of static variables for calculations
- farm_ID: the string to look for in order to identify the column in which the farm numbers are stored. That column's data will become the keys for the dictionary output.
- ff_aqua_args: dictionary of arguments being created in order to be passed to the aquaculture core function.

Output:
- ff_aqua_args['farm_op_dict']: a dictionary that is built up to store the static parameters for the aquaculture model run. This is a 2D dictionary, where the outer key is the farm ID number and the inner keys are strings of parameter names.

Returns nothing.
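The 2D-dictionary shape described above can be sketched with the csv module; build_farm_op_dict and its arguments are hypothetical names for illustration, not the model's API:

```python
import csv

def build_farm_op_dict(op_path, farm_id_col):
    # Read the operations CSV and key each row's parameters by the
    # value in the farm-ID column, yielding
    # {farm_id: {param_name: value, ...}, ...}.
    farm_ops = {}
    with open(op_path) as op_file:
        for row in csv.DictReader(op_file):
            farm_id = row.pop(farm_id_col)
            farm_ops[farm_id] = row
    return farm_ops
```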
-
natcap.invest.finfish_aquaculture.finfish_aquaculture.
format_temp_table
(temp_path, ff_aqua_args)¶ This function does much the same thing as format_ops_table: it takes information from a temperature table and formats it into a 2D dictionary for output.
Input:
- temp_path: URI to a CSV file containing temperature data for 365 days for the farms on which we will look at growth cycles.
- ff_aqua_args: dictionary of arguments that we are building up in order to pass it to the aquaculture core module.

Output:
- ff_aqua_args['water_temp_dict']: a 2D dictionary containing temperature data for 365 days. The outer keys are days of the year from 0 to 364 (we need to be able to check the day modulo 365), which we manually shift down by 1, and the inner keys are farm ID numbers.

Returns nothing.
Finfish Aquaculture Core¶
Implementation of the aquaculture calculations, and subsequent outputs. This will pull from data passed in by finfish_aquaculture.
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
calc_farm_cycles
(outplant_buffer, a, b, tau, water_temp_dict, farm_op_dict, dur)¶ Input:
- outplant_buffer: the number of days surrounding the outplant day during which the fish growth cycle can still be started.
- a: growth parameter alpha. Float used as a scaler in the fish growth equation.
- b: growth parameter beta. Float used as an exponential multiplier in the fish growth equation.
- water_temp_dict: 2D dictionary which contains temperature values for farms. The outer keys are calendar days as strings, and the inner are farm numbers as strings.
- farm_op_dict: 2D dictionary which contains individual operating parameters for each farm. The outer key is farm number as a string, and the inner is string descriptors of each parameter.
- dur: float which describes the length of the growth simulation to run, in years.

Returns cycle_history where:
- cycle_history: dictionary which contains mappings from farms to a history of growth for each cycle completed on that farm. These entries are formatted as follows:
  Farm -> list of tuples of (day of outplanting, day of harvest, fish weight (grams))
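As a hedged illustration of how the growth parameters interact (alpha as a scaler, beta as an exponent on current weight, temperature and tau modulating the rate), a single day's growth update might look like the following. This is not the model's published equation:

```python
def daily_growth(weight_g, temp_c, a, b, tau):
    # Hypothetical daily update: the growth increment scales with
    # alpha, with current weight raised to beta, and with that day's
    # water temperature and tau.
    return weight_g + a * (weight_g ** b) * temp_c * tau
```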
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
calc_hrv_weight
(farm_op_dict, frac, mort, cycle_history)¶ Input:
- farm_op_dict: 2D dictionary which contains individual operating parameters for each farm. The outer key is farm number as a string, and the inner is string descriptors of each parameter.
- frac: a float representing the fraction of the fish that remains after processing.
- mort: a float referring to the daily mortality rate of fishes on an aquaculture farm.
- cycle_history: Farm -> list of tuples of (day of outplanting, day of harvest, fish weight (grams))

Returns a tuple (curr_cycle_totals, indiv_tpw_totals) where:
- curr_cycle_totals: dictionary which will hold a mapping from every farm (as identified by farm_ID) to the total processed weight of each farm
- indiv_tpw_totals: dictionary which will hold a farm -> list mapping, where the list holds the individual tpw for all cycles that the farm completed
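Conceptually, a farm's processed weight for one cycle combines survival under daily mortality with the post-processing fraction. A hedged sketch (the names and exact form are assumptions, not the model's implementation):

```python
def cycle_processed_weight(fish_weight_g, num_fish, frac, mort, cycle_days):
    # Fish surviving daily mortality over the cycle, times the harvest
    # weight of one fish, times the fraction left after processing.
    surviving = num_fish * (1.0 - mort) ** cycle_days
    return fish_weight_g * surviving * frac
```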
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
compute_uncertainty_data
(args, output_dir)¶ Does uncertainty analysis via a Monte Carlo simulation.
Returns a tuple with two 2D dicts:
- a dict containing relative file paths to produced histograms
- a dict containing statistical results (mean and standard deviation)

Each dict has farm IDs as outer keys, and result types (e.g. 'value', 'weight', and 'cycles') as inner keys.
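The Monte Carlo draws implied by g_param_a_sd and g_param_b_sd can be sketched as sampling (alpha, beta) pairs from normal distributions; this helper is illustrative only, not the module's actual code:

```python
import random

def draw_growth_params(a, a_sd, b, b_sd, num_runs, seed=None):
    # One (alpha, beta) sample per Monte Carlo run, drawn from normal
    # distributions centered on the user-supplied parameters.
    rng = random.Random(seed)
    return [(rng.gauss(a, a_sd), rng.gauss(b, b_sd))
            for _ in range(num_runs)]
```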
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
create_HTML_table
(output_dir, args, cycle_history, sum_hrv_weight, hrv_weight, farms_npv, value_history, histogram_paths, uncertainty_stats)¶ Inputs:
- output_dir: the directory in which we will be creating our .html file output.
- cycle_history: dictionary mapping farm ID -> list of tuples, each of which contains 3 things: (day of outplanting, day of harvest, harvest weight of a single fish in grams)
- sum_hrv_weight: dictionary which holds a mapping from farm ID -> total processed weight of each farm
- hrv_weight: dictionary which holds a farm -> list mapping, where the list holds the individual tpw for all cycles that the farm completed
- do_valuation: boolean variable that says whether or not valuation is desired
- farms_npv: dictionary with a farm -> float mapping, where each float is the net processed value of the fish processed on that farm, in $1000s of dollars.
- value_history: dictionary which holds a farm -> list mapping, where the list holds tuples containing (Net Revenue, Net Present Value) for each cycle completed by that farm

Output:
- HTML file: contains 3 tables that summarize inputs and outputs for the duration of the model.
  - Input Table: Farm Operations provided data, including Farm ID #, Cycle Number, weight of fish at start, weight of fish at harvest, number of fish in farm, start day for growing, and length of fallowing period
  - Output Table 1: Farm Harvesting data, including a summary table for each harvest cycle of each farm. Will show Farm ID, cycle number, days since outplanting date, harvested weight, net revenue, outplant day, and year.
  - Output Table 2: Model outputs for each farm, including Farm ID, net present value, number of completed harvest cycles, and total volume harvested.

Returns nothing.
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
do_monte_carlo_simulation
(args)¶ Performs a Monte Carlo simulation and returns the results.
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
execute
(args)¶ Runs the biophysical and valuation parts of the finfish aquaculture model. This will output:
1. A shapefile showing farm locations with the addition of number of harvest cycles, total processed weight at that farm, and, if valuation is true, total discounted net revenue at each farm location.
2. Three HTML tables summarizing all model I/O: a summary of user-provided data, a summary of each harvest cycle, and a summary of the outputs per farm.
3. A .txt file that is named according to the date and time the model is run, which lists the values used during that run.

Data in args should include the following:

--Biophysical Arguments--
- args['workspace_dir'] - the directory in which to place all result files.
- args['ff_farm_file'] - an open shapefile containing the locations of individual fisheries
- args['farm_ID'] - column heading used to describe individual farms. Used to link GIS location data to later inputs.
- args['g_param_a'] - growth parameter alpha, used in modeling fish growth; should be an int or a float.
- args['g_param_b'] - growth parameter beta, used in modeling fish growth; should be an int or a float.
- args['water_temp_dict'] - a dictionary which links a specific date to the farm numbers and their temperature values on that day. (Note: the outer keys are calendar days out of 365, starting with January 1 (day 0), and the inner keys are farm numbers.)
  Format: {'0': {'1': '8.447', '2': '8.447', '3': '8.947', ...},
           '1': {'1': '8.406', '2': '8.406', '3': '8.906', ...},
           ...}
- args['farm_op_dict'] - dictionary which links a specific farm ID # to another dictionary containing operating parameters mapped to their value for that particular farm. (Note: here the outer keys are farm IDs, not dates out of 365.)
  Format: {'1': {'Wt of Fish': '0.06', 'Tar Weight': '5.4', ...},
           '2': {'Wt of Fish': '0.06', 'Tar Weight': '5.4', ...},
           ...}
- args['frac_post_process'] - the fraction of edible fish left after processing is done to remove undesirable parts
- args['mort_rate_daily'] - mortality rate among fish in a year, divided by 365
- args['duration'] - duration of the simulation, in years
- args['outplant_buffer'] - this value will allow the outplant start day to be flexible plus or minus the number of days specified here.

--Valuation Arguments--
- args['do_valuation'] - boolean indicating whether or not to run the valuation process
- args['p_per_kg'] - market price per kilogram of processed fish
- args['frac_p'] - fraction of market price that accounts for costs rather than profit
- args['discount'] - daily market discount rate

Returns nothing.
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
make_histograms
(farm, results, output_dir, total_num_runs)¶ Makes a histogram for the given farm and data.
Returns a dict mapping type (e.g. ‘value’, ‘weight’) to the relative file path for the respective histogram.
-
natcap.invest.finfish_aquaculture.finfish_aquaculture_core.
valuation
(price_per_kg, frac_mrkt_price, discount, hrv_weight, cycle_history)¶ Performs the valuation calculations. Returns a tuple containing: a dictionary with a farm -> float mapping, where each float is the net processed value of the fish processed on that farm, in $1000s of dollars; and a dictionary containing a farm -> list mapping, where each entry in the list is a tuple of (Net Revenue, Net Present Value) for every cycle on that farm.
Inputs:
- price_per_kg: float representing the price per kilogram of finfish for valuation purposes.
- frac_mrkt_price: float that represents the fraction of market price that is attributable to costs.
- discount: float that is the daily market discount rate.
- cycle_history: Farm -> list of tuples of (day of outplanting, day of harvest, fish weight (grams))
- hrv_weight: Farm -> list of TPW for each cycle (kilograms)

Returns a tuple (val_history, valuations):
- val_history: dictionary which will hold a farm -> list mapping, where the list holds tuples containing (Net Revenue, Net Present Value) for each cycle completed by that farm
- valuations: dictionary with a farm -> float mapping, where each float is the net processed value of the fish processed on that farm
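Per-cycle net revenue and net present value as described above can be sketched like this (a simplified illustration; the model's exact discounting may differ):

```python
def cycle_valuation(tpw_kg, price_per_kg, frac_costs, discount, harvest_day):
    # Net revenue: processed weight times the profit share of the
    # market price. NPV: that revenue discounted back to day zero at
    # the daily discount rate.
    net_revenue = tpw_kg * price_per_kg * (1.0 - frac_costs)
    npv = net_revenue / ((1.0 + discount) ** harvest_day)
    return net_revenue, npv
```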
Module contents¶
Fisheries Package¶
Table of Contents¶
Fisheries Model Entry Point¶
-
natcap.invest.fisheries.fisheries.
execute
(args, create_outputs=True)¶ Fisheries.
Parameters: - args['workspace_dir'] (str) – location into which all intermediate and output files should be placed.
- args['results_suffix'] (str) – a string to append to output filenames
- args['aoi_uri'] (str) – location of shapefile which will be used as subregions for calculation. Each region must contain a 'Name' attribute (case-sensitive) matching the given name in the population parameters csv file.
- args['timesteps'] (int) – represents the number of time steps that the user desires the model to run.
- args['population_type'] (str) – specifies whether the model is age-specific or stage-specific. Options will be either “Age Specific” or “Stage Specific” and will change which equation is used in modeling growth.
- args['sexsp'] (str) – specifies whether or not the age and stage classes are distinguished by sex.
- args['harvest_units'] (str) – specifies how the user wants to get the harvest data. Options are either “Individuals” or “Weight”, and will change the harvest equation used in core. (Required if args[‘val_cont’] is True)
- args['do_batch'] (bool) – specifies whether program will perform a single model run or a batch (set) of model runs.
- args['population_csv_uri'] (str) – location of the population parameters csv. This will contain all age and stage specific parameters. (Required if args[‘do_batch’] is False)
- args['population_csv_dir'] (str) – location of the directory that contains the Population Parameters CSV files for batch processing (Required if args[‘do_batch’] is True)
- args['spawn_units'] (str) – (description)
- args['total_init_recruits'] (float) – represents the initial number of recruits that will be used in calculation of population on a per area basis.
- args['recruitment_type'] (str) – Name corresponding to one of the built-in recruitment functions {‘Beverton-Holt’, ‘Ricker’, ‘Fecundity’, Fixed}, or ‘Other’, meaning that the user is passing in their own recruitment function as an anonymous python function via the optional dictionary argument ‘recruitment_func’.
- args['recruitment_func'] (function) – Required if args[‘recruitment_type’] is set to ‘Other’. See below for instructions on how to create a user-defined recruitment function.
- args['alpha'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['beta'] (float) – must exist within args for BH or Ricker Recruitment. Parameter that will be used in calculation of recruitment.
- args['total_recur_recruits'] (float) – must exist within args for Fixed Recruitment. Parameter that will be used in calculation of recruitment.
- args['migr_cont'] (bool) – if True, model uses migration
- args['migration_dir'] (str) – if this parameter exists, it means migration is desired. This is the location of the parameters folder containing files for migration. There should be one file for every age class which migrates. (Required if args[‘migr_cont’] is True)
- args['val_cont'] (bool) – if True, model computes valuation
- args['frac_post_process'] (float) – represents the fraction of the species remaining after processing of the whole carcass is complete. This will exist only if valuation is desired for the particular species. (Required if args[‘val_cont’] is True)
- args['unit_price'] (float) – represents the price for a single unit of harvest. Exists only if valuation is desired. (Required if args[‘val_cont’] is True)
Example Args:
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'results_suffix': 'scenario_name',
    'aoi_uri': 'path/to/aoi_uri',
    'total_timesteps': 100,
    'population_type': 'Stage-Based',
    'sexsp': 'Yes',
    'harvest_units': 'Individuals',
    'do_batch': False,
    'population_csv_uri': 'path/to/csv_uri',
    'population_csv_dir': '',
    'spawn_units': 'Weight',
    'total_init_recruits': 100000.0,
    'recruitment_type': 'Ricker',
    'alpha': 32.4,
    'beta': 54.2,
    'total_recur_recruits': 92.1,
    'migr_cont': True,
    'migration_dir': 'path/to/mig_dir/',
    'val_cont': True,
    'frac_post_process': 0.5,
    'unit_price': 5.0,
}
Creating a User-Defined Recruitment Function
An optional argument has been created in the Fisheries Model to allow users proficient in Python to pass their own recruitment function into the program via the args dictionary.
Using the Beverton-Holt recruitment function as an example, here’s how a user might create and pass in their own recruitment function:
import natcap.invest
import numpy as np

# define input data
Matu = np.array([...])      # the Maturity vector in the Population Parameters File
Weight = np.array([...])    # the Weight vector in the Population Parameters File
LarvDisp = np.array([...])  # the LarvalDispersal vector in the Population Parameters File
alpha = 2.0  # scalar value
beta = 10.0  # scalar value
sexsp = 2    # 1 = not sex-specific, 2 = sex-specific

# create recruitment function
def spawners(N_prev):
    return (N_prev * Matu * Weight).sum()

def rec_func_BH(N_prev):
    N_0 = (LarvDisp * (alpha * spawners(N_prev) /
           (beta + spawners(N_prev))) / sexsp)
    return (N_0, spawners(N_prev))

# fill out args dictionary
args = {}
# ... define other arguments ...
args['recruitment_type'] = 'Other'      # lets the program know to use the user-defined function
args['recruitment_func'] = rec_func_BH  # pass recruitment function as 'anonymous' Python function

# run model
natcap.invest.fisheries.fisheries.execute(args)
Conditions that a new recruitment function must meet to run properly:
- The function must accept as an argument: a single numpy three-dimensional array (N_prev) representing the state of the population at the previous time step. N_prev has three dimensions: the indices of the first dimension correspond to the region (must be in the same order as provided in the Population Parameters File), the indices of the second dimension represent the sex if it is specific (i.e. two indices representing female, then male if the model is 'sex-specific', else just a single zero index representing the female and male populations aggregated together), and the indices of the third dimension represent age/stage in ascending order.
- The function must return: a tuple of two values. The first value (N_0) is a single numpy one-dimensional array representing the youngest age of the population for the next time step. The indices of the array correspond to the regions of the population (output in the same order as provided). If the model is sex-specific, it is currently assumed that males and females are produced in equal number and that the returned array has already been divided by 2 in the recruitment function. The second value (spawners) is the number or weight of the spawners created by the population from the previous time step, provided as a non-negative scalar float value.
Example of How Recruitment Function Operates within Fisheries Model:
# input data
N_prev_xsa = [[[region0-female-age0, region0-female-age1],
               [region0-male-age0, region0-male-age1]],
              [[region1-female-age0, region1-female-age1],
               [region1-male-age0, region1-male-age1]]]

# execute function
N_0_x, spawners = rec_func(N_prev_xsa)

# output data - where N_0 contains information about the youngest
# age/stage of the population for the next time step:
N_0_x = [region0-age0, region1-age0]  # if sex-specific, rec_func should divide by two before returning
type(spawners) is float
Fisheries IO Module¶
The Fisheries IO module contains functions for handling inputs and outputs
-
exception
natcap.invest.fisheries.fisheries_io.
MissingParameter
(msg)¶ Bases:
exceptions.StandardError
An exception class that may be raised when a necessary parameter is not provided by the user.
-
natcap.invest.fisheries.fisheries_io.
create_outputs
(vars_dict)¶ Creates outputs from variables generated in the run_population_model() function in the fisheries_model module
Creates the following:
- Results CSV File
- Results HTML Page
- Results Shapefile (if provided)
- Intermediate CSV File
Parameters: vars_dict (dictionary) – contains variables generated by model run
-
natcap.invest.fisheries.fisheries_io.
fetch_args
(args, create_outputs=True)¶ Fetches input arguments from the user, verifies them for correctness and completeness, and returns a list of variable dictionaries
Parameters: args (dictionary) – arguments from the user
Returns: model_list – set of variable dictionaries, one for each model
Return type: list
Example Returns:
model_list = [
    {
        'workspace_dir': 'path/to/workspace_dir',
        'results_suffix': 'scenario_name',
        'output_dir': 'path/to/output_dir',
        'aoi_uri': 'path/to/aoi_uri',
        'total_timesteps': 100,
        'population_type': 'Stage-Based',
        'sexsp': 2,
        'harvest_units': 'Individuals',
        'do_batch': False,
        'spawn_units': 'Weight',
        'total_init_recruits': 100.0,
        'recruitment_type': 'Ricker',
        'alpha': 32.4,
        'beta': 54.2,
        'total_recur_recruits': 92.1,
        'migr_cont': True,
        'val_cont': True,
        'frac_post_process': 0.5,
        'unit_price': 5.0,

        # Pop Params
        'population_csv_uri': 'path/to/csv_uri',
        'Survnaturalfrac': np.array(
            [[[...], [...]], [[...], [...]], ...]),
        'Classes': np.array([...]),
        'Vulnfishing': np.array([...], [...]),
        'Maturity': np.array([...], [...]),
        'Duration': np.array([...], [...]),
        'Weight': np.array([...], [...]),
        'Fecundity': np.array([...], [...]),
        'Regions': np.array([...]),
        'Exploitationfraction': np.array([...]),
        'Larvaldispersal': np.array([...]),

        # Mig Params
        'migration_dir': 'path/to/mig_dir',
        'Migration': [np.matrix, np.matrix, ...]
    },
    {
        ...  # additional dictionary doesn't exist when 'do_batch' is False
    }
]
Note
This function receives an unmodified ‘args’ dictionary from the user
-
natcap.invest.fisheries.fisheries_io.
read_migration_tables
(args, class_list, region_list)¶ Parses, verifies, and orders the list of migration matrices necessary for the program.
Parameters: - args (dictionary) – same args as model entry point
- class_list (list) – list of class names
- region_list (list) – list of region names
Returns: mig_dict – see example below
Return type: dictionary
Example Returns:
mig_dict = { 'Migration': [np.matrix, np.matrix, ...] }
Note
If migration matrices are not provided for all classes, the function will generate identity matrices for missing classes
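The identity-matrix fallback noted above can be sketched with numpy (the helper name and signature are hypothetical):

```python
import numpy as np

def fill_missing_migration(migration_by_class, class_list, num_regions):
    # Classes without a provided migration matrix get an identity
    # matrix, i.e. no movement between regions.
    return [migration_by_class.get(cls, np.identity(num_regions))
            for cls in class_list]
```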
-
natcap.invest.fisheries.fisheries_io.
read_population_csv
(args, uri)¶ Parses and verifies a single Population Parameters CSV file
Parses and verifies inputs from the Population Parameters CSV file. If not all necessary vectors are included, the function will raise a MissingParameter exception. The survival matrix will be arranged with class elements along the 1st dimension, sex along the 2nd, and region along the 3rd. Class vectors will be arranged by class elements, with a 2nd dimension for sex (depending on whether the model is sex-specific). Region vectors will be arranged by region elements, sex-agnostic.
Parameters: - args (dictionary) – arguments provided by user
- uri (string) – the particular Population Parameters CSV file to parse and verify
Returns: pop_dict – dictionary containing verified population arguments
Return type: dictionary
Example Returns:
pop_dict = {
    'population_csv_uri': 'path/to/csv',
    'Survnaturalfrac': np.array(
        [[...], [...]], [[...], [...]], ...),
    # Class Vectors
    'Classes': np.array([...]),
    'Vulnfishing': np.array([...], [...]),
    'Maturity': np.array([...], [...]),
    'Duration': np.array([...], [...]),
    'Weight': np.array([...], [...]),
    'Fecundity': np.array([...], [...]),
    # Region Vectors
    'Regions': np.array([...]),
    'Exploitationfraction': np.array([...]),
    'Larvaldispersal': np.array([...]),
}
-
natcap.invest.fisheries.fisheries_io.
read_population_csvs
(args)¶ Parses and verifies the Population Parameters CSV files
Parameters: args (dictionary) – arguments provided by user
Returns: pop_list – list of dictionaries containing verified population arguments
Return type: list
Example Returns:
pop_list = [
    {
        'Survnaturalfrac': np.array(
            [[...], [...]], [[...], [...]], ...),
        # Class Vectors
        'Classes': np.array([...]),
        'Vulnfishing': np.array([...], [...]),
        'Maturity': np.array([...], [...]),
        'Duration': np.array([...], [...]),
        'Weight': np.array([...], [...]),
        'Fecundity': np.array([...], [...]),
        # Region Vectors
        'Regions': np.array([...]),
        'Exploitationfraction': np.array([...]),
        'Larvaldispersal': np.array([...]),
    },
    {
        ...
    }
]
Fisheries Model Module¶
The Fisheries Model module contains functions for running the model.
Variable Suffix Notation:
- t: time
- x: area/region
- a: age/class
- s: sex
-
natcap.invest.fisheries.fisheries_model.initialize_vars(vars_dict)¶
Initializes variables for the model run.
Parameters: vars_dict (dictionary) – verified arguments and variables Returns: vars_dict – modified vars_dict with additional variables Return type: dictionary Example Returns:
vars_dict = { # (original vars) 'Survtotalfrac': np.array([...]), # a,s,x 'G_survtotalfrac': np.array([...]), # (same) 'P_survtotalfrac': np.array([...]), # (same) 'N_tasx': np.array([...]), # Index Order: t,a,s,x 'H_tx': np.array([...]), # t,x 'V_tx': np.array([...]), # t,x 'Spawners_t': np.array([...]), }
-
natcap.invest.fisheries.fisheries_model.run_population_model(vars_dict, init_cond_func, cycle_func, harvest_func)¶
Runs the model.
Parameters: - vars_dict (dictionary) –
- init_cond_func (lambda function) – sets initial conditions
- cycle_func (lambda function) – computes numbers for the next time step
- harvest_func (lambda function) – computes harvest and valuation
Returns: vars_dict (dictionary)
Example Returned Dictionary:
{ # (other items) ... 'N_tasx': np.array([...]), # Index Order: time, class, sex, region 'H_tx': np.array([...]), # Index Order: time, region 'V_tx': np.array([...]), # Index Order: time, region 'Spawners_t': np.array([...]), 'equilibrate_timestep': <int>, }
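The driver pattern behind run_population_model — initial conditions, a per-step cycle function, and an optional harvest function — can be sketched with toy stand-in functions. This is not the InVEST implementation; the function names and doubling dynamics here are invented purely to show how the three callables compose:

```python
import numpy as np

def run_model(n_steps, init_cond_func, cycle_func, harvest_func):
    """Minimal sketch of the driver loop: initialize, cycle, then harvest."""
    N = init_cond_func()
    history = [N]
    for _ in range(n_steps - 1):
        N, spawners = cycle_func(N)  # next population and spawner count
        history.append(N)
    harvest = harvest_func(history[-1]) if harvest_func else None
    return np.array(history), harvest

# Toy stand-ins: population doubles each step, harvest takes 10%.
init_cond = lambda: np.array([100.0])
cycle = lambda N: (N * 2.0, float(N.sum()))
harvest = lambda N: 0.1 * N

history, H = run_model(4, init_cond, cycle, harvest)
print(history.ravel())  # [100. 200. 400. 800.]
print(H)                # [80.]
```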
-
natcap.invest.fisheries.fisheries_model.set_cycle_func(vars_dict, rec_func)¶
Creates a function to run a single cycle in the model.
Parameters: - vars_dict (dictionary) –
- rec_func (lambda function) – recruitment function
Example Output of Returned Cycle Function:
N_asx = np.array([...]) spawners = <int> N_next, spawners = cycle_func(N_prev)
-
natcap.invest.fisheries.fisheries_model.set_harvest_func(vars_dict)¶
Creates a harvest function that calculates the harvest and valuation of the fisheries population over each time step for a given region. Returns None if harvest isn’t selected by the user.
Example Outputs of Returned Harvest Function:
H_x, V_x = harv_func(N_tasx) H_x = np.array([3.0, 4.5, 2.5, ...]) V_x = np.array([6.0, 9.0, 5.0, ...])
-
natcap.invest.fisheries.fisheries_model.set_init_cond_func(vars_dict)¶
Creates a function to set the initial conditions of the model.
Parameters: vars_dict (dictionary) – variables Returns: init_cond_func – initial conditions function Return type: lambda function Example Return Array:
N_asx = np.ndarray([...])
-
natcap.invest.fisheries.fisheries_model.set_recru_func(vars_dict)¶
Creates a recruitment function that calculates the number of recruits for class 0 at time t for each region (currently sex-agnostic). Also returns the number of spawners.
Parameters: vars_dict (dictionary)
Returns: rec_func – recruitment function
Return type: function
Example Output of Returned Recruitment Function:
N_next[0], spawners = rec_func(N_prev)
Fisheries Habitat Scenario Tool Module¶
The Fisheries Habitat Scenario Tool module contains the high-level code for generating a new Population Parameters CSV File based on habitat area change and the dependencies that particular classes of the given species have on particular habitats.
-
natcap.invest.fisheries.fisheries_hst.execute(args)¶
Fisheries: Habitat Scenario Tool.
The Fisheries Habitat Scenario Tool generates a new Population Parameters CSV File with modified survival attributes across classes and regions based on habitat area changes and class-level dependencies on those habitats.
Parameters:
- args['workspace_dir'] (string) – location into which the resultant modified Population Parameters CSV file should be placed.
- args['sexsp'] (string) – specifies whether or not the age and stage classes are distinguished by sex. Options: ‘Yes’ or ‘No’
- args['population_csv_uri'] (string) – location of the population parameters CSV file. This file contains all age- and stage-specific parameters.
- args['habitat_chg_csv_uri'] (string) – location of the habitat change parameters CSV file. This file contains habitat area change information.
- args['habitat_dep_csv_uri'] (string) – location of the habitat dependency parameters CSV file. This file contains habitat-class dependency information.
- args['gamma'] (float) – describes the relationship between a change in habitat area and a change in survival of life stages dependent on that habitat.
Returns: None
Example Args:
args = { 'workspace_dir': 'path/to/workspace_dir/', 'sexsp': 'Yes', 'population_csv_uri': 'path/to/csv', 'habitat_chg_csv_uri': 'path/to/csv', 'habitat_dep_csv_uri': 'path/to/csv', 'gamma': 0.5, }
Note:
- Modified Population Parameters CSV File saved to ‘workspace_dir/output/’
The entry point performs three steps:
# Parse and verify inputs
vars_dict = io.fetch_args(args)
# Convert data
vars_dict = convert_survival_matrix(vars_dict)
# Generate the modified Population Parameters CSV file
io.save_population_csv(vars_dict)
natcap.invest.fisheries.fisheries_hst.convert_survival_matrix(vars_dict)¶
Creates a new survival matrix based on the information provided by the user related to habitat area changes and class-level dependencies on those habitats.
Parameters: vars_dict (dictionary) – see fisheries_preprocessor_io.fetch_args for example
Returns: vars_dict – modified vars_dict with a new survival matrix accessible using the key ‘Surv_nat_xsa_mod’, with element values in [0, 1]
Return type: dictionary
Example Returns:
ret = { # Other Variables... 'Surv_nat_xsa_mod': np.ndarray([...]) }
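The gamma-scaled survival adjustment can be sketched numerically. This is an illustrative assumption only: the exact formula convert_survival_matrix applies is defined in the InVEST source, and the simple form below (relative habitat change, scaled by gamma and by the class's habitat dependency) merely shows the kind of relationship the gamma parameter controls:

```python
import numpy as np

def adjust_survival(surv, hab_chg_frac, hab_dep, gamma):
    """Hypothetical adjustment, NOT the InVEST formula.
    surv: natural survival in [0, 1]
    hab_chg_frac: fractional habitat area change (e.g. -0.5 for a 50% loss)
    hab_dep: class dependency on the habitat, in [0, 1]
    gamma: sensitivity of survival to habitat change
    """
    adjusted = surv * (1.0 + gamma * hab_chg_frac * hab_dep)
    return np.clip(adjusted, 0.0, 1.0)  # keep result within [0, 1]

# A 50% habitat loss, full dependency, gamma = 0.5 cuts survival by 25%.
print(adjust_survival(0.8, -0.5, 1.0, 0.5))  # ~0.6
```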
Fisheries Habitat Scenario Tool IO Module¶
The Fisheries Habitat Scenarios Tool IO module contains functions for handling inputs and outputs
-
exception natcap.invest.fisheries.fisheries_hst_io.MissingParameter(msg)¶
Bases: exceptions.StandardError
An exception class that may be raised when a necessary parameter is not provided by the user.
-
natcap.invest.fisheries.fisheries_hst_io.fetch_args(args)¶
Fetches input arguments from the user, verifies them for correctness and completeness, and returns a dictionary of variables.
Parameters: args (dictionary) – arguments from the user (same as Fisheries Preprocessor entry point)
Returns: vars_dict – dictionary containing necessary variables
Return type: dictionary
Raises: ValueError – parameter mismatch between Population and Habitat CSV files
Example Returns:
vars_dict = { 'workspace_dir': 'path/to/workspace_dir/', 'output_dir': 'path/to/output_dir/', 'sexsp': 2, 'gamma': 0.5, # Pop Vars 'population_csv_uri': 'path/to/csv_uri', 'Surv_nat_xsa': np.array( [[[...], [...]], [[...], [...]], ...]), 'Classes': np.array([...]), 'Class_vectors': { 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), }, 'Regions': np.array([...]), 'Region_vectors': { 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, # Habitat Vars 'habitat_chg_csv_uri': 'path/to/csv', 'habitat_dep_csv_uri': 'path/to/csv', 'Habitats': ['habitat1', 'habitat2', ...], 'Hab_classes': ['class1', 'class2', ...], 'Hab_regions': ['region1', 'region2', ...], 'Hab_chg_hx': np.array( [[[...], [...]], [[...], [...]], ...]), 'Hab_dep_ha': np.array( [[[...], [...]], [[...], [...]], ...]), 'Hab_class_mvmt_a': np.array([...]), 'Hab_dep_num_a': np.array([...]), }
-
natcap.invest.fisheries.fisheries_hst_io.read_habitat_chg_csv(args)¶
Parses and verifies a Habitat Change Parameters CSV file and returns a dictionary of information related to the interaction between a species and the given habitats.
Parses the Habitat Change Parameters CSV file for the following vectors:
- Names of Habitats and Regions
- Habitat Area Change
Parameters: args (dictionary) – arguments from the user (same as Fisheries HST entry point)
Returns: habitat_chg_dict – dictionary containing necessary variables
Return type: dictionary
Raises:
- MissingParameter – required parameter not included
- ValueError – values are out of bounds or of wrong type
- IndexError – likely a file formatting issue
Example Returns:
habitat_chg_dict = { 'Habitats': ['habitat1', 'habitat2', ...], 'Hab_regions': ['region1', 'region2', ...], 'Hab_chg_hx': np.array( [[[...], [...]], [[...], [...]], ...]), }
-
natcap.invest.fisheries.fisheries_hst_io.read_habitat_dep_csv(args)¶
Parses and verifies a Habitat Dependency Parameters CSV file and returns a dictionary of information related to the interaction between a species and the given habitats.
Parses the Habitat Dependency Parameters CSV file for the following vectors:
- Names of Habitats and Classes
- Habitat-Class Dependency
The following vectors are derived from the information given in the file:
- Classes where movement between habitats occurs
- Number of habitats that a particular class depends upon
Parameters: args (dictionary) – arguments from the user (same as Fisheries HST entry point)
Returns: habitat_dep_dict – dictionary containing necessary variables
Return type: dictionary
Raises: - MissingParameter - required parameter not included
- ValueError - values are out of bounds or of wrong type
- IndexError - likely a file formatting issue
Example Returns:
habitat_dep_dict = { 'Habitats': ['habitat1', 'habitat2', ...], 'Hab_classes': ['class1', 'class2', ...], 'Hab_dep_ha': np.array( [[[...], [...]], [[...], [...]], ...]), 'Hab_class_mvmt_a': np.array([...]), 'Hab_dep_num_a': np.array([...]), }
-
natcap.invest.fisheries.fisheries_hst_io.read_population_csv(args)¶
Parses and verifies a single Population Parameters CSV file.
Parses and verifies inputs from the Population Parameters CSV file. If not all necessary vectors are included, the function will raise a MissingParameter exception. The survival matrix is arranged with classes along the 1st dimension, sex along the 2nd, and region along the 3rd. Class vectors are arranged with classes along the 1st dimension and sex along the 2nd (when the model is sex-specific). Region vectors are arranged by region and are sex-agnostic.
Parameters: args (dictionary) – arguments provided by user
Returns: pop_dict – dictionary containing verified population arguments
Return type: dictionary
Raises:
- MissingParameter – required parameter not included
- ValueError – values are out of bounds or of wrong type
Example Returns:
pop_dict = { 'population_csv_uri': 'path/to/csv', 'Surv_nat_xsa': np.array( [[...], [...]], [[...], [...]], ...), # Class Vectors 'Classes': np.array([...]), 'Class_vector_names': [...], 'Class_vectors': { 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), }, # Region Vectors 'Regions': np.array([...]), 'Region_vector_names': [...], 'Region_vectors': { 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, }
-
natcap.invest.fisheries.fisheries_hst_io.save_population_csv(vars_dict)¶
Creates a new Population Parameters CSV file based on the provided inputs.
Parameters: vars_dict (dictionary) – variables generated by the preprocessor arguments and model run.
Example Args:
args = { 'workspace_dir': 'path/to/workspace_dir/', 'output_dir': 'path/to/output_dir/', 'sexsp': 2, 'population_csv_uri': 'path/to/csv', # original csv file 'Surv_nat_xsa': np.ndarray([...]), 'Surv_nat_xsa_mod': np.ndarray([...]), # Class Vectors 'Classes': np.array([...]), 'Class_vector_names': [...], 'Class_vectors': { 'Vulnfishing': np.array([...], [...]), 'Maturity': np.array([...], [...]), 'Duration': np.array([...], [...]), 'Weight': np.array([...], [...]), 'Fecundity': np.array([...], [...]), }, # Region Vectors 'Regions': np.array([...]), 'Region_vector_names': [...], 'Region_vectors': { 'Exploitationfraction': np.array([...]), 'Larvaldispersal': np.array([...]), }, # other arguments are ignored ... }
Note
- Creates a modified Population Parameters CSV file located in the ‘workspace/output/’ folder
- Currently appends ‘_modified’ to original filename for new filename
Module contents¶
Hydropower Package¶
Model Entry Point¶
-
natcap.invest.hydropower.hydropower_water_yield.execute(args)¶
Annual Water Yield: Reservoir Hydropower Production.
Executes the hydropower/water_yield model
Parameters: - args['workspace_dir'] (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- args['lulc_uri'] (string) – a uri to a land use/land cover raster whose LULC indexes correspond to indexes in the biophysical table input. Used for determining soil retention and other biophysical properties of the landscape. (required)
- args['depth_to_root_rest_layer_uri'] (string) – a uri to an input raster describing the depth of “good” soil before reaching this restrictive layer (required)
- args['precipitation_uri'] (string) – a uri to an input raster describing the average annual precipitation value for each cell (mm) (required)
- args['pawc_uri'] (string) – a uri to an input raster describing the plant available water content value for each cell. Plant Available Water Content fraction (PAWC) is the fraction of water that can be stored in the soil profile that is available for plants’ use. PAWC is a fraction from 0 to 1 (required)
- args['eto_uri'] (string) – a uri to an input raster describing the annual average evapotranspiration value for each cell. Potential evapotranspiration is the potential loss of water from soil by both evaporation from the soil and transpiration by healthy Alfalfa (or grass) if sufficient water is available (mm) (required)
- args['watersheds_uri'] (string) – a uri to an input shapefile of the watersheds of interest as polygons. (required)
- args['sub_watersheds_uri'] (string) – a uri to an input shapefile of the subwatersheds of interest that are contained in the args['watersheds_uri'] shape provided as input. (optional)
- args['biophysical_table_uri'] (string) – a uri to an input CSV table of land use/land cover classes, containing data on biophysical coefficients such as root_depth (mm) and Kc, which are required. A column with header LULC_veg is also required, with values of 1 or 0: 1 indicates a vegetated land cover type, and 0 indicates non-vegetation, wetland, or water. NOTE: these data are attributes of each LULC class rather than attributes of individual cells in the raster map (required)
- args['seasonality_constant'] (float) – floating point value between 1 and 10 corresponding to the seasonal distribution of precipitation (required)
- args['results_suffix'] (string) – a string that will be concatenated onto the end of file names (optional)
- args['demand_table_uri'] (string) – a uri to an input CSV table of LULC classes, showing consumptive water use for each landuse / land-cover type (cubic meters per year) (required for water scarcity)
- args['valuation_table_uri'] (string) – a uri to an input CSV table of hydropower stations with the following fields (required for valuation): (‘ws_id’, ‘time_span’, ‘discount’, ‘efficiency’, ‘fraction’, ‘cost’, ‘height’, ‘kw_price’)
Returns: None
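A hypothetical args dictionary for the entry point above may help orient a scripted run. All paths here are placeholders, and the optional scarcity and valuation inputs are omitted:

```python
# Hypothetical arguments for the water yield entry point; paths are
# placeholders, and optional inputs (sub-watersheds, suffix, demand
# and valuation tables) are left out.
args = {
    'workspace_dir': 'path/to/workspace_dir/',
    'lulc_uri': 'path/to/lulc.tif',
    'depth_to_root_rest_layer_uri': 'path/to/depth.tif',
    'precipitation_uri': 'path/to/precip.tif',
    'pawc_uri': 'path/to/pawc.tif',
    'eto_uri': 'path/to/eto.tif',
    'watersheds_uri': 'path/to/watersheds.shp',
    'biophysical_table_uri': 'path/to/biophysical.csv',
    'seasonality_constant': 5.0,
}

# natcap.invest.hydropower.hydropower_water_yield.execute(args)
```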
Hydropower Water Yield¶
Module that contains the core computational components for the hydropower model including the water yield, water scarcity, and valuation functions
-
natcap.invest.hydropower.hydropower_water_yield.add_dict_to_shape(shape_uri, field_dict, field_name, key)¶
Add a new field to a shapefile with values from a dictionary. The dictionary’s keys should match the values of a unique field in the shapefile.
- shape_uri – a URI path to an OGR datasource on disk with a unique field ‘key’. The field ‘key’ should have values that correspond to the keys of ‘field_dict’.
- field_dict – a python dictionary with keys mapping to values. These values will be what is filled in for the new field.
- field_name – a string for the name of the new field to add.
- key – a string for the field name in ‘shape_uri’ that represents the unique features.
Returns nothing.
-
natcap.invest.hydropower.hydropower_water_yield.compute_rsupply_volume(watershed_results_uri)¶
Calculate the total realized water supply volume and the mean realized water supply volume per hectare for the given sheds. Output units are cubic meters and cubic meters per hectare, respectively.
- watershed_results_uri – a URI path to an OGR shapefile to get water yield values from.
Returns nothing.
-
natcap.invest.hydropower.hydropower_water_yield.compute_water_yield_volume(shape_uri, pixel_area)¶
Calculate the water yield volume per sub-watershed or watershed and add the results to shape_uri. Units are cubic meters.
- shape_uri – a URI path to an OGR datasource for the sub-watershed or watershed shapefile. This shapefile’s features should have a ‘wyield_mn’ attribute, from which the calculations are derived.
- pixel_area – the area in square meters of a pixel from the wyield raster.
Returns nothing.
-
natcap.invest.hydropower.hydropower_water_yield.compute_watershed_valuation(watersheds_uri, val_dict)¶
Computes the net present value and energy for the watersheds and adds them to an output shapefile.
- watersheds_uri – a URI path to an OGR shapefile of the watershed results, to which the results will be added.
- val_dict – a python dictionary that has all the valuation parameters for each watershed.
Returns nothing.
-
natcap.invest.hydropower.hydropower_water_yield.execute(args)
Annual Water Yield: Reservoir Hydropower Production.
Executes the hydropower/water_yield model
Parameters: - args['workspace_dir'] (string) – a uri to the directory that will write output and other temporary files during calculation. (required)
- args['lulc_uri'] (string) – a uri to a land use/land cover raster whose LULC indexes correspond to indexes in the biophysical table input. Used for determining soil retention and other biophysical properties of the landscape. (required)
- args['depth_to_root_rest_layer_uri'] (string) – a uri to an input raster describing the depth of “good” soil before reaching this restrictive layer (required)
- args['precipitation_uri'] (string) – a uri to an input raster describing the average annual precipitation value for each cell (mm) (required)
- args['pawc_uri'] (string) – a uri to an input raster describing the plant available water content value for each cell. Plant Available Water Content fraction (PAWC) is the fraction of water that can be stored in the soil profile that is available for plants’ use. PAWC is a fraction from 0 to 1 (required)
- args['eto_uri'] (string) – a uri to an input raster describing the annual average evapotranspiration value for each cell. Potential evapotranspiration is the potential loss of water from soil by both evaporation from the soil and transpiration by healthy Alfalfa (or grass) if sufficient water is available (mm) (required)
- args['watersheds_uri'] (string) – a uri to an input shapefile of the watersheds of interest as polygons. (required)
- args['sub_watersheds_uri'] (string) – a uri to an input shapefile of the subwatersheds of interest that are contained in the args['watersheds_uri'] shape provided as input. (optional)
- args['biophysical_table_uri'] (string) – a uri to an input CSV table of land use/land cover classes, containing data on biophysical coefficients such as root_depth (mm) and Kc, which are required. A column with header LULC_veg is also required, with values of 1 or 0: 1 indicates a vegetated land cover type, and 0 indicates non-vegetation, wetland, or water. NOTE: these data are attributes of each LULC class rather than attributes of individual cells in the raster map (required)
- args['seasonality_constant'] (float) – floating point value between 1 and 10 corresponding to the seasonal distribution of precipitation (required)
- args['results_suffix'] (string) – a string that will be concatenated onto the end of file names (optional)
- args['demand_table_uri'] (string) – a uri to an input CSV table of LULC classes, showing consumptive water use for each landuse / land-cover type (cubic meters per year) (required for water scarcity)
- args['valuation_table_uri'] (string) – a uri to an input CSV table of hydropower stations with the following fields (required for valuation): (‘ws_id’, ‘time_span’, ‘discount’, ‘efficiency’, ‘fraction’, ‘cost’, ‘height’, ‘kw_price’)
Returns: None
-
natcap.invest.hydropower.hydropower_water_yield.filter_dictionary(dict_data, values)¶
Create a subset of a dictionary given keys found in a list.
The incoming dictionary should have keys that point to dictionaries. Create a subset of that dictionary by using the same outer keys, but keeping an inner key/value pair only if that inner key is found in the values list.
Parameters:
- dict_data (dictionary) – a dictionary whose keys point to dictionaries.
- values (list) – a list of keys to keep from the inner dictionaries of ‘dict_data’.
Returns: a dictionary
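The behavior described above can be sketched as a minimal re-implementation (not the library's own code): keep the same outer keys, but retain only inner key/value pairs whose key appears in the list:

```python
# Minimal sketch of the described behavior, not the InVEST source.
def filter_dictionary(dict_data, values):
    keep = set(values)
    return {
        outer_key: {k: v for k, v in inner.items() if k in keep}
        for outer_key, inner in dict_data.items()
    }

data = {1: {'precip': 43, 'total': 65}, 2: {'precip': 65, 'total': 94}}
print(filter_dictionary(data, ['precip']))
# {1: {'precip': 43}, 2: {'precip': 65}}
```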
-
natcap.invest.hydropower.hydropower_water_yield.write_new_table(filename, fields, data)¶
Create a new CSV table from a dictionary.
- filename – a URI path for the new table to be written to disk.
- fields – a python list of the column names. The order of the fields in the list will be the order in which they are written. ex: [‘id’, ‘precip’, ‘total’]
- data – a python dictionary representing the table. The dictionary should be constructed with unique numerical keys that point to a dictionary representing a row in the table: data = {0: {‘id’: 1, ‘precip’: 43, ‘total’: 65}, 1: {‘id’: 2, ‘precip’: 65, ‘total’: 94}}
Returns nothing.
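Writing the row-dictionary structure described above is the kind of job csv.DictWriter handles directly; a sketch (with an in-memory buffer standing in for a file on disk) might look like:

```python
import csv
import io

fields = ['id', 'precip', 'total']
data = {
    0: {'id': 1, 'precip': 43, 'total': 65},
    1: {'id': 2, 'precip': 65, 'total': 94},
}

# An in-memory buffer stands in for open(filename, 'w', newline='').
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
for row_key in sorted(data):       # numerical keys give the row order
    writer.writerow(data[row_key])

print(buf.getvalue())
```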
Module contents¶
Nutrient Delivery Ratio Package¶
Model Entry Point¶
-
natcap.invest.ndr.ndr.execute(args)¶
Nutrient Delivery Ratio.
Parameters: - args['workspace_dir'] (string) – path to current workspace
- args['dem_uri'] (string) – path to digital elevation map raster
- args['lulc_uri'] (string) – a path to landcover map raster
- args['runoff_proxy_uri'] (string) – a path to a runoff proxy raster
- args['watersheds_uri'] (string) – path to the watershed shapefile
- args['biophysical_table_uri'] (string) –
path to csv table on disk containing nutrient retention values.
For each nutrient type [t] in args[‘calc_[t]’] that is true, must contain the following headers:
‘load_[t]’, ‘eff_[t]’, ‘crit_len_[t]’
If args[‘calc_n’] is True, must also contain the header ‘proportion_subsurface_n’ field.
- args['calc_p'] (boolean) – if True, phosphorus is modeled; the biophysical table must then contain the p fields
- args['calc_n'] (boolean) – if True, nitrogen is modeled; the biophysical table must then contain the n fields.
- args['results_suffix'] (string) – (optional) a text field to append to all output files
- args['threshold_flow_accumulation'] – a number representing the flow accumulation in terms of upstream pixels.
- args['_prepare'] – (optional) The preprocessed set of data created by the ndr._prepare call. This argument could be used in cases where the call to this function is scripted and can save a significant amount of DEM processing runtime.
Returns: None
Nutrient Delivery Ratio¶
Module for the execution of the biophysical component of the InVEST Nutrient Deposition model.
-
natcap.invest.ndr.ndr.add_fields_to_shapefile(key_field, field_summaries, output_layer, field_header_order=None)¶
Adds fields and their values, indexed by key fields, to an OGR layer open for writing.
- key_field – name of the key field in the output_layer that uniquely identifies each polygon.
- field_summaries – a dictionary indexed by the desired field name to place in the polygon, which indexes to another dictionary indexed by key_field value to map to that particular polygon. ex: {‘field_name_1’: {key_val1: value, key_val2: value}, ‘field_name_2’: {key_val1: value, ...}}
- output_layer – an open, writable OGR layer.
- field_header_order – a list of field headers in the order we wish them to appear in the output table; if None, an arbitrary key order from field_summaries is used.
Returns nothing.
-
natcap.invest.ndr.ndr.execute(args)
Nutrient Delivery Ratio.
Parameters: - args['workspace_dir'] (string) – path to current workspace
- args['dem_uri'] (string) – path to digital elevation map raster
- args['lulc_uri'] (string) – a path to landcover map raster
- args['runoff_proxy_uri'] (string) – a path to a runoff proxy raster
- args['watersheds_uri'] (string) – path to the watershed shapefile
- args['biophysical_table_uri'] (string) –
path to csv table on disk containing nutrient retention values.
For each nutrient type [t] in args[‘calc_[t]’] that is true, must contain the following headers:
‘load_[t]’, ‘eff_[t]’, ‘crit_len_[t]’
If args[‘calc_n’] is True, must also contain the header ‘proportion_subsurface_n’ field.
- args['calc_p'] (boolean) – if True, phosphorus is modeled; the biophysical table must then contain the p fields
- args['calc_n'] (boolean) – if True, nitrogen is modeled; the biophysical table must then contain the n fields.
- args['results_suffix'] (string) – (optional) a text field to append to all output files
- args['threshold_flow_accumulation'] – a number representing the flow accumulation in terms of upstream pixels.
- args['_prepare'] – (optional) The preprocessed set of data created by the ndr._prepare call. This argument could be used in cases where the call to this function is scripted and can save a significant amount of DEM processing runtime.
Returns: None
Module contents¶
Overlap Analysis Package¶
Model Entry Point¶
-
natcap.invest.overlap_analysis.overlap_analysis.execute(args)¶
Overlap Analysis.
This function will take care of preparing files passed into the overlap analysis model. It will handle all files/inputs associated with calculations and manipulations. It may write log, warning, or error messages to stdout.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_uri'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['grid_size'] (int) – This is an int specifying how large the gridded squares over the shapefile should be. (required)
- args['overlap_data_dir_uri'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
- args['do_inter'] (bool) – Boolean that indicates whether or not inter-activity weighting is desired. This will decide if the overlap table will be created. (required)
- args['do_intra'] (bool) – Boolean which indicates whether or not intra-activity weighting is desired. This will pull attributes from the shapefiles passed in via ‘zone_layer_uri’. (required)
- args['do_hubs'] (bool) – Boolean which indicates if human use hubs are desired. (required)
- args['overlap_layer_tbl'] (string) – URI to a CSV file that holds relational data and identifier data for all layers being passed in within the overlap analysis directory. (optional)
- args['intra_name'] (string) – string which corresponds to a field within the layers being passed in within overlap analysis directory. This is the intra-activity importance for each activity. (optional)
- args['hubs_uri'] (string) – The location of the shapefile containing points for human use hub calculations. (optional)
- args['decay_amt'] (float) – A double representing the decay rate of value from the human use hubs. (optional)
Returns: None
Overlap Analysis¶
InVEST overlap analysis file handler for data passed in through the UI
-
natcap.invest.overlap_analysis.overlap_analysis.create_hubs_raster(hubs_shape_uri, decay, aoi_raster_uri, hubs_out_uri)¶
Creates a rasterized version of the hubs shapefile, where each pixel on the raster is set according to the decay function from the point values themselves. The shapefile is rasterized so that all land is 0, and nodata is the distance from the closest point.
Input:
- hubs_shape_uri – an open point shapefile containing the hub locations as points.
- decay – a double representing the rate at which the hub importance depreciates relative to the distance from the location.
- aoi_raster_uri – the URI to the area of interest raster on which we want to base our new hubs raster.
- hubs_out_uri – the URI location at which the new hubs raster should be placed.
Output:
- A raster within hubs_out_uri whose data is a function of the decay around the points provided by the hubs shapefile.
Returns nothing.
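The idea that hub importance "depreciates relative to distance" can be illustrated with a common decay form. This is an assumption for illustration, not the model's exact kernel: an exponential decay of the form value = exp(-decay * distance):

```python
import numpy as np

# Illustrative only: one common way to express "importance depreciates
# with distance from a hub". The InVEST source defines the actual kernel.
def hub_influence(distance, decay):
    return np.exp(-decay * np.asarray(distance, dtype=float))

distances = np.array([0.0, 1.0, 2.0])
print(hub_influence(distances, decay=1.0))  # → [1.0, ~0.368, ~0.135]
```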
-
natcap.invest.overlap_analysis.overlap_analysis.create_unweighted_raster(output_dir, aoi_raster_uri, raster_files_uri)¶
Creates the set of unweighted rasters: both the AOI and individual rasterizations of the activity layers. These are all combined to output a final raster displaying unweighted activity frequency within the area of interest.
Input:
- output_dir – the directory in which the final frequency raster will be placed. That file will be named ‘hu_freq.tif’.
- aoi_raster_uri – the URI to the rasterized version of the AOI file passed in with args[‘zone_layer_file’]. This is used within the combination function to determine where to place nodata values.
- raster_files_uri – the URIs to the rasterized versions of the files passed in through args[‘over_layer_dict’]. Each raster file shows the presence or absence of the activity that it represents.
Output:
- A raster file named [‘workspace_dir’]/output/hu_freq.tif. This depicts the unweighted frequency of activity within a gridded area or management zone.
Returns nothing.
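The unweighted combination step reduces to summing binary presence/absence grids and masking outside the AOI. A toy NumPy sketch (2x2 grids and a hypothetical nodata value of -1, chosen only for illustration):

```python
import numpy as np

# Sketch of the unweighted frequency combination: sum the binary
# activity rasters, then write nodata wherever the AOI mask is 0.
NODATA = -1  # hypothetical nodata value for this sketch

aoi = np.array([[1, 1], [1, 0]])      # 0 marks "outside the AOI"
fishing = np.array([[1, 0], [1, 0]])  # presence/absence of activity 1
boating = np.array([[1, 1], [0, 0]])  # presence/absence of activity 2

freq = fishing + boating
freq = np.where(aoi == 1, freq, NODATA)
print(freq)
# [[ 2  1]
#  [ 1 -1]]
```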
-
natcap.invest.overlap_analysis.overlap_analysis.create_weighted_raster(out_dir, intermediate_dir, aoi_raster_uri, inter_weights_dict, layers_dict, intra_name, do_inter, do_intra, do_hubs, hubs_raster_uri, raster_uris, raster_names)¶
Creates an output raster that takes into account both inter-activity weighting and intra-activity weighting. This produces a map that looks both at where activities are occurring and at how much people value those activities and areas.
Input:
- out_dir – the directory into which the completed raster file should be placed.
- intermediate_dir – the directory in which the weighted raster files can be stored.
- inter_weights_dict – the dictionary that holds the mappings from layer names to the inter-activity weights passed in by CSV. The dictionary key is the string name of each shapefile, minus the .shp extension. This ID maps to a double representing the inter-activity weight of each activity layer.
- layers_dict – this dictionary contains all the activity layers that are included in the particular model run. It maps the name of the shapefile (excluding the .shp extension) to the open datasource itself.
- intra_name – a string which represents the desired field name in our shapefiles. This field should contain the intra-activity weight for that particular shape.
- do_inter – a boolean that indicates whether inter-activity weighting is desired.
- do_intra – a boolean that indicates whether intra-activity weighting is desired.
- aoi_raster_uri – the URI to the dataset for our area of interest. This will be the base map for all following datasets.
- raster_uris – a list of URIs to the open unweighted raster files created by make_indiv_rasters that begins with the AOI raster. This will be used when intra-activity weighting is not desired.
- raster_names – a list of file names that goes along with the unweighted raster files. These strings can be used as keys to the other ID-based dictionaries, and will be in the same order as the ‘raster_files’ list.
Output:
- weighted_raster – a raster file output that takes into account both inter-activity weights and intra-activity weights.
Returns nothing.
-
natcap.invest.overlap_analysis.overlap_analysis.execute(args)
Overlap Analysis.
This function will take care of preparing files passed into the overlap analysis model. It will handle all files/inputs associated with calculations and manipulations. It may write log, warning, or error messages to stdout.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_uri'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['grid_size'] (int) – This is an int specifying how large the gridded squares over the shapefile should be. (required)
- args['overlap_data_dir_uri'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
- args['do_inter'] (bool) – Boolean that indicates whether or not inter-activity weighting is desired. This will decide if the overlap table will be created. (required)
- args['do_intra'] (bool) – Boolean which indicates whether or not intra-activity weighting is desired. This will pull attributes from the shapefiles passed in via ‘zone_layer_uri’. (required)
- args['do_hubs'] (bool) – Boolean which indicates if human use hubs are desired. (required)
- args['overlap_layer_tbl'] (string) – URI to a CSV file that holds relational data and identifier data for all layers being passed in within the overlap analysis directory. (optional)
- args['intra_name'] (string) – string which corresponds to a field within the layers being passed in within overlap analysis directory. This is the intra-activity importance for each activity. (optional)
- args['hubs_uri'] (string) – The location of the shapefile containing points for human use hub calculations. (optional)
- args['decay_amt'] (float) – A double representing the decay rate of value from the human use hubs. (optional)
Returns: None
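An illustrative args dictionary for this entry point, built from the parameters listed above, might look like the following sketch; every path is a hypothetical placeholder, not sample data shipped with InVEST:

```python
# Illustrative args for natcap.invest.overlap_analysis.overlap_analysis.execute;
# all paths below are placeholders.
args = {
    'workspace_dir': 'path/to/workspace',        # required
    'zone_layer_uri': 'path/to/zones.shp',       # required
    'grid_size': 1000,                           # required
    'overlap_data_dir_uri': 'path/to/layers',    # required
    'do_inter': True,                            # required
    'do_intra': False,                           # required
    'do_hubs': False,                            # required
    'overlap_layer_tbl': 'path/to/weights.csv',  # optional; used when do_inter
}
# natcap.invest.overlap_analysis.overlap_analysis.execute(args)
```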
-
natcap.invest.overlap_analysis.overlap_analysis.
format_over_table
(over_tbl)¶ This CSV file contains a string which can be used to uniquely identify a .shp file to which the values in that string’s row will correspond. This string, therefore, should be used as the key for the overlap_analysis dictionary, so that we can get all corresponding values for a shapefile at once by knowing its name.
- Input:
- over_tbl- A CSV that contains a list of each interest shapefile,
- and the inter activity weights corresponding to those layers.
- Returns:
- over_dict- The analysis layer dictionary that maps the unique name
- of each layer to the optional parameter of inter-activity weight. For each entry, the key will be the string name of the layer that it represents, and the value will be the inter-activity weight for that layer.
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_rasters
(out_dir, overlap_shape_uris, aoi_raster_uri)¶ This will pluck each of the files out of the dictionary and create a new raster file out of them. The new file will be named the same as the original shapefile, but with a .tif extension, and will be placed in the intermediate directory that is being passed in as a parameter.
- Input:
- out_dir- This is the directory into which our completed raster files
- should be placed when completed.
- overlap_shape_uris- This is a dictionary containing all of the open
- shapefiles which need to be rasterized. The key for this dictionary is the name of the file itself, minus the .shp extension. This key maps to the open shapefile of that name.
- aoi_raster_uri- The dataset for our AOI. This will be the base map for
- all following datasets.
Returns: - raster_files- This is a list of the datasets that we want to sum. The
- first will ALWAYS be the AOI dataset, and the rest will be the variable number of other datasets that we want to sum.
- raster_names- This is a list of layer names that corresponds to the
- files in ‘raster_files’. The first layer is guaranteed to be the AOI, but all names after that will be in the same order as the files so that it can be used for indexing later.
-
natcap.invest.overlap_analysis.overlap_analysis.
make_indiv_weight_rasters
(input_dir, aoi_raster_uri, layers_dict, intra_name)¶ This is a helper function for create_weighted_raster, which abstracts some of the work for getting the intra-activity weights per pixel to a separate function. This function will take in a list of the activities layers, and using the aoi_raster as a base for the transformation, will rasterize the shapefile layers into rasters where the burn value is based on a per-pixel intra-activity weight (specified in each polygon on the layer). This function will return a tuple of two lists: the first is a list of the rasterized shapefiles, starting with the aoi. The second is a list of the shapefile names (minus the extension) in the same order as they were added to the first list. This will be used to reference the dictionaries containing the rest of the weighting information for the final weighted raster calculation.
- Input:
- input_dir: The directory into which the weighted rasters should be
- placed.
- aoi_raster_uri: The uri to the rasterized version of the area of
- interest. This will be used as a basis for all following rasterizations.
- layers_dict: A dictionary of all shapefiles to be rasterized. The key
- is the name of the original file, minus the file extension. The value is an open shapefile datasource.
- intra_name: The string corresponding to the value we wish to pull out
- of the shapefile layer. This is an attribute of all polygons corresponding to the intra-activity weight of a given shape.
Returns: - weighted_raster_files- A list of raster versions of the original
- activity shapefiles. The first file will ALWAYS be the AOI, followed by the rasterized layers.
- weighted_names- A list of the filenames, minus extensions, of the
- rasterized files in weighted_raster_files. These can be used to reference properties of the raster files that are located in other dictionaries.
Overlap Analysis Core¶
Core module for both overlap analysis and management zones. This function can be used by either of the secondary modules within the OA model.
-
natcap.invest.overlap_analysis.overlap_core.
get_files_dict
(folder)¶ Returns a dictionary of all .shp files in the folder.
- Input:
- folder- The location of all layer files. Among these, there should
- be files with the extension .shp. These will be used for all activity calculations.
Returns: file_dict- A dictionary which maps the name (minus file extension) of a shapefile to the open datasource itself. The key in this dictionary is the name of the file (not including file path or extension), and the value is the open shapefile.
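The filename-to-key mapping described above can be sketched as follows. The real helper stores an open OGR datasource as each value; this sketch stores the path instead so it stays runnable without GDAL/OGR installed:

```python
import glob
import os

# Sketch only: map each shapefile's base name (no path, no .shp extension)
# to the file. The real get_files_dict opens each file with OGR; here the
# value is just the path.
def get_files_dict(folder):
    file_dict = {}
    for shp_path in glob.glob(os.path.join(folder, '*.shp')):
        name = os.path.splitext(os.path.basename(shp_path))[0]
        file_dict[name] = shp_path
    return file_dict
```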
-
natcap.invest.overlap_analysis.overlap_core.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, will include the entire path. This will use os as a base, then just lambda transform the whole list.
- Input:
- path- The location container from which we want to gather all files.
Returns: A list of full URIs contained within ‘path’.
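The helper described above amounts to prefixing each os.listdir entry with the directory path, as in this minimal sketch:

```python
import os

# Sketch of the path-prefixing listdir replacement described above.
def listdir(path):
    return [os.path.join(path, entry) for entry in os.listdir(path)]
```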
Overlap Analysis Management Zone¶
This is the preparatory class for the management zone portion of overlap analysis.
-
natcap.invest.overlap_analysis.overlap_analysis_mz.
execute
(args)¶ Overlap Analysis: Management Zones.
Parameters: - args – A python dictionary created by the UI and passed to this method. It will contain the following data.
- args['workspace_dir'] (string) – The directory in which to place all resulting files, will come in as a string. (required)
- args['zone_layer_loc'] (string) – A URI pointing to a shapefile with the analysis zones on it. (required)
- args['overlap_data_dir_loc'] (string) – URI pointing to a directory where multiple shapefiles are located. Each shapefile represents an activity of interest for the model. (required)
Returns: None
Overlap Analysis Management Zone Core¶
This is the core module for the management zone analysis portion of the Overlap Analysis model.
-
natcap.invest.overlap_analysis.overlap_analysis_mz_core.
execute
(args)¶ This is the core module for the management zone model, which was extracted from the overlap analysis model. This particular one will take in a shapefile containing a series of AOI’s, and a folder containing activity layers, and will return a modified shapefile of AOI’s, each of which will have an attribute stating how many activities take place within that polygon.
- Input:
- args[‘workspace_dir’]- The folder location into which we can write an
- Output or Intermediate folder as necessary, and where the final shapefile will be placed.
- args[‘zone_layer_file’]- An open shapefile which contains our
- management zone polygons. It should be noted that this should not be edited directly but instead, should have a copy made in order to add the attribute field.
- args[‘over_layer_dict’] - A dictionary which maps the name of the
- shapefile (excluding the .shp extension) to the open datasource itself. These files are each an activity layer that will be counted within the totals per management zone.
- Output:
- A file named [workspace_dir]/Output/mz_frequency.shp which is a copy of args[‘zone_layer_file’] with the added attribute “ACTIV_CNT” that will total the number of activities taking place in each polygon.
Returns nothing.
Module contents¶
Recreation Package¶
Model Entry Point¶
-
natcap.invest.recreation.recmodel_client.
execute
(args)¶ Recreation.
Execute recreation client model on remote server.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['aoi_path'] (string) – path to AOI vector
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['start_year'] (string) – start year in form YYYY. This year is the inclusive lower bound to consider points in the PUD and regression.
- args['end_year'] (string) – end year in form YYYY. This year is the inclusive upper bound to consider points in the PUD and regression.
- args['grid_aoi'] (boolean) – if true the polygon vector in args[‘aoi_path’] should be gridded into a new vector and the recreation model should be executed on that
- args['grid_type'] (string) – optional, but must exist if args[‘grid_aoi’] is True. Is one of ‘hexagon’ or ‘square’ and indicates the style of gridding.
- args['cell_size'] (string/float) – optional, but must exist if args[‘grid_aoi’] is True. Indicates the cell size of square pixels and the width of the horizontal axis for the hexagonal cells.
- args['compute_regression'] (boolean) – if True, then process the predictor table and scenario table (if present).
- args['predictor_table_path'] (string) –
required if args[‘compute_regression’] is True. Path to a table that describes the regression predictors, their IDs and types. Must contain the fields ‘id’, ‘path’, and ‘type’ where:
- ‘id’: is a <=10 character length ID that is used to uniquely describe the predictor. It will be added to the output result shapefile attribute table which is an ESRI Shapefile, thus limited to 10 characters.
- ‘path’: an absolute or relative (to this table) path to the predictor dataset, either a vector or raster type.
- ‘type’: one of the following,
- ‘raster_mean’: mean of values in the raster under the response polygon
- ‘raster_sum’: sum of values in the raster under the response polygon
- ‘point_count’: count of the points contained in the response polygon
- ‘point_nearest_distance’: distance to the nearest point from the response polygon
- ‘line_intersect_length’: length of lines that intersect with the response polygon in projected units of AOI
- ‘polygon_area’: area of the polygon contained within response polygon in projected units of AOI
- args['scenario_predictor_table_path'] (string) – (optional) if present runs the scenario mode of the recreation model with the datasets described in the table on this path. Field headers are identical to args[‘predictor_table_path’] and ids in the table are required to be identical to the predictor list.
- args['results_suffix'] (string) – optional, if exists is appended to any output file paths.
Returns: None
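An illustrative args dictionary for this entry point might look like the sketch below; the paths, hostname, and port are all hypothetical placeholders, not real server details:

```python
# Illustrative args for natcap.invest.recreation.recmodel_client.execute.
# Paths, hostname, and port below are placeholders.
args = {
    'workspace_dir': 'path/to/workspace',
    'aoi_path': 'path/to/aoi.shp',
    'hostname': 'recreation.example.org',  # hypothetical FQDN
    'port': 54322,                         # hypothetical port
    'start_year': '2005',
    'end_year': '2014',
    'grid_aoi': True,
    'grid_type': 'hexagon',   # required because grid_aoi is True
    'cell_size': 5000.0,      # required because grid_aoi is True
    'compute_regression': False,
}
# natcap.invest.recreation.recmodel_client.execute(args)
```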
Recreation Server¶
InVEST Recreation Server.
-
class
natcap.invest.recreation.recmodel_server.
RecModel
(*args, **kwargs)¶ Bases:
object
Class that manages RPCs for calculating photo user days.
-
calc_photo_user_days_in_aoi
(*args, **kwargs)¶ General purpose try/except wrapper.
-
fetch_workspace_aoi
(*args, **kwargs)¶ General purpose try/except wrapper.
-
get_valid_year_range
()¶ Return the min and max year queryable.
Returns: (min_year, max_year)
-
get_version
()¶ Return the rec model server version.
This string can be used to uniquely identify the PUD database and algorithm for publication in terms of reproducibility.
-
-
natcap.invest.recreation.recmodel_server.
build_quadtree_shape
(quad_tree_shapefile_path, quadtree, spatial_reference)¶ Generate a vector of the quadtree geometry.
Parameters: - quad_tree_shapefile_path (string) – path to save the vector
- quadtree (out_of_core_quadtree.OutOfCoreQuadTree) – quadtree data structure
- spatial_reference (osr.SpatialReference) – spatial reference for the output vector
Returns: None
-
natcap.invest.recreation.recmodel_server.
construct_userday_quadtree
(initial_bounding_box, raw_photo_csv_table, cache_dir, max_points_per_node)¶ Construct a spatial quadtree for fast querying of userday points.
Parameters: - initial_bounding_box (list of int) –
- () (raw_photo_csv_table) –
- cache_dir (string) – path to a directory that can be used to cache the quadtree files on disk
- max_points_per_node (int) – maximum number of points to allow per node of the quadtree. A larger amount will cause the quadtree to subdivide.
Returns: None
-
natcap.invest.recreation.recmodel_server.
execute
(args)¶ Launch recreation server and parse/generate quadtree if necessary.
A call to this function registers a Pyro RPC RecModel entry point given the configuration input parameters described below.
There are many methods to launch a server, including at a Linux command line as shown:
nohup python -u -c "import natcap.invest.recreation.recmodel_server;
args={'hostname': '$LOCALIP', 'port': $REC_SERVER_PORT,
'raw_csv_point_data_path': '$POINT_DATA_PATH', 'max_year': $MAX_YEAR,
'min_year': $MIN_YEAR, 'cache_workspace': '$CACHE_WORKSPACE_PATH'};
natcap.invest.recreation.recmodel_server.execute(args)"
Parameters: - args['raw_csv_point_data_path'] (string) – path to a csv file of the format
- args['hostname'] (string) – hostname to host Pyro server.
- args['port'] (int/or string representation of int) – port number to host Pyro entry point.
- args['max_year'] (int) – maximum year allowed to be queried by user
- args['min_year'] (int) – minimum valid year allowed to be queried by user
Returns: Never returns
Recreation Client¶
InVEST Recreation Client.
-
natcap.invest.recreation.recmodel_client.
delay_op
(last_time, time_delay, func)¶ Execute func if current time >= last_time + time_delay.
Parameters: - last_time (float) – last time in seconds that func was triggered
- time_delay (float) – time to wait in seconds since last_time before triggering func
- func (function) – parameterless function to invoke if current_time >= last_time + time_delay
Returns: If func was triggered, return the time which it was triggered in seconds, otherwise return last_time.
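The semantics above can be sketched in a few lines; this is an illustrative reimplementation of the throttling helper, not the package's actual code:

```python
import time

# Sketch of delay_op: invoke func only when at least time_delay seconds
# have elapsed since last_time. Return the trigger time if func ran,
# otherwise return last_time unchanged.
def delay_op(last_time, time_delay, func):
    current_time = time.time()
    if current_time >= last_time + time_delay:
        func()
        return current_time
    return last_time
```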
-
natcap.invest.recreation.recmodel_client.
execute
(args) Recreation.
Execute recreation client model on remote server.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['aoi_path'] (string) – path to AOI vector
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['start_year'] (string) – start year in form YYYY. This year is the inclusive lower bound to consider points in the PUD and regression.
- args['end_year'] (string) – end year in form YYYY. This year is the inclusive upper bound to consider points in the PUD and regression.
- args['grid_aoi'] (boolean) – if true the polygon vector in args[‘aoi_path’] should be gridded into a new vector and the recreation model should be executed on that
- args['grid_type'] (string) – optional, but must exist if args[‘grid_aoi’] is True. Is one of ‘hexagon’ or ‘square’ and indicates the style of gridding.
- args['cell_size'] (string/float) – optional, but must exist if args[‘grid_aoi’] is True. Indicates the cell size of square pixels and the width of the horizontal axis for the hexagonal cells.
- args['compute_regression'] (boolean) – if True, then process the predictor table and scenario table (if present).
- args['predictor_table_path'] (string) –
required if args[‘compute_regression’] is True. Path to a table that describes the regression predictors, their IDs and types. Must contain the fields ‘id’, ‘path’, and ‘type’ where:
- ‘id’: is a <=10 character length ID that is used to uniquely describe the predictor. It will be added to the output result shapefile attribute table which is an ESRI Shapefile, thus limited to 10 characters.
- ‘path’: an absolute or relative (to this table) path to the predictor dataset, either a vector or raster type.
- ‘type’: one of the following,
- ‘raster_mean’: mean of values in the raster under the response polygon
- ‘raster_sum’: sum of values in the raster under the response polygon
- ‘point_count’: count of the points contained in the response polygon
- ‘point_nearest_distance’: distance to the nearest point from the response polygon
- ‘line_intersect_length’: length of lines that intersect with the response polygon in projected units of AOI
- ‘polygon_area’: area of the polygon contained within response polygon in projected units of AOI
- args['scenario_predictor_table_path'] (string) – (optional) if present runs the scenario mode of the recreation model with the datasets described in the table on this path. Field headers are identical to args[‘predictor_table_path’] and ids in the table are required to be identical to the predictor list.
- args['results_suffix'] (string) – optional, if exists is appended to any output file paths.
Returns: None
Recreation Workspace Fetcher¶
InVEST recreation workspace fetcher.
-
natcap.invest.recreation.recmodel_workspace_fetcher.
execute
(args)¶ Fetch workspace from remote server.
After the call a .zip file exists at args[‘workspace_dir’] named args[‘workspace_id’] + ‘.zip’ and contains the zipped workspace of that model run.
Parameters: - args['workspace_dir'] (string) – path to workspace directory
- args['hostname'] (string) – FQDN to recreation server
- args['port'] (string or int) – port on hostname for recreation server
- args['workspace_id'] (string) – workspace identifier
Returns: None
Scenic Quality Package¶
Model Entry Point¶
-
natcap.invest.scenic_quality.scenic_quality.
execute
(args)¶ Scenic Quality.
Warning
The Scenic Quality model is under active development and is currently unstable.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- aoi_uri (string) – An OGR-supported vector file. This AOI instructs the model where to clip the input data and the extent of analysis. Users will create a polygon feature layer that defines their area of interest. The AOI must intersect the Digital Elevation Model (DEM). (required)
- cell_size (float) – Length (in meters) of each side of the (square) cell. (optional)
- structure_uri (string) – An OGR-supported vector file. The user must specify a point feature layer that indicates locations of objects that contribute to negative scenic quality, such as aquaculture netpens or wave energy facilities. In order for the viewshed analysis to run correctly, the projection of this input must be consistent with the projection of the DEM input. (required)
- dem_uri (string) – A GDAL-supported raster file. An elevation raster layer is required to conduct viewshed analysis. Elevation data allows the model to determine areas within the AOI’s land-seascape where point features contributing to negative scenic quality are visible. (required)
- refraction (float) – The earth curvature correction option corrects for the curvature of the earth and refraction of visible light in air. Changes in air density curve the light downward causing an observer to see further and the earth to appear less curved. While the magnitude of this effect varies with atmospheric conditions, a standard rule of thumb is that refraction of visible light reduces the apparent curvature of the earth by one-seventh. By default, this model corrects for the curvature of the earth and sets the refractivity coefficient to 0.13. (required)
- pop_uri (string) – A GDAL-supported raster file. A population raster layer is required to determine population within the AOI’s land-seascape where point features contributing to negative scenic quality are visible and not visible. (optional)
- overlap_uri (string) – An OGR-supported vector file. The user has the option of providing a polygon feature layer where they would like to determine the impact of objects on visual quality. This input must be a polygon and projected in meters. The model will use this layer to determine what percent of the total area of each polygon feature can see at least one of the point features impacting scenic quality. (optional)
- valuation_function (string) – Either ‘polynomial’ or ‘logarithmic’. This field indicates the functional form f(x) the model will use to value the visual impact for each viewpoint. For distances less than 1 km (x<1), the model uses a linear form g(x) where the line passes through f(1) (i.e. g(1) == f(1)) and extends to zero with the same slope as f(1) (i.e. g’(x) == f’(1)). (optional)
- a_coefficient (float) – First coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- b_coefficient (float) – Second coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- c_coefficient (float) – Third coefficient for the polynomial’s quadratic term. (required)
- d_coefficient (float) – Fourth coefficient for the polynomial’s cubic exponent. (required)
- max_valuation_radius (float) – Radius beyond which the valuation is set to zero. The valuation function ‘f’ cannot be negative at the radius ‘r’ (f(r)>=0). (required)
Returns: None
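The piecewise valuation form described for valuation_function can be sketched as follows. This is a hedged illustration of the stated behavior, not the model's actual code; the cubic polynomial and the helper name are assumptions:

```python
# Sketch of the piecewise valuation described above: a cubic polynomial
# f(x) = a + b*x + c*x**2 + d*x**3 values distances x >= 1 km, while for
# x < 1 a linear form g(x) = f(1) + f'(1)*(x - 1) passes through f(1)
# with slope f'(1). The function name is hypothetical.
def visual_impact(x_km, a, b, c, d):
    f = lambda t: a + b * t + c * t ** 2 + d * t ** 3
    if x_km < 1.0:
        f_prime_1 = b + 2.0 * c + 3.0 * d  # derivative of f at x = 1
        return f(1.0) + f_prime_1 * (x_km - 1.0)
    return f(x_km)
```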
Scenic Quality¶
-
natcap.invest.scenic_quality.scenic_quality.
add_field_feature_set_uri
(fs_uri, field_name, field_type)¶
-
natcap.invest.scenic_quality.scenic_quality.
add_id_feature_set_uri
(fs_uri, id_name)¶
-
natcap.invest.scenic_quality.scenic_quality.
compute_viewshed
(input_array, visibility_uri, in_structure_uri, cell_size, rows, cols, nodata, GT, I_uri, J_uri, curvature_correction, refr_coeff, args)¶ Array-based function that computes the viewshed as defined in ArcGIS.
-
natcap.invest.scenic_quality.scenic_quality.
compute_viewshed_uri
(in_dem_uri, out_viewshed_uri, in_structure_uri, curvature_correction, refr_coeff, args)¶ Compute the viewshed as it is defined in ArcGIS where the inputs are:
- in_dem_uri: URI to the input surface raster
- out_viewshed_uri: URI to the output raster
- in_structure_uri: URI to a point shapefile that contains the location of the observers and the viewshed radius in (negative) meters
- curvature_correction: flag for the curvature of the earth. Either FLAT_EARTH or CURVED_EARTH. Not used yet.
- refraction: refraction index between 0 (max effect) and 1 (no effect). Default is 0.13.
-
natcap.invest.scenic_quality.scenic_quality.
execute
(args) Scenic Quality.
Warning
The Scenic Quality model is under active development and is currently unstable.
Parameters: - workspace_dir (string) – The selected folder is used as the workspace where all intermediate and output files will be written. If the selected folder does not exist, it will be created. If datasets already exist in the selected folder, they will be overwritten. (required)
- aoi_uri (string) – An OGR-supported vector file. This AOI instructs the model where to clip the input data and the extent of analysis. Users will create a polygon feature layer that defines their area of interest. The AOI must intersect the Digital Elevation Model (DEM). (required)
- cell_size (float) – Length (in meters) of each side of the (square) cell. (optional)
- structure_uri (string) – An OGR-supported vector file. The user must specify a point feature layer that indicates locations of objects that contribute to negative scenic quality, such as aquaculture netpens or wave energy facilities. In order for the viewshed analysis to run correctly, the projection of this input must be consistent with the projection of the DEM input. (required)
- dem_uri (string) – A GDAL-supported raster file. An elevation raster layer is required to conduct viewshed analysis. Elevation data allows the model to determine areas within the AOI’s land-seascape where point features contributing to negative scenic quality are visible. (required)
- refraction (float) – The earth curvature correction option corrects for the curvature of the earth and refraction of visible light in air. Changes in air density curve the light downward causing an observer to see further and the earth to appear less curved. While the magnitude of this effect varies with atmospheric conditions, a standard rule of thumb is that refraction of visible light reduces the apparent curvature of the earth by one-seventh. By default, this model corrects for the curvature of the earth and sets the refractivity coefficient to 0.13. (required)
- pop_uri (string) – A GDAL-supported raster file. A population raster layer is required to determine population within the AOI’s land-seascape where point features contributing to negative scenic quality are visible and not visible. (optional)
- overlap_uri (string) – An OGR-supported vector file. The user has the option of providing a polygon feature layer where they would like to determine the impact of objects on visual quality. This input must be a polygon and projected in meters. The model will use this layer to determine what percent of the total area of each polygon feature can see at least one of the point features impacting scenic quality. (optional)
- valuation_function (string) – Either ‘polynomial’ or ‘logarithmic’. This field indicates the functional form f(x) the model will use to value the visual impact for each viewpoint. For distances less than 1 km (x<1), the model uses a linear form g(x) where the line passes through f(1) (i.e. g(1) == f(1)) and extends to zero with the same slope as f(1) (i.e. g’(x) == f’(1)). (optional)
- a_coefficient (float) – First coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- b_coefficient (float) – Second coefficient used either by the polynomial or by the logarithmic valuation function. (required)
- c_coefficient (float) – Third coefficient for the polynomial’s quadratic term. (required)
- d_coefficient (float) – Fourth coefficient for the polynomial’s cubic exponent. (required)
- max_valuation_radius (float) – Radius beyond which the valuation is set to zero. The valuation function ‘f’ cannot be negative at the radius ‘r’ (f(r)>=0). (required)
Returns: None
-
natcap.invest.scenic_quality.scenic_quality.
get_count_feature_set_uri
(fs_uri)¶
-
natcap.invest.scenic_quality.scenic_quality.
get_data_type_uri
(ds_uri)¶
-
natcap.invest.scenic_quality.scenic_quality.
old_reproject_dataset_uri
(original_dataset_uri, *args, **kwargs)¶ - A URI wrapper for reproject dataset that opens the original_dataset_uri
- before passing it to reproject_dataset.
original_dataset_uri - a URI to a gdal Dataset on disk
All other arguments to reproject_dataset are passed in.
Returns nothing.
-
natcap.invest.scenic_quality.scenic_quality.
reclassify_quantile_dataset_uri
(dataset_uri, quantile_list, dataset_out_uri, datatype_out, nodata_out)¶
-
natcap.invest.scenic_quality.scenic_quality.
reproject_dataset_uri
(original_dataset_uri, output_wkt, output_uri, output_type=<Mock id='140294645368848'>)¶ - A function to reproject and resample a GDAL dataset given an output pixel size
- and output reference and uri.
- original_dataset - a gdal Dataset to reproject
- pixel_spacing - output dataset pixel size in projected linear units (probably meters)
- output_wkt - output projection in Well Known Text (the result of ds.GetProjection())
- output_uri - location on disk to dump the reprojected dataset
- output_type - gdal type of the output
Returns the projected dataset.
-
natcap.invest.scenic_quality.scenic_quality.
set_field_by_op_feature_set_uri
(fs_uri, value_field_name, op)¶
Scenic Quality Core¶
-
natcap.invest.scenic_quality.scenic_quality_core.
add_active_pixel
(sweep_line, index, distance, visibility)¶ Add a pixel to the sweep line in O(n) using a linked_list of linked_cells.
-
natcap.invest.scenic_quality.scenic_quality_core.
add_active_pixel_fast
(sweep_line, skip_nodes, distance)¶ Insert an active pixel in the sweep_line and update the skip_nodes.
- -sweep_line: a linked list of linked_cell as created by the
- linked_cell_factory.
- -skip_nodes: an array of linked lists that constitutes the hierarchy
- of skip pointers in the skip list. Each cell is defined as ???
-distance: the value to be added to the sweep_line
Return a tuple (sweep_line, skip_nodes) with the updated sweep_line and skip_nodes
-
natcap.invest.scenic_quality.scenic_quality_core.
cell_angles
(cell_coords, viewpoint)¶ Compute angles between cells and viewpoint where 0 angle is right of viewpoint.
- Inputs:
- cell_coords: coordinate tuple (rows, cols) as numpy.where() from which to compute the angles
- viewpoint: tuple (row, col) indicating the position of the observer. Each of row and col is an integer.
Returns a sorted list of angles
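A hedged sketch of this angle computation, assuming row indices grow downward (so the row offset is negated) and angles are normalized to [0, 2*pi) with 0 pointing right of the viewpoint:

```python
import numpy as np

# Illustrative reimplementation of the angle computation described above;
# the downward-row convention is an assumption, not confirmed by the docs.
def cell_angles(cell_coords, viewpoint):
    rows, cols = cell_coords
    angles = np.arctan2(-(rows - viewpoint[0]), cols - viewpoint[1])
    return np.sort(angles % (2.0 * np.pi))
```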
-
natcap.invest.scenic_quality.scenic_quality_core.
cell_link_factory
¶ alias of
cell_link
-
natcap.invest.scenic_quality.scenic_quality_core.
compute_viewshed
(input_array, nodata, coordinates, obs_elev, tgt_elev, max_dist, cell_size, refraction_coeff, alg_version)¶ Compute the viewshed for a single observer. Inputs:
- input_array: a numpy array of terrain elevations
- nodata: input_array’s nodata value
- coordinates: tuple (east, north) of coordinates of the viewing position
- obs_elev: observer elevation above the raster map
- tgt_elev: offset for target elevation above the ground. Applied to every point on the raster.
- max_dist: maximum visibility radius. By default infinity (-1).
- cell_size: cell size in meters (integer)
- refraction_coeff: refraction coefficient (0.0-1.0), not used yet
- alg_version: name of the algorithm to be used. Either ‘cython’ (default) or ‘python’.
Returns the visibility map for the DEM as a numpy array
-
natcap.invest.scenic_quality.scenic_quality_core.
execute
(args)¶ Entry point for scenic quality core computation.
Inputs:
Returns
-
natcap.invest.scenic_quality.scenic_quality_core.
find_active_pixel
(sweep_line, distance)¶ Find an active pixel based on distance. Return None if it can’t be found.
-
natcap.invest.scenic_quality.scenic_quality_core.
find_active_pixel_fast
(sweep_line, skip_nodes, distance)¶ Find an active pixel based on distance.
- Inputs:
- -sweep_line: a linked list of linked_cell as created by the
- linked_cell_factory.
- -skip_list: an array of linked lists that constitutes the hierarchy
- of skip pointers in the skip list. Each cell is defined as ???
-distance: the key used to search the sweep_line
Return the linked_cell associated to ‘distance’, or None if such cell doesn’t exist
-
natcap.invest.scenic_quality.scenic_quality_core.
find_pixel_before_fast
(sweep_line, skip_nodes, distance)¶ Find the active pixel before the one with distance.
- Inputs:
- -sweep_line: a linked list of linked_cell as created by the
- linked_cell_factory.
- -skip_nodes: an array of linked lists that constitutes the hierarchy
- of skip pointers in the skip list. Each cell is defined as ???
-distance: the key used to search the sweep_line
- Return a tuple (pixel, hierarchy) where:
- -pixel is the linked_cell right before ‘distance’, or None if it doesn’t exist (either ‘distance’ is the first cell, or the sweep_line is empty).
- -hierarchy is the list of intermediate skip nodes, starting from the bottom node right above the active pixel up to the top node.
-
natcap.invest.scenic_quality.scenic_quality_core.
get_perimeter_cells
(array_shape, viewpoint, max_dist=-1)¶ Compute cells along the perimeter of an array.
- Inputs:
- -array_shape: tuple (row, col) as ndarray.shape containing the size of the array from which to compute the perimeter
- -viewpoint: tuple (row, col) indicating the position of the observer
- -max_dist: maximum distance in pixels from the center of the array. Negative values are ignored (same effect as infinite distance).
Returns a tuple (rows, cols) of the cell rows and columns following the convention of numpy.where(), where the first cell is immediately to the right of the viewpoint and the others are enumerated clockwise.
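A simplified version of the perimeter enumeration (ignoring max_dist and the clockwise ordering guarantee) just collects the border cells of the array:

```python
def perimeter_cells(array_shape):
    """Return (rows, cols) of the cells on the border of an array,
    following the (rows, cols) convention of numpy.where().

    Illustrative sketch only: cells come out in row-major order, not
    clockwise from the viewpoint as in the real function.
    """
    n_rows, n_cols = array_shape
    cells = []
    for row in range(n_rows):
        for col in range(n_cols):
            if row in (0, n_rows - 1) or col in (0, n_cols - 1):
                cells.append((row, col))
    rows = [c[0] for c in cells]
    cols = [c[1] for c in cells]
    return rows, cols

rows, cols = perimeter_cells((4, 5))
print(len(rows))  # 2*4 + 2*5 - 4 = 14
```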
-
natcap.invest.scenic_quality.scenic_quality_core.
hierarchy_is_consistent
(pixel, hierarchy, skip_nodes)¶ Makes simple tests to ensure that the hierarchy is consistent
-
natcap.invest.scenic_quality.scenic_quality_core.
linked_cell_factory
¶ alias of
linked_cell
-
natcap.invest.scenic_quality.scenic_quality_core.
list_extreme_cell_angles
(array_shape, viewpoint_coords, max_dist)¶ List the minimum and maximum angles spanned by each cell of a rectangular raster if scanned by a sweep line centered on viewpoint_coords.
- Inputs:
- -array_shape: a shape tuple (rows, cols) as obtained from
- numpy.ndarray.shape
- -viewpoint_coords: a 2-tuple of coordinates, similar to array_shape, where the sweep line originates
- -max_dist: maximum viewing distance
Returns a tuple (min, center, max, I, J) where min, center and max are Nx1 numpy arrays of each raster cell’s minimum, center, and maximum angles, and I and J are Nx1 numpy arrays of the row and column coordinates of each point.
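The angle bookkeeping is standard atan2 geometry: with the row axis pointing down, the angle of a cell center seen from the viewpoint is atan2(viewpoint_row - row, col - viewpoint_col), wrapped into [0, 2π). A sketch for center angles only (the minimum and maximum angles of a cell would come from applying the same formula to its corners):

```python
import math

def cell_center_angle(viewpoint, cell):
    """Angle of a cell center seen from the viewpoint, in [0, 2*pi).

    Rows increase downward, so the vertical component is flipped to get
    conventional counterclockwise angles (0 = directly to the right).
    """
    vp_row, vp_col = viewpoint
    row, col = cell
    return math.atan2(vp_row - row, col - vp_col) % (2 * math.pi)

right_angle = cell_center_angle((2, 2), (2, 4))  # directly right
up_angle = cell_center_angle((2, 2), (0, 2))     # directly above
print(right_angle, up_angle)  # 0.0 and pi/2
```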
-
natcap.invest.scenic_quality.scenic_quality_core.
print_hierarchy
(hierarchy)¶
-
natcap.invest.scenic_quality.scenic_quality_core.
print_node
(node)¶ Print a node by displaying its ‘distance’ and ‘next’ fields
-
natcap.invest.scenic_quality.scenic_quality_core.
print_skip_list
(sweep_line, skip_nodes)¶
-
natcap.invest.scenic_quality.scenic_quality_core.
print_sweep_line
(sweep_line)¶
-
natcap.invest.scenic_quality.scenic_quality_core.
remove_active_pixel
(sweep_line, distance)¶ Remove a pixel based on distance. Do nothing if it can’t be found.
-
natcap.invest.scenic_quality.scenic_quality_core.
skip_list_is_consistent
(linked_list, skip_nodes)¶ Function that checks for skip list inconsistencies.
- Inputs:
- -sweep_line: the container proper, a dictionary
- implementing a linked list that contains the items ordered by increasing distance
- -skip_nodes: a python dict that is the hierarchical structure
- sitting on top of the sweep_line to allow O(log n) operations.
- Returns a tuple (is_consistent, message) where is_consistent is
- True if the list is consistent, False otherwise. If is_consistent is False, the string ‘message’ explains the cause.
-
natcap.invest.scenic_quality.scenic_quality_core.
sweep_through_angles
(angles, add_events, center_events, remove_events, I, J, distances, visibility, visibility_map)¶ Update the active pixels as the algorithm consumes the sweep angles
-
natcap.invest.scenic_quality.scenic_quality_core.
update_visible_pixels
(active_pixels, I, J, visibility_map)¶ Update the array of visible pixels from the active pixel’s visibility
- Inputs:
-active_pixels: a linked list of dictionaries containing the following fields:
-distance: distance between the pixel center and the viewpoint
-visibility: an elevation/distance ratio used by the algorithm to determine which pixels are obstructed
-index: pixel index in the event stream, used to find the pixel’s coordinates ‘i’ and ‘j’
-next: points to the next pixel, or is None at the end
The linked list is implemented with a dictionary where the pixel’s distance is the key. The closest pixel is also referenced by the key ‘closest’.
-I: the array of pixel rows indexable by pixel[‘index’]
-J: the array of pixel columns indexable by pixel[‘index’]
-visibility_map: a python array the same size as the DEM with 1s for visible pixels and 0s otherwise. The viewpoint is always visible.
Returns nothing
-
natcap.invest.scenic_quality.scenic_quality_core.
viewshed
(input_array, cell_size, array_shape, nodata, output_uri, coordinates, obs_elev=1.75, tgt_elev=0.0, max_dist=-1.0, refraction_coeff=None, alg_version='cython')¶ URI wrapper for the viewshed computation function
- Inputs:
-input_array: numpy array of the elevation raster map
-cell_size: raster cell size in meters
-array_shape: input_array’s shape as returned by ndarray.shape
-nodata: input_array’s raster nodata value
-output_uri: output raster uri, compatible with input_array’s size
-coordinates: tuple (east, north) of the coordinates of the viewing position
-obs_elev: observer elevation above the raster map
-tgt_elev: offset for target elevation above the ground, applied to every point on the raster
-max_dist: maximum visibility radius; infinity (-1) by default
-refraction_coeff: refraction coefficient (0.0-1.0), not used yet
-alg_version: name of the algorithm to be used, either ‘cython’ (default) or ‘python’
Returns nothing
Scenic Quality Cython Core¶
Grass Examples¶
GRASS Python script examples.
-
class
natcap.invest.scenic_quality.grass_examples.
grasswrapper
(dbBase='', location='', mapset='')¶
-
natcap.invest.scenic_quality.grass_examples.
random_string
(length)¶
Los Sextante¶
-
natcap.invest.scenic_quality.los_sextante.
main
()¶
-
natcap.invest.scenic_quality.los_sextante.
run_script
(iface)¶ This shall be called from Script Runner.
Viewshed Grass¶
-
natcap.invest.scenic_quality.viewshed_grass.
execute
(args)¶
-
class
natcap.invest.scenic_quality.viewshed_grass.
grasswrapper
(dbBase='', location='/home/mlacayo/workspace/newLocation', mapset='PERMANENT')¶
-
natcap.invest.scenic_quality.viewshed_grass.
project_cleanup
()¶
-
natcap.invest.scenic_quality.viewshed_grass.
project_setup
(dataset_uri)¶
-
natcap.invest.scenic_quality.viewshed_grass.
viewshed
(dataset_uri, feature_set_uri, dataset_out_uri)¶
Viewshed Sextante¶
-
natcap.invest.scenic_quality.viewshed_sextante.
viewshed
(input_uri, output_uri, coordinates, obs_elev=1.75, tgt_elev=0.0, max_dist=-1, refraction_coeff=0.14286, memory=500, stream_dir=None, consider_curvature=False, consider_refraction=False, boolean_mode=False, elevation_mode=False, verbose=False, quiet=False)¶
Module contents¶
Sediment Delivery Ratio Package¶
Model Entry Point¶
Sediment Delivery Ratio¶
Module contents¶
InVEST Sediment Delivery Ratio (SDR) module.
- The SDR method in this model is based on:
- Winchell, M. F., et al. “Extension and validation of a geographic information system-based method for calculating the Revised Universal Soil Loss Equation length-slope factor for erosion risk assessments in large watersheds.” Journal of Soil and Water Conservation 63.3 (2008): 105-111.
-
natcap.invest.sdr.
execute
(args)¶ Sediment Delivery Ratio.
This function calculates the sediment export and retention of a landscape using the sediment delivery ratio model described in the InVEST user’s guide.
Parameters: - args['workspace_dir'] (string) – output directory for intermediate, temporary, and final files
- args['results_suffix'] (string) – (optional) string to append to any output file names
- args['dem_path'] (string) – path to a digital elevation raster
- args['erosivity_path'] (string) – path to rainfall erosivity index raster
- args['erodibility_path'] (string) – a path to soil erodibility raster
- args['lulc_path'] (string) – path to land use/land cover raster
- args['watersheds_path'] (string) – path to vector of the watersheds
- args['biophysical_table_path'] (string) – path to a CSV file with biophysical information for each land use class; must contain the fields ‘usle_c’ and ‘usle_p’
- args['threshold_flow_accumulation'] (number) – number of upstream pixels on the DEM required before a pixel is classified as a stream.
- args['k_param'] (number) – k calibration parameter
- args['sdr_max'] (number) – maximum value of the SDR
- args['ic_0_param'] (number) – ic_0 calibration parameter
- args['drainage_path'] (string) – (optional) path to drainage raster that is used to add additional drainage areas to the internally calculated stream layer
Returns: None.
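Mirroring the example dictionaries given for the other models, an args dictionary for this entry point might look as follows (all paths are hypothetical placeholders):

```python
# Hypothetical inputs; keys follow the parameter list above.
sdr_args = {
    'workspace_dir': 'path/to/workspace_dir',
    'results_suffix': '_results',
    'dem_path': 'path/to/raster',
    'erosivity_path': 'path/to/raster',
    'erodibility_path': 'path/to/raster',
    'lulc_path': 'path/to/raster',
    'watersheds_path': 'path/to/shapefile',
    'biophysical_table_path': 'path/to/csv',
    'threshold_flow_accumulation': 1000,
    'k_param': 2.0,
    'sdr_max': 0.8,
    'ic_0_param': 0.5,
    # 'drainage_path' is optional and omitted here
}
```

The dictionary would then be passed to `natcap.invest.sdr.execute(sdr_args)`.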
Timber Package¶
Model Entry Point¶
-
natcap.invest.timber.timber.
execute
(args)¶ Managed Timber Production.
Invoke the timber model given uri inputs specified by the user guide.
Parameters: - args['workspace_dir'] (string) – The file location where the outputs will be written (Required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['timber_shape_uri'] (string) – The shapefile describing timber parcels with fields as described in the user guide (Required)
- args['attr_table_uri'] (string) – The CSV attribute table location with fields that describe polygons in timber_shape_uri (Required)
- args['market_disc_rate'] (float) – The market discount rate
Returns: nothing
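Following the example-dictionary convention used by the other models, a call might be assembled like this (all paths are hypothetical placeholders):

```python
# Hypothetical inputs; keys follow the parameter list above.
timber_args = {
    'workspace_dir': 'path/to/workspace_dir',
    'results_suffix': '_results',
    'timber_shape_uri': 'path/to/shapefile',
    'attr_table_uri': 'path/to/csv',
    'market_disc_rate': 7.0,
}
```

The dictionary would then be passed to `natcap.invest.timber.timber.execute(timber_args)`.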
Timber¶
InVEST Timber model.
-
natcap.invest.timber.timber.
execute
(args) Managed Timber Production.
Invoke the timber model given uri inputs specified by the user guide.
Parameters: - args['workspace_dir'] (string) – The file location where the outputs will be written (Required)
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['timber_shape_uri'] (string) – The shapefile describing timber parcels with fields as described in the user guide (Required)
- args['attr_table_uri'] (string) – The CSV attribute table location with fields that describe polygons in timber_shape_uri (Required)
- args['market_disc_rate'] (float) – The market discount rate
Returns: nothing
Timber Core¶
Module contents¶
Wave Energy Package¶
Model Entry Point¶
-
natcap.invest.wave_energy.wave_energy.
execute
(args)¶ Wave Energy.
Executes both the biophysical and valuation parts of the wave energy model (WEM). Files will be written on disk to the intermediate and output directories. The outputs computed for biophysical and valuation include: wave energy capacity raster, wave power raster, net present value raster, percentile rasters for the previous three, and a point shapefile of the wave points with attributes.
Parameters: - workspace_dir (string) – Where the intermediate and output folder/files will be saved. (required)
- wave_base_data_uri (string) – Directory location of wave base data including WW3 data and analysis area shapefile. (required)
- analysis_area_uri (string) – A string identifying the analysis area of interest. Used to determine wave data shapefile, wave data text file, and analysis area boundary shape. (required)
- aoi_uri (string) – A polygon shapefile outlining a more detailed area within the analysis area. This shapefile should be projected with linear units being in meters. (required to run Valuation model)
- machine_perf_uri (string) – The path of a CSV file that holds the machine performance table. (required)
- machine_param_uri (string) – The path of a CSV file that holds the machine parameter table. (required)
- dem_uri (string) – The path of the Global Digital Elevation Model (DEM). (required)
- suffix (string) – A python string of characters to append to each output filename (optional)
- valuation_container (boolean) – Indicates whether the model includes valuation
- land_gridPts_uri (string) – A CSV file path containing the Landing and Power Grid Connection Points table. (required for Valuation)
- machine_econ_uri (string) – A CSV file path for the machine economic parameters table. (required for Valuation)
- number_of_machines (int) – An integer specifying the number of machines for a wave farm site. (required for Valuation)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wave_base_data_uri': 'path/to/base_data_dir',
    'analysis_area_uri': 'West Coast of North America and Hawaii',
    'aoi_uri': 'path/to/shapefile',
    'machine_perf_uri': 'path/to/csv',
    'machine_param_uri': 'path/to/csv',
    'dem_uri': 'path/to/raster',
    'suffix': '_results',
    'valuation_container': True,
    'land_gridPts_uri': 'path/to/csv',
    'machine_econ_uri': 'path/to/csv',
    'number_of_machines': 28,
}
Wave Energy¶
InVEST Wave Energy Model Core Code
-
exception
natcap.invest.wave_energy.wave_energy.
IntersectionError
¶ Bases:
exceptions.Exception
A custom error message for when the AOI does not intersect any wave data points.
-
natcap.invest.wave_energy.wave_energy.
build_point_shapefile
(driver_name, layer_name, path, data, prj, coord_trans)¶ This function creates and saves a point geometry shapefile to disk. It creates only one ‘Id’ field and creates as many features as specified in ‘data’.
driver_name - A string specifying a valid ogr driver type
layer_name - A string representing the name of the layer
path - A string of the output path of the file
data - A dictionary whose keys are the Ids for the field and whose values are two-element arrays of latitude and longitude
prj - A spatial reference acting as the projection/datum
coord_trans - A coordinate transformation
returns - Nothing
-
natcap.invest.wave_energy.wave_energy.
calculate_distance
(xy_1, xy_2)¶ For each point in xy_1, this function calculates the distance to the points in xy_2 and stores the shortest distance found in a list, min_dist. The function also stores the index of whichever point in xy_2 was closest, as an id in a list that corresponds to min_dist.
xy_1 - A numpy array of points in the form [x,y]
xy_2 - A numpy array of points in the form [x,y]
returns - A numpy array of shortest distances and a numpy array of ids corresponding to the array of shortest distances
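The nearest-neighbor search this describes can be sketched in pure Python (the real function operates on numpy arrays; this stdlib version is for illustration):

```python
import math

def calculate_distance(xy_1, xy_2):
    """For each point in xy_1, the shortest distance to any point in
    xy_2 and the index of that closest point."""
    min_dist, min_id = [], []
    for x1, y1 in xy_1:
        dists = [math.hypot(x1 - x2, y1 - y2) for x2, y2 in xy_2]
        best = min(range(len(dists)), key=dists.__getitem__)
        min_dist.append(dists[best])
        min_id.append(best)
    return min_dist, min_id

dist, ids = calculate_distance([(0, 0), (10, 0)], [(0, 3), (9, 0)])
print(dist, ids)  # [3.0, 1.0] [0, 1]
```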
-
natcap.invest.wave_energy.wave_energy.
calculate_percentiles_from_raster
(raster_uri, percentiles)¶ Does a memory-efficient sort to determine the percentiles of a raster. The percentile algorithm currently used is the nearest-rank method.
raster_uri - a uri to a gdal raster on disk
percentiles - a list of desired percentiles to look up, ex: [25,50,75,90]
returns - a list of values corresponding to the percentiles from the percentiles list
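The nearest-rank method picks the ceil(p/100 · N)-th smallest value; an in-memory sketch (the real function streams the raster to stay memory-efficient):

```python
import math

def nearest_rank_percentiles(values, percentiles):
    """Nearest-rank percentiles of a sequence."""
    ordered = sorted(values)
    n = len(ordered)
    # nearest-rank: the ceil(p/100 * n)-th smallest value (1-based)
    return [ordered[max(int(math.ceil(p / 100.0 * n)), 1) - 1]
            for p in percentiles]

quartiles = nearest_rank_percentiles(range(1, 101), [25, 50, 75, 90])
print(quartiles)  # [25, 50, 75, 90]
```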
-
natcap.invest.wave_energy.wave_energy.
captured_wave_energy_to_shape
(energy_cap, wave_shape_uri)¶ Adds each captured wave energy value from the dictionary energy_cap to a field of the shapefile wave_shape. The values are set corresponding to the same (I,J) values, which are the keys of the dictionary and are used as the unique identifier of the shape.
energy_cap - A dictionary with keys (I,J) representing the wave energy capacity values
wave_shape_uri - A uri to a point geometry shapefile to write the new field/values to
returns - Nothing
-
natcap.invest.wave_energy.wave_energy.
clip_datasource_layer
(shape_to_clip_path, binding_shape_path, output_path)¶ Clip Shapefile Layer by second Shapefile Layer.
Uses ogr.Layer.Clip() to clip a Shapefile, where the output Layer inherits the projection and fields from the original Shapefile.
Parameters: - shape_to_clip_path (string) – a path to a Shapefile on disk. This is the Layer to clip. Must have same spatial reference as ‘binding_shape_path’.
- binding_shape_path (string) – a path to a Shapefile on disk. This is the Layer to clip to. Must have same spatial reference as ‘shape_to_clip_path’
- output_path (string) – a path on disk to write the clipped Shapefile to. Should end with a ‘.shp’ extension.
Returns: Nothing
-
natcap.invest.wave_energy.wave_energy.
compute_wave_energy_capacity
(wave_data, interp_z, machine_param)¶ Computes the wave energy capacity for each point and generates a dictionary whose keys are the points (I,J) and whose values are the wave energy capacity.
wave_data - A dictionary containing wave watch data with the following structure:
    {‘periods’: [1,2,3,4,...],
     ‘heights’: [.5,1.0,1.5,...],
     ‘bin_matrix’: {(i0,j0): [[2,5,3,2,...], [6,3,4,1,...],...],
                    (i1,j1): [[2,5,3,2,...], [6,3,4,1,...],...],
                    ...
                    (in, jn): [[2,5,3,2,...], [6,3,4,1,...],...]}}
interp_z - A 2D array of the interpolated values for the machine performance table
machine_param - A dictionary containing the restrictions for the machines (CapMax, TpMax, HsMax)
returns - A dictionary representing the wave energy capacity at each wave point
-
natcap.invest.wave_energy.wave_energy.
count_pixels_groups
(raster_uri, group_values)¶ Does a pixel count for each value in ‘group_values’ over the raster provided by ‘raster_uri’. Returns a list of pixel counts for each value in ‘group_values’
raster_uri - a uri path to a gdal raster on disk
group_values - a list of unique numbers for which to get a pixel count
returns - A list of integers, where each integer at an index corresponds to the pixel count of the value from ‘group_values’ found at the same index
-
natcap.invest.wave_energy.wave_energy.
create_attribute_csv_table
(attribute_table_uri, fields, data)¶ Create a new csv table from a dictionary
attribute_table_uri - a URI path for the new table to be written to disk
fields - a python list of the column names. The order of the fields in the list will be the order in which they are written. ex: [‘id’, ‘precip’, ‘total’]
data - a python dictionary representing the table. The dictionary should be constructed with unique numerical keys that point to a dictionary which represents a row in the table:
    data = {0: {‘id’:1, ‘precip’:43, ‘total’: 65},
            1: {‘id’:2, ‘precip’:65, ‘total’: 94}}
returns - nothing
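The behavior described above can be sketched with the standard csv module (Python 3 idiom shown; a sketch, not the package's implementation):

```python
import csv
import os
import tempfile

def create_attribute_csv_table(attribute_table_uri, fields, data):
    """Write the row dictionaries to a CSV, columns ordered by 'fields'."""
    with open(attribute_table_uri, 'w', newline='') as table_file:
        writer = csv.DictWriter(table_file, fieldnames=fields)
        writer.writeheader()
        for key in sorted(data):
            writer.writerow(data[key])

path = os.path.join(tempfile.mkdtemp(), 'attr_table.csv')
create_attribute_csv_table(
    path, ['id', 'precip', 'total'],
    {0: {'id': 1, 'precip': 43, 'total': 65},
     1: {'id': 2, 'precip': 65, 'total': 94}})
with open(path) as table_file:
    lines = table_file.read().splitlines()
print(lines[0])  # id,precip,total
```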
-
natcap.invest.wave_energy.wave_energy.
create_percentile_ranges
(percentiles, units_short, units_long, start_value)¶ Constructs the percentile ranges as Strings, with the first range starting at 1 and the last range being greater than the last percentile mark. Each string range is stored in a list that gets returned
percentiles - A list of the percentile marks in ascending order
units_short - A String that represents the shorthand for the units of the raster values (ex: kW/m)
units_long - A String that represents the description of the units of the raster values (ex: wave power per unit width of wave crest length (kW/m))
start_value - A String representing the first value that goes into the first percentile range (start_value - percentile_one)
returns - A list of Strings representing the ranges of the percentiles
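The range-string construction can be sketched as follows (units_long is omitted in this simplified version):

```python
def create_percentile_ranges(percentiles, units_short, start_value):
    """Build labels from start_value up through each percentile mark,
    plus a final open-ended range past the last mark."""
    ranges = []
    lower = start_value
    for mark in percentiles:
        ranges.append('%s - %s %s' % (lower, mark, units_short))
        lower = mark
    ranges.append('Greater than %s %s' % (percentiles[-1], units_short))
    return ranges

labels = create_percentile_ranges([25, 50, 75, 90], 'kW/m', 1)
print(labels[0])   # 1 - 25 kW/m
print(labels[-1])  # Greater than 90 kW/m
```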
-
natcap.invest.wave_energy.wave_energy.
create_percentile_rasters
(raster_path, output_path, units_short, units_long, start_value, percentile_list, aoi_shape_path)¶ Creates a percentile (quartile) raster based on the raster_dataset. An attribute table is also constructed for the raster_dataset that displays the ranges provided by taking the quartile of values. The following inputs are required:
raster_path - A uri to a gdal raster dataset with data of type integer
output_path - A String for the destination of the new raster
units_short - A String that represents the shorthand for the units of the raster values (ex: kW/m)
units_long - A String that represents the description of the units of the raster values (ex: wave power per unit width of wave crest length (kW/m))
start_value - A String representing the first value that goes into the first percentile range (start_value - percentile_one)
percentile_list - a python list of the percentile ranges, ex: [25, 50, 75, 90]
aoi_shape_path - a uri to an OGR polygon shapefile to clip the rasters to
returns - Nothing
-
natcap.invest.wave_energy.wave_energy.
execute
(args) Wave Energy.
Executes both the biophysical and valuation parts of the wave energy model (WEM). Files will be written on disk to the intermediate and output directories. The outputs computed for biophysical and valuation include: wave energy capacity raster, wave power raster, net present value raster, percentile rasters for the previous three, and a point shapefile of the wave points with attributes.
Parameters: - workspace_dir (string) – Where the intermediate and output folder/files will be saved. (required)
- wave_base_data_uri (string) – Directory location of wave base data including WW3 data and analysis area shapefile. (required)
- analysis_area_uri (string) – A string identifying the analysis area of interest. Used to determine wave data shapefile, wave data text file, and analysis area boundary shape. (required)
- aoi_uri (string) – A polygon shapefile outlining a more detailed area within the analysis area. This shapefile should be projected with linear units being in meters. (required to run Valuation model)
- machine_perf_uri (string) – The path of a CSV file that holds the machine performance table. (required)
- machine_param_uri (string) – The path of a CSV file that holds the machine parameter table. (required)
- dem_uri (string) – The path of the Global Digital Elevation Model (DEM). (required)
- suffix (string) – A python string of characters to append to each output filename (optional)
- valuation_container (boolean) – Indicates whether the model includes valuation
- land_gridPts_uri (string) – A CSV file path containing the Landing and Power Grid Connection Points table. (required for Valuation)
- machine_econ_uri (string) – A CSV file path for the machine economic parameters table. (required for Valuation)
- number_of_machines (int) – An integer specifying the number of machines for a wave farm site. (required for Valuation)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wave_base_data_uri': 'path/to/base_data_dir',
    'analysis_area_uri': 'West Coast of North America and Hawaii',
    'aoi_uri': 'path/to/shapefile',
    'machine_perf_uri': 'path/to/csv',
    'machine_param_uri': 'path/to/csv',
    'dem_uri': 'path/to/raster',
    'suffix': '_results',
    'valuation_container': True,
    'land_gridPts_uri': 'path/to/csv',
    'machine_econ_uri': 'path/to/csv',
    'number_of_machines': 28,
}
-
natcap.invest.wave_energy.wave_energy.
get_coordinate_transformation
(source_sr, target_sr)¶ This function takes a source and target spatial reference and creates a coordinate transformation from source to target, and one from target to source.
source_sr - A spatial reference
target_sr - A spatial reference
returns - A tuple, coord_trans (source to target) and coord_trans_opposite (target to source)
-
natcap.invest.wave_energy.wave_energy.
get_points_geometries
(shape_uri)¶ This function takes a shapefile and for each feature retrieves the X and Y values from its geometry. The X and Y values are stored in a numpy array as a point [x_location, y_location], which is returned when all the features have been iterated through.
shape_uri - A uri to an OGR shapefile datasource
returns - A numpy array of points representing the geometries of the shape’s features
-
natcap.invest.wave_energy.wave_energy.
load_binary_wave_data
(wave_file_uri)¶ The load_binary_wave_data function converts a pickled WW3 text file into a dictionary whose keys are the corresponding (I,J) values and whose values are two-dimensional arrays representing a matrix of the number of hours each seastate occurs over a 5 year period. The row and column headers are extracted once and stored in the dictionary as well.
wave_file_uri - The path to a pickled binary WW3 file.
returns - A dictionary of matrices representing hours of specific seastates, as well as the period and height ranges. It has the following structure:
    {‘periods’: [1,2,3,4,...],
     ‘heights’: [.5,1.0,1.5,...],
     ‘bin_matrix’: {(i0,j0): [[2,5,3,2,...], [6,3,4,1,...],...],
                    (i1,j1): [[2,5,3,2,...], [6,3,4,1,...],...],
                    ...
                    (in, jn): [[2,5,3,2,...], [6,3,4,1,...],...]}}
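The round trip through pickle can be illustrated with a tiny stand-in for the WW3 structure described above (the file path and data are placeholders):

```python
import os
import pickle
import tempfile

# A tiny stand-in for the pickled WW3 structure.
ww3 = {
    'periods': [1, 2, 3],
    'heights': [0.5, 1.0],
    'bin_matrix': {(0, 0): [[2, 5, 3], [6, 3, 4]]},
}
path = os.path.join(tempfile.mkdtemp(), 'ww3.pik')
with open(path, 'wb') as ww3_file:
    pickle.dump(ww3, ww3_file)

def load_binary_wave_data(wave_file_uri):
    """Load the pickled dictionary back from disk."""
    with open(wave_file_uri, 'rb') as wave_file:
        return pickle.load(wave_file)

wave_data = load_binary_wave_data(path)
print(sorted(wave_data))  # ['bin_matrix', 'heights', 'periods']
```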
-
natcap.invest.wave_energy.wave_energy.
pixel_size_based_on_coordinate_transform
(dataset_uri, coord_trans, point)¶ Get width and height of cell in meters.
Calculates the pixel width and height in meters given a coordinate transform and a reference point on the dataset that’s close to the transform’s projected coordinate system. This is only necessary if the dataset is not already in a meter coordinate system; for example, the dataset may be in lat/long (WGS84).
Parameters: - dataset_uri (string) – a String for a GDAL path on disk, projected in the form of lat/long decimal degrees
- coord_trans (osr.CoordinateTransformation) – an OSR coordinate transformation from dataset coordinate system to meters
- point (tuple) – a reference point close to the coordinate transform coordinate system. must be in the same coordinate system as dataset.
Returns: pixel_diff – a 2-tuple containing (pixel width in meters, pixel height in meters)
Return type: tuple
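The result can be approximated without GDAL: near latitude φ, one degree of longitude spans roughly (2πR/360)·cos(φ) meters and one degree of latitude roughly 2πR/360 meters (a spherical-Earth assumption, for illustration only; the real function uses the OSR coordinate transformation):

```python
import math

def approx_pixel_size_meters(pixel_size_deg, latitude_deg):
    """Approximate (width, height) in meters of a square pixel given in
    decimal degrees, on a spherical Earth of radius 6371 km."""
    meters_per_degree = 2 * math.pi * 6371000 / 360  # ~111195 m
    width = (pixel_size_deg * meters_per_degree
             * math.cos(math.radians(latitude_deg)))
    height = pixel_size_deg * meters_per_degree
    return width, height

w, h = approx_pixel_size_meters(1.0, 60.0)
print(round(w), round(h))  # cos(60 deg) = 0.5, so width is half of height
```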
-
natcap.invest.wave_energy.wave_energy.
pixel_size_helper
(shape_path, coord_trans, coord_trans_opposite, ds_uri)¶ This function helps retrieve the pixel sizes of the global DEM when given an area of interest that has a certain projection.
shape_path - A uri to a point shapefile datasource indicating where in the world we are interested
coord_trans - A coordinate transformation
coord_trans_opposite - A coordinate transformation that transforms in the opposite direction of ‘coord_trans’
ds_uri - A uri to a gdal dataset to get the pixel size from
returns - A tuple of the x and y pixel sizes of the global DEM, given in the units of what ‘shape’ is projected in
-
natcap.invest.wave_energy.wave_energy.
wave_energy_interp
(wave_data, machine_perf)¶ Generates a matrix representing the interpolation of the machine performance table using new ranges from wave watch data.
wave_data - A dictionary holding the new x range (period) and y range (height) values for the interpolation. The dictionary has the following structure:
    {‘periods’: [1,2,3,4,...],
     ‘heights’: [.5,1.0,1.5,...],
     ‘bin_matrix’: {(i0,j0): [[2,5,3,2,...], [6,3,4,1,...],...],
                    (i1,j1): [[2,5,3,2,...], [6,3,4,1,...],...],
                    ...
                    (in, jn): [[2,5,3,2,...], [6,3,4,1,...],...]}}
machine_perf - a dictionary that holds the machine performance information with the following keys and structure:
    machine_perf[‘periods’] - [1,2,3,...]
    machine_perf[‘heights’] - [.5,1,1.5,...]
    machine_perf[‘bin_matrix’] - [[1,2,3,...],[5,6,7,...],...]
returns - The interpolated matrix
-
natcap.invest.wave_energy.wave_energy.
wave_power
(shape_uri)¶ Calculates the wave power from the fields in the shapefile and writes the wave power value to a field for the corresponding feature.
shape_uri - A uri to a Shapefile that has all the attributes represented in fields to calculate wave power at a specific wave farm
returns - Nothing
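For reference, deep-water wave power per unit width of wave crest follows the standard oceanographic formula P = ρ g² Hs² Te / (64π). A quick sanity check (the seawater density and gravity values are assumptions of this sketch, not values read from the model):

```python
import math

def deep_water_wave_power(sig_height_m, energy_period_s,
                          rho=1025.0, gravity=9.81):
    """Deep-water wave power per unit width of wave crest, in kW/m.

    P = rho * g^2 * Hs^2 * Te / (64 * pi), with Hs the significant
    wave height (m) and Te the energy period (s).
    """
    power_watts = (rho * gravity ** 2 / (64 * math.pi)
                   * sig_height_m ** 2 * energy_period_s)
    return power_watts / 1000.0

power = deep_water_wave_power(2.0, 8.0)
print(round(power, 1))  # ~15.7 kW/m
```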
Module contents¶
Wind Energy Package¶
Model Entry Point¶
-
natcap.invest.wind_energy.wind_energy.
execute
(args)¶ Wind Energy.
This module handles the execution of the wind energy model given the following dictionary:
Parameters: - workspace_dir (string) – a python string which is the uri path to where the outputs will be saved (required)
- wind_data_uri (string) – path to a CSV file with the following header: [‘LONG’,’LATI’,’LAM’, ‘K’, ‘REF’]. Each following row is a location with at least the Longitude, Latitude, Scale (‘LAM’), Shape (‘K’), and reference height (‘REF’) at which the data was collected (required)
- aoi_uri (string) – a uri to an OGR datasource that is of type polygon and projected in linear units of meters. The polygon specifies the area of interest for the wind data points. If limiting the wind farm bins by distance, then the aoi should also cover a portion of the land polygon that is of interest (optional for biophysical and no distance masking, required for biophysical and distance masking, required for valuation)
- bathymetry_uri (string) – a uri to a GDAL dataset that has the depth values of the area of interest (required)
- land_polygon_uri (string) – a uri to an OGR datasource of type polygon that provides a coastline for determining distances from wind farm bins. Enabled by AOI and required if wanting to mask by distances or run valuation
- global_wind_parameters_uri (string) – a uri to a CSV file that holds the global parameter values for both the biophysical and valuation modules (required)
- suffix (string) – a String to append to the end of the output files (optional)
- turbine_parameters_uri (string) – a uri to a CSV file that holds the turbines biophysical parameters as well as valuation parameters (required)
- number_of_turbines (int) – an integer value for the number of machines for the wind farm (required for valuation)
- min_depth (float) – a float value for the minimum depth for offshore wind farm installation (meters) (required)
- max_depth (float) – a float value for the maximum depth for offshore wind farm installation (meters) (required)
- min_distance (float) – a float value for the minimum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- max_distance (float) – a float value for the maximum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- valuation_container (boolean) – Indicates whether model includes valuation
- foundation_cost (float) – a float representing how much the foundation will cost for the specific type of turbine (required for valuation)
- discount_rate (float) – a float value for the discount rate (required for valuation)
- grid_points_uri (string) – a uri to a CSV file that specifies the landing and grid point locations (optional)
- avg_grid_distance (float) – a float for the average distance in kilometers from a grid connection point to a land connection point (required for valuation if grid connection points are not provided)
- price_table (boolean) – a bool indicating whether to use the wind energy price table or not (required)
- wind_schedule (string) – a URI to a CSV file for the yearly prices of wind energy for the lifespan of the farm (required if ‘price_table’ is true)
- wind_price (float) – a float for the wind energy price at year 0 (required if price_table is false)
- rate_change (float) – a float as a percent for the annual rate of change in the price of wind energy. (required if price_table is false)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wind_data_uri': 'path/to/file',
    'aoi_uri': 'path/to/shapefile',
    'bathymetry_uri': 'path/to/raster',
    'land_polygon_uri': 'path/to/shapefile',
    'global_wind_parameters_uri': 'path/to/csv',
    'suffix': '_results',
    'turbine_parameters_uri': 'path/to/csv',
    'number_of_turbines': 10,
    'min_depth': 3,
    'max_depth': 60,
    'min_distance': 0,
    'max_distance': 200000,
    'valuation_container': True,
    'foundation_cost': 3.4,
    'discount_rate': 7.0,
    'grid_points_uri': 'path/to/csv',
    'avg_grid_distance': 4,
    'price_table': True,
    'wind_schedule': 'path/to/csv',
    'wind_price': 0.4,
    'rate_change': 0.0,
}
Returns: None
Wind Energy¶
InVEST Wind Energy model
-
exception
natcap.invest.wind_energy.wind_energy.
FieldError
¶ Bases:
exceptions.Exception
A custom error message for fields that are missing
-
exception
natcap.invest.wind_energy.wind_energy.
TimePeriodError
¶ Bases:
exceptions.Exception
A custom error message for when the number of years does not match the number of years given in the price table
-
natcap.invest.wind_energy.wind_energy.
add_field_to_shape_given_list
(shape_ds_uri, value_list, field_name)¶ Adds a field and a value to a given shapefile from a list of values. The list of values must be the same size as the number of features in the shape
shape_ds_uri - a URI to an OGR datasource
value_list - a list of values that is the same length as there are features in ‘shape_ds’
field_name - a String for the name of the new field
returns - nothing
- natcap.invest.wind_energy.wind_energy.calculate_distances_grid(land_shape_uri, harvested_masked_uri, tmp_dist_final_uri)¶
  Creates a distance transform raster from an OGR shapefile. The function first burns the features from 'land_shape_uri' onto a raster using 'harvested_masked_uri' as the base for that raster. It then does a distance transform from those locations and converts from pixel distances to distances in meters.
  land_shape_uri - a URI to an OGR shapefile that has the desired features to get the distance from (required)
  harvested_masked_uri - a URI to a GDAL raster that is used to get the proper extents and configuration for new rasters
  tmp_dist_final_uri - a URI to a GDAL raster for the final distance transform raster output
  returns - Nothing
- natcap.invest.wind_energy.wind_energy.calculate_distances_land_grid(land_shape_uri, harvested_masked_uri, tmp_dist_final_uri)¶
  Creates a distance transform raster based on the shortest distances of each point feature in 'land_shape_uri' and each feature's 'L2G' field.
  land_shape_uri - a URI to an OGR shapefile that has the desired features to get the distance from (required)
  harvested_masked_uri - a URI to a GDAL raster that is used to get the proper extents and configuration for new rasters
  tmp_dist_final_uri - a URI to a GDAL raster for the final distance transform raster output
  returns - Nothing
- natcap.invest.wind_energy.wind_energy.clip_and_reproject_raster(raster_uri, aoi_uri, projected_uri)¶
  Clip and project a Dataset to an area of interest.
  raster_uri - a URI to a gdal Dataset
  aoi_uri - a URI to an ogr DataSource of geometry type polygon
  projected_uri - a URI string for the output dataset to be written to disk
  returns - nothing
- natcap.invest.wind_energy.wind_energy.clip_and_reproject_shapefile(shapefile_uri, aoi_uri, projected_uri)¶
  Clip and project a DataSource to an area of interest.
  shapefile_uri - a URI to an ogr Datasource
  aoi_uri - a URI to an ogr DataSource of geometry type polygon
  projected_uri - a URI string for the output shapefile to be written to disk
  returns - nothing
- natcap.invest.wind_energy.wind_energy.clip_datasource(aoi_uri, orig_ds_uri, output_uri)¶
  Clip an OGR Datasource of geometry type polygon by another OGR Datasource of geometry type polygon. The aoi should be a shapefile with a layer that has only one polygon feature.
  aoi_uri - a URI to an OGR Datasource that is the clipping bounding box
  orig_ds_uri - a URI to an OGR Datasource to clip
  output_uri - output uri path for the clipped datasource
  returns - Nothing
- natcap.invest.wind_energy.wind_energy.combine_dictionaries(dict_1, dict_2)¶
  Add dict_2 to dict_1 and return the result in a new dictionary. Both dictionaries should be single level, with keys that point to values. If a key in 'dict_2' already exists in 'dict_1' it will be ignored.
  dict_1 - a python dictionary, ex: {'ws_id':1, 'vol':65}
  dict_2 - a python dictionary, ex: {'size':11, 'area':5}
  returns - a python dictionary that is the combination of 'dict_1' and 'dict_2', ex: {'ws_id':1, 'vol':65, 'area':5, 'size':11}
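The merge behavior described above (keys already present in dict_1 win) can be sketched in plain Python; this is an illustrative reimplementation, not the package's source:

```python
def combine_dictionaries(dict_1, dict_2):
    # Copy dict_1 so neither input is mutated.
    result = dict(dict_1)
    for key, value in dict_2.items():
        if key not in result:  # keys already in dict_1 are kept as-is
            result[key] = value
    return result

combined = combine_dictionaries({'ws_id': 1, 'vol': 65}, {'size': 11, 'area': 5})
```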
- natcap.invest.wind_energy.wind_energy.create_wind_farm_box(spat_ref, start_point, x_len, y_len, out_uri)¶
  Create an OGR shapefile where the geometry is a set of lines.
  spat_ref - a SpatialReference to use in creating the output shapefile (required)
  start_point - a tuple of floats indicating the first vertex of the line (required)
  x_len - an integer value for the length of the line segment in the X direction (required)
  y_len - an integer value for the length of the line segment in the Y direction (required)
  out_uri - a string representing the file path to disk for the new shapefile (required)
  returns - nothing
- natcap.invest.wind_energy.wind_energy.execute(args) Wind Energy.
This module handles the execution of the wind energy model given the following dictionary:
Parameters: - workspace_dir (string) – a python string which is the uri path to where the outputs will be saved (required)
- wind_data_uri (string) – path to a CSV file with the following header: [‘LONG’,’LATI’,’LAM’, ‘K’, ‘REF’]. Each following row is a location with at least the Longitude, Latitude, Scale (‘LAM’), Shape (‘K’), and reference height (‘REF’) at which the data was collected (required)
- aoi_uri (string) – a uri to an OGR datasource that is of type polygon and projected in linear units of meters. The polygon specifies the area of interest for the wind data points. If limiting the wind farm bins by distance, then the aoi should also cover a portion of the land polygon that is of interest (optional for biophysical and no distance masking, required for biophysical and distance masking, required for valuation)
- bathymetry_uri (string) – a uri to a GDAL dataset that has the depth values of the area of interest (required)
- land_polygon_uri (string) – a uri to an OGR datasource of type polygon that provides a coastline for determining distances from wind farm bins. Enabled by AOI and required if wanting to mask by distances or run valuation
- global_wind_parameters_uri (string) – a uri to a CSV file that holds the global parameter values for both the biophysical and valuation modules (required)
- suffix (string) – a String to append to the end of the output files (optional)
- turbine_parameters_uri (string) – a uri to a CSV file that holds the turbines biophysical parameters as well as valuation parameters (required)
- number_of_turbines (int) – an integer value for the number of machines for the wind farm (required for valuation)
- min_depth (float) – a float value for the minimum depth for offshore wind farm installation (meters) (required)
- max_depth (float) – a float value for the maximum depth for offshore wind farm installation (meters) (required)
- min_distance (float) – a float value for the minimum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- max_distance (float) – a float value for the maximum distance from shore for offshore wind farm installation (meters) The land polygon must be selected for this input to be active (optional, required for valuation)
- valuation_container (boolean) – Indicates whether model includes valuation
- foundation_cost (float) – a float representing how much the foundation will cost for the specific type of turbine (required for valuation)
- discount_rate (float) – a float value for the discount rate (required for valuation)
- grid_points_uri (string) – a uri to a CSV file that specifies the landing and grid point locations (optional)
- avg_grid_distance (float) – a float for the average distance in kilometers from a grid connection point to a land connection point (required for valuation if grid connection points are not provided)
- price_table (boolean) – a bool indicating whether to use the wind energy price table or not (required)
- wind_schedule (string) – a URI to a CSV file for the yearly prices of wind energy for the lifespan of the farm (required if ‘price_table’ is true)
- wind_price (float) – a float for the wind energy price at year 0 (required if price_table is false)
- rate_change (float) – a float as a percent for the annual rate of change in the price of wind energy. (required if price_table is false)
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'wind_data_uri': 'path/to/file',
    'aoi_uri': 'path/to/shapefile',
    'bathymetry_uri': 'path/to/raster',
    'land_polygon_uri': 'path/to/shapefile',
    'global_wind_parameters_uri': 'path/to/csv',
    'suffix': '_results',
    'turbine_parameters_uri': 'path/to/csv',
    'number_of_turbines': 10,
    'min_depth': 3,
    'max_depth': 60,
    'min_distance': 0,
    'max_distance': 200000,
    'valuation_container': True,
    'foundation_cost': 3.4,
    'discount_rate': 7.0,
    'grid_points_uri': 'path/to/csv',
    'avg_grid_distance': 4,
    'price_table': True,
    'wind_schedule': 'path/to/csv',
    'wind_price': 0.4,
    'rate_change': 0.0,
}
Returns: None
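The conditional requirements above (wind_schedule only when price_table is true; wind_price and rate_change only when it is false) can be checked before calling the model. A minimal sketch using only the argument names documented above; the validation helper itself is hypothetical and not part of the package:

```python
def check_valuation_args(args):
    """Hypothetical pre-flight check for the price-related args above."""
    missing = []
    if args.get('price_table'):
        # A price table supplies yearly prices directly.
        if 'wind_schedule' not in args:
            missing.append('wind_schedule')
    else:
        # Otherwise a base price and an annual rate of change are needed.
        for key in ('wind_price', 'rate_change'):
            if key not in args:
                missing.append(key)
    return missing

args = {
    'workspace_dir': 'path/to/workspace_dir',
    'price_table': False,
    'wind_price': 0.4,
}
missing = check_valuation_args(args)  # rate_change is still missing
```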
- natcap.invest.wind_energy.wind_energy.get_highest_harvested_geom(wind_points_uri)¶
  Find the point with the highest harvested value for wind energy and return its geometry.
  wind_points_uri - a URI to an OGR Datasource of a point geometry shapefile for wind energy
  returns - the geometry of the point with the highest harvested value
- natcap.invest.wind_energy.wind_energy.mask_by_distance(dataset_uri, min_dist, max_dist, out_nodata, dist_uri, mask_uri)¶
  Given a raster whose pixels are distances, bound them by a minimum and maximum distance.
  dataset_uri - a URI to a GDAL raster with distance values
  min_dist - an integer of the minimum distance allowed in meters
  max_dist - an integer of the maximum distance allowed in meters
  dist_uri - the URI output of the raster converted from distance transform ranks to distance values in meters
  mask_uri - the URI output of the raster masked by distance values
  out_nodata - the nodata value of the raster
  returns - nothing
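The bounding step can be pictured with numpy; this is a sketch of the masking logic only (the real function also writes GDAL rasters to dist_uri and mask_uri):

```python
import numpy as np

def mask_by_distance_array(dist_meters, min_dist, max_dist, out_nodata):
    # Keep pixels whose distance falls inside [min_dist, max_dist];
    # everything outside the band becomes nodata.
    keep = (dist_meters >= min_dist) & (dist_meters <= max_dist)
    return np.where(keep, dist_meters, out_nodata)

dist = np.array([0.0, 50.0, 150.0, 300.0])
masked = mask_by_distance_array(dist, min_dist=10, max_dist=200, out_nodata=-1.0)
```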
- natcap.invest.wind_energy.wind_energy.pixel_size_based_on_coordinate_transform_uri(dataset_uri, coord_trans, point)¶
  Get width and height of cell in meters.
  A wrapper for pixel_size_based_on_coordinate_transform that takes a dataset uri as an input and opens it before sending it along.
  Parameters: - dataset_uri (string) – a URI to a gdal dataset
  - All other parameters are passed along unchanged
  Returns: result – (pixel_width_meters, pixel_height_meters)
  Return type: tuple
- natcap.invest.wind_energy.wind_energy.point_to_polygon_distance(poly_ds_uri, point_ds_uri)¶
  Calculates the distances from points in a point geometry shapefile to the nearest polygon from a polygon shapefile. Both datasources must be projected in meters.
  poly_ds_uri - a URI to an OGR polygon geometry datasource projected in meters
  point_ds_uri - a URI to an OGR point geometry datasource projected in meters
  returns - a list of the distances from each point
- natcap.invest.wind_energy.wind_energy.read_csv_wind_data(wind_data_uri, hub_height)¶ Unpack the csv wind data into a dictionary.
Parameters: - wind_data_uri (string) – a path for the csv wind data file with header of: “LONG”,”LATI”,”LAM”,”K”,”REF”
- hub_height (int) – the hub height to use for calculating Weibull parameters and wind energy values
Returns: A dictionary where the keys are lat/long tuples which point to dictionaries that hold wind data at that location.
- natcap.invest.wind_energy.wind_energy.read_csv_wind_parameters(csv_uri, parameter_list)¶
  Construct a dictionary from a csv file given a list of keys in 'parameter_list'. The list of keys corresponds to the parameter names in 'csv_uri', which are represented in the first column of the file.
  csv_uri - a URI to a CSV file where every row is a parameter with the parameter name in the first column followed by the value in the second column
  parameter_list - a List of Strings that represent the parameter names to be found in 'csv_uri'. These Strings will be the keys in the returned dictionary
  returns - a Dictionary where the 'parameter_list' Strings are the keys that have values pulled from 'csv_uri'
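The first-column lookup can be sketched with the csv module; this is an illustrative reimplementation reading from an in-memory file, and the parameter names in the sample are assumptions, not actual model parameters:

```python
import csv
import io

def read_parameters(csv_file, parameter_list):
    # Each row: parameter name in column 0, value in column 1.
    # Only names listed in parameter_list are kept as keys.
    result = {}
    for row in csv.reader(csv_file):
        if row and row[0] in parameter_list:
            result[row[0]] = row[1]
    return result

sample = io.StringIO("air_density,1.225\ncut_in_wspd,4.0\nunused,9\n")
params = read_parameters(sample, ['air_density', 'cut_in_wspd'])
```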
- natcap.invest.wind_energy.wind_energy.wind_data_to_point_shape(dict_data, layer_name, output_uri)¶
  Given a dictionary of the wind data, create a point shapefile that represents this data.
  dict_data - a python dictionary with the wind data, where the keys are tuples of the lat/long coordinates: { (97, 43) : {'LATI':97, 'LONG':43, 'LAM':6.3, 'K':2.7, 'REF':10}, (55, 51) : {'LATI':55, 'LONG':51, 'LAM':6.2, 'K':2.4, 'REF':10}, (73, 47) : {'LATI':73, 'LONG':47, 'LAM':6.5, 'K':2.3, 'REF':10} }
  layer_name - a python string for the name of the layer
  output_uri - a uri for the output destination of the shapefile
  returns - nothing
Module contents¶
Supporting Ecosystem Services¶
Habitat Quality Package¶
Model Entry Point¶
- natcap.invest.habitat_quality.habitat_quality.execute(args)¶ Habitat Quality.
Open files necessary for the habitat_quality model.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation (required)
- landuse_cur_uri (string) – a uri to an input land use/land cover raster (required)
- landuse_fut_uri (string) – a uri to an input land use/land cover raster (optional)
- landuse_bas_uri (string) – a uri to an input land use/land cover raster (optional, but required for rarity calculations)
- threat_folder (string) – a uri to the directory that will contain all threat rasters (required)
- threats_uri (string) – a uri to an input CSV containing data of all the considered threats. Each row is a degradation source and each column a different attribute of the source with the following names: ‘THREAT’,’MAX_DIST’,’WEIGHT’ (required).
- access_uri (string) – a uri to an input polygon shapefile containing data on the relative protection against threats (optional)
- sensitivity_uri (string) – a uri to an input CSV file of LULC types, whether they are considered habitat, and their sensitivity to each threat (required)
- half_saturation_constant (float) – a python float that determines the spread and central tendency of habitat quality scores (required)
- suffix (string) – a python string that will be inserted into all raster uri paths just before the file extension (optional).
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'landuse_cur_uri': 'path/to/landuse_cur_raster',
    'landuse_fut_uri': 'path/to/landuse_fut_raster',
    'landuse_bas_uri': 'path/to/landuse_bas_raster',
    'threat_raster_folder': 'path/to/threat_rasters/',
    'threats_uri': 'path/to/threats_csv',
    'access_uri': 'path/to/access_shapefile',
    'sensitivity_uri': 'path/to/sensitivity_csv',
    'half_saturation_constant': 0.5,
    'suffix': '_results',
}
Returns: none
Habitat Quality¶
InVEST Habitat Quality model
- natcap.invest.habitat_quality.habitat_quality.check_projections(ds_uri_dict, proj_unit)¶
  Check that a group of gdal datasets are projected and that they are projected in a certain unit.
  ds_uri_dict - a dictionary of uris to gdal datasets
  proj_unit - a float that specifies what units the projection should be in. ex: 1.0 is meters.
  returns - False if one of the datasets is not projected or not in the correct projection type, otherwise returns True if datasets are properly projected
- natcap.invest.habitat_quality.habitat_quality.execute(args) Habitat Quality.
Open files necessary for the habitat_quality model.
Parameters: - workspace_dir (string) – a uri to the directory that will write output and other temporary files during calculation (required)
- landuse_cur_uri (string) – a uri to an input land use/land cover raster (required)
- landuse_fut_uri (string) – a uri to an input land use/land cover raster (optional)
- landuse_bas_uri (string) – a uri to an input land use/land cover raster (optional, but required for rarity calculations)
- threat_folder (string) – a uri to the directory that will contain all threat rasters (required)
- threats_uri (string) – a uri to an input CSV containing data of all the considered threats. Each row is a degradation source and each column a different attribute of the source with the following names: ‘THREAT’,’MAX_DIST’,’WEIGHT’ (required).
- access_uri (string) – a uri to an input polygon shapefile containing data on the relative protection against threats (optional)
- sensitivity_uri (string) – a uri to an input CSV file of LULC types, whether they are considered habitat, and their sensitivity to each threat (required)
- half_saturation_constant (float) – a python float that determines the spread and central tendency of habitat quality scores (required)
- suffix (string) – a python string that will be inserted into all raster uri paths just before the file extension (optional).
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'landuse_cur_uri': 'path/to/landuse_cur_raster',
    'landuse_fut_uri': 'path/to/landuse_fut_raster',
    'landuse_bas_uri': 'path/to/landuse_bas_raster',
    'threat_raster_folder': 'path/to/threat_rasters/',
    'threats_uri': 'path/to/threats_csv',
    'access_uri': 'path/to/access_shapefile',
    'sensitivity_uri': 'path/to/sensitivity_csv',
    'half_saturation_constant': 0.5,
    'suffix': '_results',
}
Returns: none
- natcap.invest.habitat_quality.habitat_quality.make_dictionary_from_csv(csv_uri, key_field)¶
  Make a basic dictionary representing a CSV file, where the keys are a unique field from the CSV file and the values are dictionaries representing each row.
  csv_uri - a string for the path to the csv file
  key_field - a string representing which field from the csv file is to be used as the key in the dictionary
  returns - a python dictionary
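csv.DictReader yields the row dictionaries directly, so the described behavior can be sketched in a few lines; this is an illustrative reimplementation, not the package's source:

```python
import csv
import io

def make_dictionary_from_csv_file(csv_file, key_field):
    # Each row becomes a dict keyed by column name; the chosen
    # field's value becomes the outer key.
    return {row[key_field]: row for row in csv.DictReader(csv_file)}

sample = io.StringIO("LULC,NAME\n1,Residential\n11,Urban\n")
table = make_dictionary_from_csv_file(sample, 'LULC')
```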
- natcap.invest.habitat_quality.habitat_quality.make_linear_decay_kernel_uri(max_distance, kernel_uri)¶
- natcap.invest.habitat_quality.habitat_quality.map_raster_to_dict_values(key_raster_uri, out_uri, attr_dict, field, out_nodata, raise_error)¶
  Creates a new raster from 'key_raster' where the pixel values from 'key_raster' are the keys to the dictionary 'attr_dict'. The values corresponding to those keys are what is written to the new raster. If a value from 'key_raster' does not appear as a key in 'attr_dict', an Exception is raised or 'out_nodata' is returned, depending on 'raise_error'.
  key_raster_uri - a GDAL raster uri dataset whose pixel values relate to the keys in 'attr_dict'
  out_uri - a string for the output path of the created raster
  attr_dict - a dictionary representing a table of values we are interested in making into a raster
  field - a string of which field in the table or key in the dictionary to use as the new raster pixel values
  out_nodata - a floating point value that is the nodata value
  raise_error - a string that decides how to handle the case where the value from 'key_raster' is not found in 'attr_dict'. If 'raise_error' is 'values_required', raise an Exception; if 'none', return 'out_nodata'
  returns - a GDAL raster, or raises an Exception if 'raise_error' is 'values_required' and a value from 'key_raster' is not a key in 'attr_dict'
- natcap.invest.habitat_quality.habitat_quality.raster_pixel_count(dataset_uri)¶
  Determine how many of each unique pixel value lie in the dataset.
  dataset_uri - a GDAL raster dataset
  returns - a dictionary whose keys are the unique pixel values and whose values are the number of occurrences
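With the raster loaded as an array, the count reduces to numpy's unique; a sketch of the counting step only (the real function reads the GDAL dataset itself):

```python
import numpy as np

def pixel_count(array):
    # Count occurrences of each unique pixel value.
    values, counts = np.unique(array, return_counts=True)
    return dict(zip(values.tolist(), counts.tolist()))

lulc = np.array([[1, 1, 2], [2, 2, 3]])
counts = pixel_count(lulc)
```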
- natcap.invest.habitat_quality.habitat_quality.resolve_ambiguous_raster_path(uri, raise_error=True)¶
  Get the real uri for a raster when we don't know the extension with which the raster may be represented.
  uri - a python string of the file path that includes the name of the file but not its extension
  raise_error - a Boolean that indicates whether the function should raise an error if a raster file could not be opened
  returns - the resolved uri to the raster
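One way to resolve an extension-less path is to probe candidate extensions on disk; a sketch in which the extension list is an assumption (the real function tries to open the candidates with GDAL):

```python
import os

def resolve_raster_path(uri, raise_error=True):
    # Probe a few common raster extensions; this list is illustrative.
    for ext in ('', '.tif', '.img'):
        candidate = uri + ext
        if os.path.exists(candidate):
            return candidate
    if raise_error:
        raise IOError('no raster found for %s' % uri)
    return None
```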
- natcap.invest.habitat_quality.habitat_quality.threat_names_match(threat_dict, sens_dict, prefix)¶
  Check that the threat names in the threat table match the columns in the sensitivity table that represent the sensitivity of each threat on a lulc.
  threat_dict - a dictionary representing the threat table:
    {'crp':{'THREAT':'crp','MAX_DIST':'8.0','WEIGHT':'0.7'},
     'urb':{'THREAT':'urb','MAX_DIST':'5.0','WEIGHT':'0.3'}, ... }
  sens_dict - a dictionary representing the sensitivity table:
    {'1':{'LULC':'1', 'NAME':'Residential', 'HABITAT':'1', 'L_crp':'0.4', 'L_urb':'0.45'...},
     '11':{'LULC':'11', 'NAME':'Urban', 'HABITAT':'1', 'L_crp':'0.6', 'L_urb':'0.3'...},
     ...}
  prefix - a string that specifies the prefix to the threat names that is found in the sensitivity table
  returns - False if there is a mismatch in threat names or True if everything passes
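The check itself amounts to a membership test over the sensitivity columns; an illustrative sketch using the example tables above (not the package's source):

```python
def threats_match(threat_dict, sens_dict, prefix):
    # Every threat must appear as prefix+name in every sensitivity row.
    for row in sens_dict.values():
        for threat in threat_dict:
            if prefix + threat not in row:
                return False
    return True

threats = {'crp': {'THREAT': 'crp'}, 'urb': {'THREAT': 'urb'}}
sens = {
    '1': {'LULC': '1', 'L_crp': '0.4', 'L_urb': '0.45'},
    '11': {'LULC': '11', 'L_crp': '0.6', 'L_urb': '0.3'},
}
ok = threats_match(threats, sens, 'L_')
```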
Module contents¶
Habitat Risk Assessment Package¶
Model Entry Point¶
- natcap.invest.habitat_risk_assessment.hra.execute(args)¶ Habitat Risk Assessment.
This function will prepare files passed from the UI to be sent on to the hra_core module.
All inputs are required.
Parameters: - workspace_dir (string) – The location of the directory into which intermediate and output files should be placed.
- csv_uri (string) – The location of the directory containing the CSV files of habitat, stressor, and overlap ratings. Will also contain a .txt JSON file that has directory locations (potentially) for habitats, species, stressors, and criteria.
- grid_size (int) – Represents the desired pixel dimensions of both intermediate and output rasters.
- risk_eq (string) – A string identifying the equation that should be used in calculating risk scores for each H-S overlap cell. This will be either ‘Euclidean’ or ‘Multiplicative’.
- decay_eq (string) – A string identifying the equation that should be used in calculating the decay of stressor buffer influence. This can be ‘None’, ‘Linear’, or ‘Exponential’.
- max_rating (int) – An int representing the highest potential value that should be represented in rating, data quality, or weight in the CSV table.
- max_stress (int) – This is the highest score that is used to rate a criteria within this model run. These values would be placed within the Rating column of the habitat, species, and stressor CSVs.
- aoi_tables (string) – A shapefile containing one or more planning regions for a given model. This will be used to get the average risk value over a larger area. Each potential region MUST contain the attribute “name” as a way of identifying each individual shape.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'csv_uri': 'path/to/csv',
    'grid_size': 200,
    'risk_eq': 'Euclidean',
    'decay_eq': 'None',
    'max_rating': 3,
    'max_stress': 4,
    'aoi_tables': 'path/to/shapefile',
}
Returns: None
Habitat Risk Assessment¶
This will be the preparatory module for HRA. It will take all unprocessed and pre-processed data from the UI and pass it to the hra_core module.
- exception natcap.invest.habitat_risk_assessment.hra.DQWeightNotFound¶
  Bases: exceptions.Exception
  An exception to be passed if there is a shapefile within the spatial criteria directory, but no corresponding data quality and weight to support it. This would likely indicate that the user is trying to run HRA without having added the criteria name into hra_preprocessor properly.
- exception natcap.invest.habitat_risk_assessment.hra.ImproperAOIAttributeName¶
  Bases: exceptions.Exception
  An exception to pass in hra non core if the AOI zone files do not contain the proper attribute name for individual identification. The attribute should be named 'name', and must exist for every shape in the AOI layer.
- exception natcap.invest.habitat_risk_assessment.hra.ImproperCriteriaAttributeName¶
  Bases: exceptions.Exception
  An exception to pass in hra non core if the criteria provided by the user for use in spatially explicit rating do not contain the proper attribute name. The attribute should be named 'RATING', and must exist for every shape in every layer provided.
- natcap.invest.habitat_risk_assessment.hra.add_crit_rasters(dir, crit_dict, habitats, h_s_e, h_s_c, grid_size)¶
  This will take in the dictionary of criteria shapefiles, rasterize them, and add the URI of that raster to the proper subdictionary within h/s/h-s.
  Input:
    dir - Directory into which the rasterized criteria shapefiles should be placed.
    crit_dict - A multi-level dictionary of criteria shapefiles. The outermost keys refer to the dictionary they belong with. The structure will be as follows:
      {'h':
        {'HabA':
          {'CriteriaName': "Shapefile Datasource URI", ...}, ...
        },
       'h_s_c':
        {('HabA', 'Stress1'):
          {'CriteriaName': "Shapefile Datasource URI", ...}, ...
        },
       'h_s_e':
        {('HabA', 'Stress1'):
          {'CriteriaName': "Shapefile Datasource URI", ...}, ...
        }
      }
    h_s_c - A multi-level structure which holds numerical criteria ratings, as well as weights and data qualities for criteria rasters. h-s will hold only criteria that apply to habitat and stressor overlaps. The structure's outermost keys are tuples of (Habitat, Stressor) names. The overall structure will be as pictured:
      {(Habitat A, Stressor 1):
        {'Crit_Ratings':
          {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
          },
         'Crit_Rasters':
          {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
          },
         'DS': "HabitatStressor Raster URI"
        }
      }
    habitats - Similar to the h-s dictionary, a multi-level dictionary containing all habitat-specific criteria ratings and raster information. The outermost keys are habitat names. Within the dictionary, habitats['habName']['DS'] will be the URI of the raster of that habitat.
    h_s_e - Similar to the h-s dictionary, a multi-level dictionary containing all stressor-specific criteria ratings and raster information. The outermost keys are tuples of (Habitat, Stressor) names.
    grid_size - An int representing the desired pixel size for the criteria rasters.
  Output:
    A set of rasterized criteria files. The criteria shapefiles will be burned based on their 'Rating' attribute. These will be placed in the 'dir' folder.
    An appended version of habitats, h_s_e, and h_s_c which will include entries for criteria rasters at 'Rating' in the appropriate dictionary. 'Rating' will map to the URI of the corresponding criteria dataset.
  Returns nothing.
- natcap.invest.habitat_risk_assessment.hra.add_hab_rasters(dir, habitats, hab_list, grid_size, grid_path)¶
  Want to get all shapefiles within any directories in hab_list, and burn them to a raster.
  Input:
    dir - Directory into which all completed habitat rasters should be placed.
    habitats - A multi-level dictionary containing all habitat and species-specific criteria ratings and rasters.
    hab_list - File URIs for all shapefiles in the habitats dir, species dir, or both.
    grid_size - Int representing the desired pixel dimensions of both intermediate and output rasters.
    grid_path - A string for a raster file path on disk. Used as a universal base raster onto which to burn vectors when creating other rasters.
  Output:
    A modified version of habitats, into which we have placed the URI to the rasterized version of the habitat shapefile. It will be placed at habitats[habitatName]['DS'].
- natcap.invest.habitat_risk_assessment.hra.calc_max_rating(risk_eq, max_rating)¶
  Should take in the max possible risk, and return the highest possible per pixel risk that would be seen on an H-S raster pixel.
  Input:
    risk_eq - The equation that will be used to determine risk.
    max_rating - The highest possible value that could be given as a criteria rating, data quality, or weight.
  Returns: An int representing the highest possible risk value for any given h-s overlap raster.
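Under the two risk equations named in execute ('Euclidean' and 'Multiplicative'), the maximum per-pixel risk can be sketched as follows. The exact formulas here are assumptions based on the standard HRA risk equations (Euclidean distance of the exposure/consequence pair from the minimum score of 1, or a simple product), not a quote of the package's source:

```python
import math

def max_pixel_risk(risk_eq, max_rating):
    if risk_eq == 'Euclidean':
        # Exposure and consequence both at max_rating, measured from (1, 1).
        return math.sqrt(2 * (max_rating - 1) ** 2)
    if risk_eq == 'Multiplicative':
        return max_rating ** 2
    raise ValueError("risk_eq must be 'Euclidean' or 'Multiplicative'")
```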
- natcap.invest.habitat_risk_assessment.hra.execute(args) Habitat Risk Assessment.
This function will prepare files passed from the UI to be sent on to the hra_core module.
All inputs are required.
Parameters: - workspace_dir (string) – The location of the directory into which intermediate and output files should be placed.
- csv_uri (string) – The location of the directory containing the CSV files of habitat, stressor, and overlap ratings. Will also contain a .txt JSON file that has directory locations (potentially) for habitats, species, stressors, and criteria.
- grid_size (int) – Represents the desired pixel dimensions of both intermediate and output rasters.
- risk_eq (string) – A string identifying the equation that should be used in calculating risk scores for each H-S overlap cell. This will be either ‘Euclidean’ or ‘Multiplicative’.
- decay_eq (string) – A string identifying the equation that should be used in calculating the decay of stressor buffer influence. This can be ‘None’, ‘Linear’, or ‘Exponential’.
- max_rating (int) – An int representing the highest potential value that should be represented in rating, data quality, or weight in the CSV table.
- max_stress (int) – This is the highest score that is used to rate a criteria within this model run. These values would be placed within the Rating column of the habitat, species, and stressor CSVs.
- aoi_tables (string) – A shapefile containing one or more planning regions for a given model. This will be used to get the average risk value over a larger area. Each potential region MUST contain the attribute “name” as a way of identifying each individual shape.
Example Args Dictionary:
{
    'workspace_dir': 'path/to/workspace_dir',
    'csv_uri': 'path/to/csv',
    'grid_size': 200,
    'risk_eq': 'Euclidean',
    'decay_eq': 'None',
    'max_rating': 3,
    'max_stress': 4,
    'aoi_tables': 'path/to/shapefile',
}
Returns: None
- natcap.invest.habitat_risk_assessment.hra.listdir(path)¶
  A replacement for the standard os.listdir which, instead of returning only the filename, will include the entire path. This will use os as a base, then just lambda transform the whole list.
  Input:
    path - The location container from which we want to gather all files.
  Returns: A list of full URIs contained within 'path'.
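The wrapper amounts to joining each entry with the directory path; an illustrative sketch:

```python
import os

def listdir_full(path):
    # os.listdir returns bare names; prepend the directory to each.
    return [os.path.join(path, entry) for entry in os.listdir(path)]
```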
- natcap.invest.habitat_risk_assessment.hra.make_add_overlap_rasters(dir, habitats, stress_dict, h_s_c, h_s_e, grid_size)¶
  For every pair in h_s_c and h_s_e, want to get the corresponding habitat and stressor raster, and return the overlap of the two. Should add that as the 'DS' entry within each (h, s) pair key in h_s_e and h_s_c.
  Input:
    dir - Directory into which all completed h-s overlap files should be placed.
    habitats - The habitats criteria dictionary, which will contain a dict[Habitat]['DS']. The structure will be as follows:
      {Habitat A:
        {'Crit_Ratings':
          {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
          },
         'Crit_Rasters':
          {'CritName':
            {'DS': "CritName Raster URI", 'Weight': 1.0, 'DQ': 1.0}
          },
         'DS': "A Dataset URI"
        }
      }
    stress_dict - A dictionary containing all stressor DS's. The key will be the name of the stressor, and it will map to the URI of the stressor DS.
    h_s_c - A multi-level structure which holds numerical criteria ratings, as well as weights and data qualities for criteria rasters. h-s will hold criteria that apply to habitat and stressor overlaps, and be applied to the consequence score. The structure's outermost keys are tuples of (Habitat, Stressor) names. The overall structure will be as pictured:
      {(Habitat A, Stressor 1):
        {'Crit_Ratings':
          {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
          },
         'Crit_Rasters':
          {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
          }
        }
      }
    h_s_e - Similar to the h_s dictionary, a multi-level dictionary containing habitat-stressor-specific criteria ratings and raster information which should be applied to the exposure score. The outermost keys are tuples of (Habitat, Stressor) names.
    grid_size - The desired pixel size for the rasters that will be created for each habitat and stressor.
  Output:
    Edited versions of h_s_e and h_s_c, each of which contains an overlap DS at dict[(Hab, Stress)]['DS']. That key will map to the URI for the corresponding raster DS.
  Returns nothing.
- natcap.invest.habitat_risk_assessment.hra.make_exp_decay_array(dist_trans_uri, out_uri, buff, nodata)¶
  Should create a raster where the area around the land is a function of exponential decay from the land values.
  Input:
    dist_trans_uri - uri to a gdal raster where each pixel value represents the distance to the closest piece of land.
    out_uri - uri for the gdal raster output with the buffered outputs
    buff - The distance surrounding the land that the user desires to buffer with exponentially decaying values.
    nodata - The value which should be placed into anything not land or buffer area.
  Returns: Nothing
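The buffering step can be pictured with numpy. This sketch uses exp(-distance/buff) as the decay curve, which is an assumption for illustration; the model's actual decay constant may differ:

```python
import numpy as np

def exp_decay_buffer(dist, buff, nodata):
    # dist: distance-to-land per pixel (land itself is 0).
    out = np.full(dist.shape, float(nodata))
    out[dist == 0] = 1.0                           # land keeps full weight
    in_buff = (dist > 0) & (dist <= buff)
    out[in_buff] = np.exp(-dist[in_buff] / buff)   # decay inside the buffer
    return out

dist = np.array([0.0, 1.0, 2.0, 5.0])
decayed = exp_decay_buffer(dist, buff=2.0, nodata=-1.0)
```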
- natcap.invest.habitat_risk_assessment.hra.make_lin_decay_array(dist_trans_uri, out_uri, buff, nodata)¶
  Should create a raster where the area around land is a function of linear decay from the values representing the land.
  Input:
    dist_trans_uri - uri to a gdal raster where each pixel value represents the distance to the closest piece of land.
    out_uri - uri for the gdal raster output with the buffered outputs
    buff - The distance surrounding the land that the user desires to buffer with linearly decaying values.
    nodata - The value which should be placed into anything not land or buffer area.
  Returns: Nothing
-
natcap.invest.habitat_risk_assessment.hra.
make_no_decay_array
(dist_trans_uri, out_uri, buff, nodata)¶ Should create a raster where the buffer zone surrounding the land is buffered with the same values as the land, essentially creating an equally weighted larger landmass.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents the distance to the closest piece of land.
- out_uri- uri for the gdal raster output with the buffered outputs.
- buff- The distance surrounding the land that the user desires to buffer with land data values.
- nodata- The value which should be placed into anything not land or buffer area.
Returns: Nothing
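The three buffer-decay modes above (exponential, linear, and none) can be sketched in plain numpy, operating on an in-memory distance array rather than the gdal distance-transform raster the real functions read. The function name and the 1e-6 floor on the exponential curve are illustrative assumptions, not part of the module's API:

```python
import numpy as np

def buffer_decay(dist, buff, nodata, mode="exp"):
    """Sketch of the three decay modes: values are 1.0 on land
    (distance 0) and fall off across the buffer zone of width `buff`.

    dist : 2-D array of distances to the nearest land pixel (0 == land)
    buff : buffer width, in the same units as `dist`
    mode : 'exp', 'lin', or 'none'
    """
    out = np.full(dist.shape, nodata, dtype=float)
    land = dist == 0
    in_buff = (dist > 0) & (dist <= buff)
    out[land] = 1.0
    if mode == "none":
        # buffer weighted like the land itself (equally weighted landmass)
        out[in_buff] = 1.0
    elif mode == "lin":
        # straight line from 1.0 at the land edge to 0.0 at the buffer edge
        out[in_buff] = 1.0 - dist[in_buff] / buff
    elif mode == "exp":
        # exponential fall-off from 1.0 toward ~0 (1e-6) at the buffer edge
        out[in_buff] = np.exp(np.log(1e-6) / buff * dist[in_buff])
    return out
```

For the zero-buffer case handled by make_zero_buff_decay_array, only the land pixels themselves would receive a value and everything else would be nodata.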
-
natcap.invest.habitat_risk_assessment.hra.
make_stress_rasters
(dir, stress_list, grid_size, decay_eq, buffer_dict, grid_path)¶ Creates a simple dictionary that will map stressor name to a rasterized version of that stressor shapefile. The key will be a string containing the stressor name, and the value will be the URI of the rasterized shapefile.
- Input:
- dir- The directory into which completed shapefiles should be placed.
- stress_list- A list containing stressor shapefile URIs for all stressors desired within the given model run.
- grid_size- The pixel size desired for the rasters produced based on the shapefiles.
- decay_eq- A string identifying the equation that should be used in calculating the decay of stressor buffer influence.
- buffer_dict- A dictionary that holds desired buffer sizes for each stressor. The key is the name of the stressor, and the value is an int which corresponds to the desired buffer size.
- grid_path- A string for a raster file path on disk. Used as a universal base raster onto which to burn vectors.
- Output:
- A potentially buffered and rasterized version of each stressor shapefile provided, which will be stored in ‘dir’.
Returns: stress_dict- A simple dictionary which maps a string key of the stressor name to the URI for the output raster.
-
natcap.invest.habitat_risk_assessment.hra.
make_zero_buff_decay_array
(dist_trans_uri, out_uri, nodata)¶ Creates a raster in the case of a zero buffer width, where all we should have is land and nodata values.
- Input:
- dist_trans_uri- uri to a gdal raster where each pixel value represents the distance to the closest piece of land.
- out_uri- uri for the gdal raster output with the buffered outputs.
- nodata- The value which should be placed into anything that is not land.
Returns: Nothing
-
natcap.invest.habitat_risk_assessment.hra.
merge_bounding_boxes
(bb1, bb2, mode)¶ Merge two bounding boxes through union or intersection.
Parameters: - bb1 (list) – [upper_left_x, upper_left_y, lower_right_x, lower_right_y]
- bb2 (list) – [upper_left_x, upper_left_y, lower_right_x, lower_right_y]
- mode (string) – Either ‘union’ or ‘intersection’; selects whether the boxes are merged by union or by intersection.
Returns: A list representing the merged bounding box.
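A minimal pure-Python sketch of the merge, assuming the [upper_left_x, upper_left_y, lower_right_x, lower_right_y] layout above and a north-up coordinate system (so upper_left_y > lower_right_y); not the module's actual implementation:

```python
def merge_bounding_boxes(bb1, bb2, mode):
    """Merge two [ulx, uly, lrx, lry] bounding boxes.

    mode is 'union' (smallest box covering both inputs) or
    'intersection' (the overlap region of the two).
    """
    if mode == "union":
        return [min(bb1[0], bb2[0]), max(bb1[1], bb2[1]),
                max(bb1[2], bb2[2]), min(bb1[3], bb2[3])]
    if mode == "intersection":
        return [max(bb1[0], bb2[0]), min(bb1[1], bb2[1]),
                min(bb1[2], bb2[2]), max(bb1[3], bb2[3])]
    raise ValueError("mode must be 'union' or 'intersection'")
```

For example, merging [0, 10, 5, 0] and [2, 8, 9, -2] by union gives [0, 10, 9, -2], while intersection gives [2, 8, 5, 0].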
-
natcap.invest.habitat_risk_assessment.hra.
unpack_over_dict
(csv_uri, args)¶ This throws the dictionary coming from the pre-processor into the equivalent dictionaries in args so that they can be processed before being passed into the core module.
- Input:
- csv_uri- Reference to the folder location of the CSV tables containing all habitat and stressor rating information.
- args- The dictionary into which the individual ratings dictionaries should be placed.
- Output:
A modified args dictionary containing dictionary versions of the CSV tables located in csv_uri. The dictionaries should be of the forms as follows.
- h_s_c- A multi-level structure which will hold all criteria ratings, both numerical and raster, that apply to habitat and stressor overlaps. Its keys are tuples of (Habitat, Stressor) names, each mapping to an inner dictionary with 2 outer keys containing numeric-only criteria and raster-based criteria. At this time, we should only have two entries in a criteria raster entry, since we have yet to add the rasterized versions of the criteria.
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {‘Weight’: 1.0, ‘DQ’: 1.0}
},
}
}
- habitats- Similar to the h-s dictionary, a multi-level dictionary containing all habitat-specific criteria ratings and weights and data quality for the rasters.
- h_s_e- Similar to the h-s dictionary, a multi-level dictionary containing habitat-stressor-specific criteria ratings and weights and data quality for the rasters.
Returns nothing.
Habitat Risk Assessment Core¶
This is the core module for HRA functionality. This will perform all HRA calcs, and return the appropriate outputs.
-
natcap.invest.habitat_risk_assessment.hra_core.
aggregate_multi_rasters_uri
(aoi_rast_uri, rast_uris, rast_labels, ignore_value_list=[])¶ Will take a stack of rasters and an AOI, and return a dictionary containing the number of overlap pixels, and the value of those pixels for each overlap of raster and AOI.
- Input:
- aoi_rast_uri- The location of an AOI raster which MUST have individual ID numbers with the attribute name ‘BURN_ID’ for each feature on the map.
- rast_uris- List of locations of the rasters which should be overlapped with the AOI.
- rast_labels- Names for each raster layer that will be retrievable from the output dictionary.
- ignore_value_list- Optional argument that provides a list of values which should be ignored if they crop up for a pixel value of one of the layers.
Returns: layer_overlap_info- {AOI Data Value 1: {rast_label: [# of pix, pix value], rast_label: [200, 2567.97], ...}, ...}
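The aggregation can be illustrated with in-memory numpy arrays standing in for the AOI and value rasters; the helper name and array-based interface are assumptions, since the real function works on raster URIs:

```python
import numpy as np

def aggregate_by_zone(zone_arr, value_arrs, labels, ignore_values=()):
    """Sketch of per-zone aggregation: for each zone id in `zone_arr`
    (standing in for the AOI's BURN_ID raster), count the overlapping
    pixels and sum their values for every value raster.

    Returns {zone_id: {label: [pixel_count, pixel_sum], ...}, ...}
    """
    info = {}
    for zone in np.unique(zone_arr):
        info[zone] = {}
        mask = zone_arr == zone
        for label, arr in zip(labels, value_arrs):
            vals = arr[mask]
            for ig in ignore_values:
                vals = vals[vals != ig]  # drop ignored pixel values
            info[zone][label] = [int(vals.size), float(vals.sum())]
    return info
```

With a 2x2 zone raster of ids {1, 2} and one risk raster, each zone's entry is its pixel count and pixel sum after ignored values are dropped.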
-
natcap.invest.habitat_risk_assessment.hra_core.
calc_C_raster
(out_uri, h_s_list, h_s_denom_dict, h_list, h_denom_dict, h_uri, h_s_uri)¶ Should return a raster burned with a ‘C’ raster that is a combination of all the rasters passed in within the list, divided by the denominator.
- Input:
- out_uri- The location to which the calculated C raster should be burned.
- h_s_list- A list of rasters burned with the equation r/dq*w for every criteria applicable for that h, s pair.
- h_s_denom_dict- A dictionary containing criteria names applicable to this particular h,s pair. Each criteria string name maps to a double representing the denominator for that raster, using the equation 1/dq*w.
- h_list- A list of rasters burned with the equation r/dq*w for every criteria applicable for that h.
- h_denom_dict- A dictionary containing criteria names applicable to this particular habitat. Each criteria string name maps to a double representing the denominator for that raster, using the equation 1/dq*w.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
calc_E_raster
(out_uri, h_s_list, denom_dict, h_s_base_uri, h_base_uri)¶ Should return a raster burned with an ‘E’ raster that is a combination of all the rasters passed in within the list, divided by the denominator.
- Input:
- out_uri- The location to which the E raster should be burned.
- h_s_list- A list of rasters burned with the equation r/dq*w for every criteria applicable for that h, s pair.
- denom_dict- A double representing the sum total of all applicable criteria, using the equation 1/dq*w.
Returns nothing.
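Both calc_E_raster and calc_C_raster follow the same pattern: sum the pre-burned r/dq*w numerator rasters, then divide by the summed 1/dq*w denominators. A minimal numpy sketch of that pattern (the function name is hypothetical):

```python
import numpy as np

def combine_criteria(numerator_arrs, denom_dict):
    """Sketch of the E/C combination: sum the pre-burned r/(dq*w)
    numerator arrays and divide by the total of the 1/(dq*w)
    denominators, giving the weighted-average score per pixel.

    numerator_arrs : list of same-shape arrays, one per criterion
    denom_dict     : {'CritName': 1/(dq*w) double, ...}
    """
    total_num = np.sum(numerator_arrs, axis=0)   # pixel-wise numerator sum
    total_denom = sum(denom_dict.values())        # scalar denominator sum
    return total_num / total_denom
```

For two criteria with ratings 2 and 3 and dq = w = 1, every pixel gets (2 + 3) / (1 + 1) = 2.5, the plain average, as expected.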
-
natcap.invest.habitat_risk_assessment.hra_core.
copy_raster
(in_uri, out_uri)¶ Quick function that will copy the raster at in_uri into out_uri.
-
natcap.invest.habitat_risk_assessment.hra_core.
execute
(args)¶ This provides the main calculation functionality of the HRA model. It will call all parts necessary for calculation of final outputs.
- Inputs:
- args- Dictionary containing everything that hra_core will need to complete the rest of the model run. It will contain the following.
- args[‘workspace_dir’]- Directory in which all data resides. Output and intermediate folders will be subfolders of this one.
- args[‘h_s_c’]- The same as intermediate/’h-s’, but with the addition
of a 3rd key ‘DS’ to the outer dictionary layer. This will map to a dataset URI that shows the potentially buffered overlap between the habitat and stressor. Additionally, any raster criteria will be placed in their criteria name subdictionary. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {
- ‘DS’: “CritName Raster URI”, ‘Weight’: 1.0, ‘DQ’: 1.0
}
},
‘DS’: “A-1 Dataset URI” }
}
- args[‘habitats’]- Similar to the h-s dictionary, a multi-level dictionary containing all habitat-specific criteria ratings and rasters. In this case, however, the outermost key is by habitat name, and habitats[‘habitatName’][‘DS’] points to the rasterized habitat shapefile URI provided by the user.
- args[‘h_s_e’]- Similar to the h_s_c dictionary, a multi-level dictionary containing habitat-stressor-specific criteria ratings and shapes. The same as intermediate/’h-s’, but with the addition of a 3rd key ‘DS’ to the outer dictionary layer. This will map to a dataset URI that shows the potentially buffered overlap between the habitat and stressor. Additionally, any raster criteria will be placed in their criteria name subdictionary.
- args[‘risk_eq’]- String which identifies the equation to be used for calculating risk. The core module should check for possibilities, and send to a different function when deciding R dependent on this.
- args[‘max_risk’]- The highest possible risk value for any given pairing of habitat and stressor.
- args[‘max_stress’]- The largest number of stressors that the user believes will overlap. This will be used to get an accurate estimate of risk.
- args[‘aoi_tables’]- May or may not exist within this model run, but if it does, the user desires to have the average risk values by stressor/habitat using E/C axes for each feature in the AOI layer specified by ‘aoi_tables’. If the risk_eq is ‘Euclidean’, this will create risk plots, otherwise it will just create the standard HTML table for either ‘Euclidean’ or ‘Multiplicative.’
- args[‘aoi_key’]- The form of the word ‘Name’ that the aoi layer uses for this particular model run.
- args[‘warnings’]- A dictionary containing items which need to be acted upon by hra_core. These will be split into two categories. ‘print’ contains statements which will be printed using logger.warn() at the end of a run. ‘unbuff’ is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster.
- {‘print’: [‘This is a warning to the user.’, ‘This is another.’],
- ‘unbuff’: [(HabA, Stress1), (HabC, Stress2)]
}
- Outputs:
--Intermediate-- These should be the temp risk and criteria files needed for the final output calcs.
--Output--
- /output/maps/recov_potent_H[habitatname].tif- Raster layer depicting the recovery potential of each individual habitat.
- /output/maps/cum_risk_H[habitatname]- Raster layer depicting the cumulative risk for all stressors in a cell for the given habitat.
- /output/maps/ecosys_risk- Raster layer that depicts the sum of all cumulative risk scores of all habitats for that cell.
- /output/maps/[habitatname]_HIGH_RISK- A raster-shaped shapefile containing only the “high risk” areas of each habitat, defined as being above a certain risk threshold.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_aoi_tables
(out_dir, aoi_pairs)¶ This function will take in a shapefile containing multiple AOIs, and output a table containing values averaged over those areas.
- Input:
- out_dir- The directory into which the completed HTML tables should be placed.
- aoi_pairs- Replacement for avgs_dict, holds all the averaged values on a H, S basis.
- {‘AOIName’:
}
- Output:
- A set of HTML tables which will contain averaged values of E, C, and risk for each H, S pair within each AOI. Additionally, the tables will contain a column for risk %, which is the averaged risk value in that area divided by the total potential risk for a given pixel in the map.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_ecosys_risk_raster
(dir, h_dict)¶ This will make the compiled raster for all habitats within the ecosystem. The ecosystem raster will be a direct sum of each of the included habitat rasters.
- Input:
- dir- The directory in which all completed rasters should be placed.
- h_dict- A dictionary of raster dataset URIs which can be combined to create an overall ecosystem raster. The key is the habitat name, and the value is the dataset URI.
{‘Habitat A’: “Overall Habitat A Risk Map URI”, ‘Habitat B’: “Overall Habitat B Risk URI”
...}
- Output:
- ecosys_risk.tif- An overall risk raster for the ecosystem. It will be placed in the dir folder.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_hab_risk_raster
(dir, risk_dict)¶ This will create a combined raster for all habitat-stressor pairings within one habitat. It should return a list of open rasters that correspond to all habitats within the model.
- Input:
- dir- The directory in which all completed habitat rasters should be placed.
- risk_dict- A dictionary containing the risk rasters for each pairing of habitat and stressor. The key is the tuple of (habitat, stressor), and the value is the raster dataset URI corresponding to that combination.
{(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
- Output:
- A cumulative risk raster for every habitat included within the model.
Returns: - h_rasters- A dictionary containing habitat names mapped to the dataset URI of the overarching habitat risk map for this model run.
{‘Habitat A’: “Overall Habitat A Risk Map URI”, ‘Habitat B’: “Overall Habitat B Risk URI”
...}
- h_s_rasters- A dictionary that maps a habitat name to the risk rasters for each of the applicable stressors.
- {‘HabA’: [“A-1 Risk Raster URI”, “A-2 Risk Raster URI”, ...],
- ‘HabB’: [“B-1 Risk Raster URI”, “B-2 Risk Raster URI”, ...], ...
}
-
natcap.invest.habitat_risk_assessment.hra_core.
make_recov_potent_raster
(dir, crit_lists, denoms)¶ This will do the same h-s calculation as used for the individual E/C calculations, but instead will use r/dq as the equation for each criteria. The full equation will be:
SUM HAB CRITS( 1/dq )
- Input:
- dir- Directory in which the completed raster files should be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA):
- [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”, “raster 1 URI”],
- ...
},
- ‘h_s_e’: { (hab1, stressA): [“indiv num raster URI”]
- }
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the combined denominator for a given H-S overlap. Once all of the rasters are combined, each H-S raster can be divided by this.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): {
- ‘CritName’: 2.0, ...},
- (hab1, stressB): {‘CritName’: 1.3, ...}
- },
- ‘h’: { hab1: {‘CritName’: 1.3, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 1.3, ...}
- }
}
- ‘Recovery’: { hab1: {‘critname’: 1.6, ...}
- hab2: ...
}
}
- Output:
- A raster file for each of the habitats included in the model displaying the recovery potential within each potential grid cell.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_euc
(base_uri, e_uri, c_uri, risk_uri)¶ Combines the E and C rasters according to the euclidean combination equation.
- Input:
- base_uri- The h-s overlap raster, including potentially decayed values from the stressor layer.
- e_uri- The r/dq*w burned raster for all stressor-specific criteria in this model run.
- c_uri- The r/dq*w burned raster for all habitat-specific and habitat-stressor-specific criteria in this model run.
- risk_uri- The file path to which we should be burning our new raster.
Returns a raster representing the Euclidean combination of the E raster, the C raster, and the base raster. The equation will be sqrt((C-1)^2 + (E-1)^2).
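The Euclidean equation above is easy to sketch with numpy; how the base overlap raster gates the result here (pixels without overlap get 0) is an assumption, as is the function name:

```python
import numpy as np

def risk_euclidean(base, e, c):
    """Sketch of the Euclidean risk equation from the docstring:
    sqrt((C-1)^2 + (E-1)^2), applied only where the base
    habitat-stressor overlap array has data (base > 0 here)."""
    risk = np.sqrt((c - 1.0) ** 2 + (e - 1.0) ** 2)
    return np.where(base > 0, risk, 0.0)
```

For a pixel with E = 4 and C = 5 the result is sqrt(16 + 9) = 5; the multiplicative variant used by make_risk_mult would instead multiply E, C, and the base together.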
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_mult
(base_uri, e_uri, c_uri, risk_uri)¶ Combines the E and C rasters according to the multiplicative combination equation.
- Input:
- base_uri- The h-s overlap raster, including potentially decayed values from the stressor layer.
- e_uri- The r/dq*w burned raster for all stressor-specific criteria in this model run.
- c_uri- The r/dq*w burned raster for all habitat-specific and habitat-stressor-specific criteria in this model run.
- risk_uri- The file path to which we should be burning our new raster.
Returns the URI for a raster representing the multiplied E raster, C raster, and the base raster.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_plots
(out_dir, aoi_pairs, max_risk, max_stress, num_stress, num_habs)¶ This function will produce risk plots when the risk equation is euclidean.
Parameters: - out_dir (string) – The directory into which the completed risk plots should be placed.
- aoi_pairs (dictionary) –
- {‘AOIName’:
}
- max_risk (float) – Double representing the highest potential value for a single h-s raster. The amount of risk for a given Habitat raster would be SUM(s) for a given h.
- max_stress (float) – The largest number of stressors that the user believes will overlap. This will be used to get an accurate estimate of risk.
- num_stress (dict) – A dictionary that simply associates every habitat with the number of stressors associated with it. This will help us determine the max E/C we should be expecting in our overarching ecosystem plot.
Returns: None
- Outputs:
A set of .png images containing the matplotlib plots for every H-S combination. Within that, each AOI will be displayed as plotted by (E,C) values.
A single .png that is the “ecosystem plot”, where the E’s for each AOI are summed.
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_rasters
(h_s_c, habs, inter_dir, crit_lists, denoms, risk_eq, warnings)¶ This will combine all of the intermediate criteria rasters that we pre-processed with their r/dq*w. At this juncture, we should be able to straight add the E/C within themselves. The way in which the E/C rasters are combined depends on the risk equation desired.
- Input:
- h_s_c- Args dictionary containing much of the H-S overlap data in addition to the H-S base rasters. (In this function, we are only using it for the base h-s raster information.)
- habs- Args dictionary containing habitat criteria information in addition to the habitat base rasters. (In this function, we are only using it for the base raster information.)
- inter_dir- Intermediate directory in which the H_S risk-burned rasters can be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”,
- “raster 1 URI”, ...],
...
},
- ‘h_s_e’: { (hab1, stressA): [“indiv num raster URI”,
- ...]
}
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the denominator scores for each overlap for each criteria. These can be combined to get the final denom by which the rasters should be divided.
- {‘Risk’: { ‘h_s_c’: { (hab1, stressA): {‘CritName’: 2.0,...},
- (hab1, stressB): {‘CritName’: 1.3, ...}
},
- ‘h’: { hab1: {‘CritName’: 2.5, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 2.3},
- }
}
- ‘Recovery’: { hab1: {‘CritName’: 3.4},
- hab2: ...
}
}
- risk_eq- A string description of the desired equation to use when performing risk calculation.
- warnings- A dictionary containing items which need to be acted upon by hra_core. These will be split into two categories. ‘print’ contains statements which will be printed using logger.warn() at the end of a run. ‘unbuff’ is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster.
- {‘print’: [‘This is a warning to the user.’, ‘This is another.’],
- ‘unbuff’: [(HabA, Stress1), (HabC, Stress2)]
}
- Output:
- A new raster file for each overlapping of habitat and stressor. This file will be the overall risk for that pairing from all H/S/H-S subdictionaries.
Returns: risk_rasters- A simple dictionary that maps a tuple of (Habitat, Stressor) to the URI for the risk raster created when the various sub components (H/S/H_S) are combined. {(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
-
natcap.invest.habitat_risk_assessment.hra_core.
make_risk_shapes
(dir, crit_lists, h_dict, h_s_dict, max_risk, max_stress)¶ This function will take in the current rasterized risk files for each habitat, and output a shapefile where the areas that are “HIGH RISK” (high percentage of risk over potential risk) are the only existing polygonized areas.
Additionally, we also want to create a shapefile which is only the “low risk” areas- actually, those that are just not high risk (it’s the combination of low risk areas and medium risk areas).
Since the pygeoprocessing.geoprocessing function can only take in ints, we want to predetermine which areas are or are not going to become shapefile features, and pass in a raster that is only 1 or nodata.
- Input:
- dir- Directory in which the completed shapefiles should be placed.
- crit_lists- A dictionary containing pre-burned criteria which can be combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’: { (hab1, stressA): [“indiv num raster URI”,
- “raster 1 URI”, ...],
(hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”, “raster 1 URI”],
- ...
},
- ‘h_s_e’: {(hab1, stressA): [“indiv num raster URI”]
- }
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- h_dict- A dictionary that contains raster dataset URIs corresponding to each of the habitats in the model. The key in this dictionary is the name of the habitat, and it maps to the open dataset.
- h_s_dict- A dictionary that maps a habitat name to the risk rasters for each of the applicable stressors.
- {‘HabA’: [“A-1 Risk Raster URI”, “A-2 Risk Raster URI”, ...],
- ‘HabB’: [“B-1 Risk Raster URI”, “B-2 Risk Raster URI”, ...], ...
}
- max_risk- Double representing the highest potential value for a single h-s raster. The amount of risk for a given Habitat raster would be SUM(s) for a given h.
- max_stress- The largest number of stressors that the user believes will overlap. This will be used to get an accurate estimate of risk.
- Output:
- Returns two shapefiles for every habitat, one which shows features only for the areas that are “high risk” within that habitat, and one which shows features only for the combined low + medium risk areas.
- Return:
- num_stress- A dictionary containing the number of stressors being associated with each habitat. The key is the string name of the habitat, and it maps to an int counter of number of stressors.
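The 1-or-nodata pre-polygonization step described above can be sketched with numpy; the 0.66 threshold and the function name are illustrative assumptions, not the model's actual cutoff:

```python
import numpy as np

def high_risk_mask(risk_arr, max_risk, threshold_pct=0.66, nodata=-1):
    """Sketch of the 1-or-nodata pre-polygonization step: pixels whose
    risk exceeds `threshold_pct` of the maximum possible risk become 1,
    everything else becomes nodata, so the vectorizer only sees ints.
    """
    mask = np.where(risk_arr > threshold_pct * max_risk, 1, nodata)
    return mask.astype(np.int32)
```

The complementary “low risk” (really, not-high-risk) shapefile would come from the inverted mask over the same threshold.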
-
natcap.invest.habitat_risk_assessment.hra_core.
pre_calc_avgs
(inter_dir, risk_dict, aoi_uri, aoi_key, risk_eq, max_risk)¶ This function is a helper to make_aoi_tables, and will just handle pre-calculation of the average values for each aoi zone.
- Input:
- inter_dir- The directory which contains the individual E and C rasters. We can use these to get the avg. E and C values per area. Since we don’t really have these in any sort of dictionary, will probably just need to explicitly call each individual file based on the names that we pull from the risk_dict keys.
- risk_dict- A simple dictionary that maps a tuple of (Habitat, Stressor) to the URI for the risk raster created when the various sub components (H/S/H_S) are combined.
{(‘HabA’, ‘Stress1’): “A-1 Risk Raster URI”, (‘HabA’, ‘Stress2’): “A-2 Risk Raster URI”, ... }
- aoi_uri- The location of the AOI zone files. Each feature within this file (identified by a ‘name’ attribute) will be used to average an area of E/C/Risk values.
- risk_eq- A string identifier, either ‘Euclidean’ or ‘Multiplicative’, that tells us which equation should be used for calculation of risk. This will be used to get the risk value for the average E and C.
- max_risk- The user reported highest risk score present in the CSVs.
Returns: - avgs_dict- A multi level dictionary to hold the average values that will be placed into the HTML table.
- {‘HabitatName’:
- {‘StressorName’:
- [{‘Name’: AOIName, ‘E’: 4.6, ‘C’: 2.8, ‘Risk’: 4.2},
- {...},
... ]
}
aoi_names- Quick and dirty way of getting the AOI keys.
-
natcap.invest.habitat_risk_assessment.hra_core.
pre_calc_denoms_and_criteria
(dir, h_s_c, hab, h_s_e)¶ Want to return two dictionaries in the format of the following: (Note: the individual num raster comes from the crit_ratings subdictionary and should be pre-summed together to get the numerator for that particular raster. )
- Input:
- dir- Directory into which the rasterized criteria can be placed. This will need to have a subfolder added to it specifically to hold the rasterized criteria for now.
- h_s_c- A multi-level structure which holds all criteria ratings, both numerical and raster, that apply to habitat and stressor overlaps. Its keys are tuples of (Habitat, Stressor) names, each mapping to an inner dictionary with 3 outer keys: numeric-only criteria, raster-based criteria, and a dataset that shows the potentially buffered overlap between the habitat and stressor. The overall structure will be as pictured:
- {(Habitat A, Stressor 1):
- {‘Crit_Ratings’:
- {‘CritName’:
- {‘Rating’: 2.0, ‘DQ’: 1.0, ‘Weight’: 1.0}
},
- ‘Crit_Rasters’:
- {‘CritName’:
- {
- ‘DS’: “CritName Raster URI”,
- ‘Weight’: 1.0, ‘DQ’: 1.0}
},
‘DS’: “A-1 Raster URI” }
}
- hab- Similar to the h-s dictionary, a multi-level dictionary containing all habitat-specific criteria ratings and rasters. In this case, however, the outermost key is by habitat name, and habitats[‘habitatName’][‘DS’] points to the rasterized habitat shapefile URI provided by the user.
- h_s_e- Similar to the h_s_c dictionary, a multi-level dictionary containing habitat-stressor-specific criteria ratings and rasters. The outermost key is by (habitat, stressor) pair, but the criteria will be applied to the exposure portion of the risk calcs.
- Output:
- Creates a version of every criteria for every h-s pairing that is burned with both a r/dq*w value for risk calculation, as well as a r/dq burned raster for recovery potential calculations.
Returns: - crit_lists- A dictionary containing pre-burned criteria URIs which can be combined to get the E/C for that H-S pairing.
- {‘Risk’: {
- ‘h_s_c’:
- { (hab1, stressA): [“indiv num raster”, “raster 1”, ...],
- (hab1, stressB): ...
},
- ‘h’: {
- hab1: [“indiv num raster URI”,
- “raster 1 URI”, ...],
...
},
- ‘h_s_e’: {
- (hab1, stressA):
- [“indiv num raster URI”, ...]
}
}
- ‘Recovery’: { hab1: [“indiv num raster URI”, ...],
- hab2: ...
}
}
- denoms- Dictionary containing the combined denominator for a given H-S overlap. Once all of the rasters are combined, each H-S raster can be divided by this.
- {‘Risk’: {
- ‘h_s_c’: {
- (hab1, stressA): {‘CritName’: 2.0, ...},
- (hab1, stressB): {‘CritName’: 1.3, ...}
},
- ‘h’: { hab1: {‘CritName’: 1.3, ...},
- ...
},
- ‘h_s_e’: { (hab1, stressA): {‘CritName’: 1.3, ...}
- }
}
- ‘Recovery’: { hab1: 1.6,
- hab2: ...
}
}
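The per-criterion arithmetic described above (numerators of r/dq*w, denominators of 1/dq*w) can be sketched on a ‘Crit_Ratings’ subdictionary; the helper name is hypothetical:

```python
def criteria_terms(crit_ratings):
    """Sketch of the per-criterion scoring: each criterion with rating
    r, data quality dq, and weight w contributes r/(dq*w) to the
    numerator and 1/(dq*w) to the denominator of the E/C score.

    crit_ratings: {'CritName': {'Rating': r, 'DQ': dq, 'Weight': w}, ...}
    Returns (numerators, denominators) keyed by criterion name.
    """
    nums, denoms = {}, {}
    for name, crit in crit_ratings.items():
        dw = crit["DQ"] * crit["Weight"]
        nums[name] = crit["Rating"] / dw
        denoms[name] = 1.0 / dw
    return nums, denoms
```

The final E or C score is then sum(numerators) / sum(denominators), so a criterion with poor data quality (large dq) pulls less weight in the average.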
-
natcap.invest.habitat_risk_assessment.hra_core.
raster_to_polygon
(raster_uri, out_uri, layer_name, field_name)¶ This will take in a raster file, and output a shapefile of the same area and shape.
- Input:
- raster_uri- The raster that needs to be turned into a shapefile. This is only the URI to the raster; we will need to get the band.
- out_uri- The desired URI for the new shapefile.
- layer_name- The name of the layer going into the new shapefile.
- field_name- The name of the field that will contain the raster pixel value.
- Output:
- This will be a shapefile in the shape of the raster. The raster being passed in will be solely “high risk” areas that contain data, and nodata values for everything else.
Returns nothing.
-
natcap.invest.habitat_risk_assessment.hra_core.
rewrite_avgs_dict
(avgs_dict, aoi_names)¶ Aftermarket rejigger of the avgs_dict setup so that everything is AOI centric instead. Should produce something like the following:
- {‘AOIName’:
}
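A sketch of that AOI-centric rewrite on plain dictionaries; since the docstring leaves the exact output shape unspecified, the [habitat, stressor, entry] record layout here is an assumption:

```python
def rewrite_avgs_dict(avgs_dict, aoi_names):
    """Sketch of the AOI-centric pivot: turn
    {habitat: {stressor: [{'Name': aoi, ...}, ...]}} into
    {aoi: [[habitat, stressor, entry], ...]}, so each AOI lists every
    habitat-stressor record that falls inside it.
    """
    out = {name: [] for name in aoi_names}
    for hab, stressors in avgs_dict.items():
        for stress, entries in stressors.items():
            for entry in entries:
                out[entry["Name"]].append([hab, stress, entry])
    return out
```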
Habitat Risk Assessment Pre-processor¶
Entry point for the Habitat Risk Assessment module
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ImproperCriteriaSpread
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor which can be passed if there are not one or more criteria in each of the 3 criteria categories: resilience, exposure, and sensitivity.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ImproperECSelection
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that should catch selections for exposure vs consequence scoring that are not either E or C. The user must decide in this column which the criteria applies to, and may only designate this with an ‘E’ or ‘C’.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
MissingHabitatsOrSpecies
¶ Bases:
exceptions.Exception
An exception to pass if the hra_preprocessor args dictionary being passed is missing a habitats directory or a species directory.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
MissingSensOrResilException
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that catches h-s pairings that are missing either Sensitivity or Resilience or C criteria, though not both. The user must either zero all criteria for that pair, or make sure that both E and C are represented.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
NA_RatingsError
¶ Bases:
exceptions.Exception
An exception that is raised on an invalid ‘NA’ input.
Raised when one or more Rating values are set to “NA” for a habitat-stressor pair, but not ALL of them. If ALL Rating values for a habitat-stressor pair are “NA”, then the pair is considered to have NO interaction.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
NotEnoughCriteria
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor which can be passed if the number of criteria in the resilience, exposure, and sensitivity categories all sums to less than 4.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
UnexpectedString
¶ Bases:
exceptions.Exception
An exception for hra_preprocessor that should catch any strings that are left over in the CSVs. Since everything from the CSVs is being cast to floats, this will be a hook off of python’s ValueError, which will re-raise our exception with a more accurate message.
-
exception
natcap.invest.habitat_risk_assessment.hra_preprocessor.
ZeroDQWeightValue
¶ Bases:
exceptions.Exception
An exception specifically for the parsing of the preprocessor tables in which the model should break loudly if a user tries to enter a zero value for either a data quality or a weight. However, we should confirm that it will only break if the rating is not also zero. If they’re removing the criteria entirely from that H-S overlap, it should be allowed.
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
error_check
(line, hab_name, stress_name)¶ Throwing together a simple error checking function for all of the inputs coming from the CSV file. Want to do checks for strings vs floats, as well as some explicit string checking for ‘E’/’C’.
- Input:
- line- An array containing a line of H-S overlap data. The format of a line would look like the following:
[‘CritName’, ‘Rating’, ‘Weight’, ‘DataQuality’, ‘Exp/Cons’]
The following restrictions should be placed on the data:
- CritName- This will be propagated by default by HRA_Preprocessor. Since it’s coming in as a string, we shouldn’t need to check anything.
- Rating- Can either be the explicit string ‘SHAPE’, which would be placed automatically by HRA_Preprocessor, or a float. ERROR: if string that isn’t ‘SHAPE’.
- Weight- Must be a float (or an int), but cannot be 0. ERROR: if string, or anything not castable to float, or 0.
- DataQuality- Must be a float (or an int), but cannot be 0. ERROR: if string, or anything not castable to float, or 0.
- Exp/Cons- Must be the string ‘E’ or ‘C’. ERROR: if string that isn’t one of the acceptable ones, or ANYTHING else.
Returns nothing; raises an exception if there is an issue.
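The checks above can be sketched as a small standalone validator. This is an illustrative re-implementation under the restrictions listed, not the packaged function:

```python
def error_check(line, hab_name, stress_name):
    """Validate one ['CritName', 'Rating', 'Weight', 'DataQuality', 'Exp/Cons'] row."""
    crit_name, rating, weight, data_quality, exp_cons = line
    prefix = "Criteria '%s' for %s/%s: " % (crit_name, hab_name, stress_name)

    # Rating: either the literal string 'SHAPE' or something castable to float.
    if rating != 'SHAPE':
        try:
            float(rating)
        except ValueError:
            raise ValueError(prefix + "Rating must be 'SHAPE' or a number.")

    # Weight and DataQuality: must be castable to float, and nonzero.
    for name, value in (('Weight', weight), ('DataQuality', data_quality)):
        try:
            number = float(value)
        except ValueError:
            raise ValueError(prefix + name + " must be a number.")
        if number == 0:
            raise ValueError(prefix + name + " cannot be 0.")

    # Exp/Cons: exactly 'E' or 'C'.
    if exp_cons not in ('E', 'C'):
        raise ValueError(prefix + "Exp/Cons must be 'E' or 'C'.")

error_check(['Intensity', '2.0', '1.0', '1.0', 'E'], 'HabA', 'Stress1')  # passes silently
```

A valid row passes silently; any violation raises a ValueError naming the offending habitat/stressor pair.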
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
execute
(args)¶ Habitat Risk Assessment Preprocessor.
Reads in multiple habitat/stressor directories, in addition to named criteria, and makes an appropriate CSV file.
Parameters: - args['workspace_dir'] (string) – The directory to dump the output CSV files to. (required)
- args['habitats_dir'] (string) – A directory of shapefiles that are habitats. This is not required, and may not exist if there is a species layer directory. (optional)
- args['species_dir'] (string) – Directory which holds all species shapefiles, but may or may not exist if there is a habitats layer directory. (optional)
- args['stressors_dir'] (string) – A directory of ArcGIS shapefiles that are stressors. (required)
- args['exposure_crits'] (list) – list containing string names of exposure criteria (hab-stress) which should be applied to the exposure score. (optional)
- args['sensitivity-crits'] (list) – List containing string names of sensitivity (habitat-stressor overlap specific) criteria which should be applied to the consequence score. (optional)
- args['resilience_crits'] (list) – List containing string names of resilience (habitat or species-specific) criteria which should be applied to the consequence score. (optional)
- args['criteria_dir'] (string) – Directory which holds the criteria shapefiles. May not exist if the user does not desire criteria shapefiles. This needs to be in a VERY specific format, which shall be described in the user’s guide. (optional)
Returns: None
This function creates a series of CSVs within
args['workspace_dir']
. There will be one CSV for every habitat/species. These files will contain information relevant to each habitat or species, including all criteria. The criteria will be broken up into those which apply to only the habitat, and those which apply to the overlap of that habitat and each stressor. A JSON file containing vars that need to be passed on to hra non-core when that gets run will also be created inside the preprocessor folder in
args['workspace_dir']
. It will contain habitats_dir, species_dir, stressors_dir, and criteria_dir.
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
listdir
(path)¶ A replacement for the standard os.listdir which, instead of returning only the filename, will include the entire path. It uses os.listdir as a base, then transforms each entry into a full path.
- Input:
- path- The location container from which we want to gather all files.
Returns: A list of full URIs contained within ‘path’.
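A minimal sketch of such a full-path listdir (an illustrative re-implementation, not the packaged function):

```python
import os

def listdir(path):
    """Like os.listdir, but returns full paths instead of bare filenames."""
    return [os.path.join(path, entry) for entry in os.listdir(path)]
```

Each returned entry can then be handed directly to a file reader without re-joining the directory.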
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
make_crit_shape_dict
(crit_uri)¶ This will take in the location of the file structure, and will return a dictionary containing all the shapefiles that we find. The file structure is expected to match exactly the specs that we laid out, so the files should parse easily.
- Input:
- crit_uri- Location of the file structure containing all of the
- shapefile criteria.
Returns: A dictionary containing shapefile URIs, indexed by their criteria name, in addition to which dictionaries and h-s pairs they apply to. The structure will be as follows:
{'h':
    {'HabA':
        {'CritName': 'Shapefile Datasource URI', ...}, ...
    },
 'h_s_c':
    {('HabA', 'Stress1'):
        {'CritName': 'Shapefile Datasource URI', ...}, ...
    },
 'h_s_e':
    {('HabA', 'Stress1'):
        {'CritName': 'Shapefile Datasource URI', ...}, ...
    }
}
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_hra_tables
(folder_uri)¶ This takes in the directory containing the criteria rating CSVs, and returns a coherent set of dictionaries that can be used to do everything in non-core and core.
It will return a massive dictionary containing all of the subdictionaries needed by non-core, as well as directory URIs. It will be of the following form:
{'habitats_dir': 'Habitat Directory URI',
 'species_dir': 'Species Directory URI',
 'stressors_dir': 'Stressors Directory URI',
 'criteria_dir': 'Criteria Directory URI',
 'buffer_dict':
    {'Stressor 1': 50, 'Stressor 2': ..., },
 'h_s_c':
    {(Habitat A, Stressor 1):
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'h_s_e':
    {(Habitat A, Stressor 1):
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'habitats':
    {Habitat A:
        {'Crit_Ratings':
            {'CritName':
                {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
            },
         'Crit_Rasters':
            {'CritName':
                {'Weight': 1.0, 'DQ': 1.0}
            },
        }
    },
 'warnings':
    {'print':
        ['This is a warning to the user.', 'This is another.'],
     'unbuff':
        [(HabA, Stress1), (HabC, Stress2)]
    }
}
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_overlaps
(uri, habs, h_s_e, h_s_c)¶ This function will take in a location, and update the dictionaries being passed with the new Hab/Stress subdictionary info that we’re getting from the CSV at URI.
- Input:
- uri- The location of the CSV that we want to get ratings info from.
- This will contain information for a given habitat’s individual criteria ratings, as well as criteria ratings for the overlap of every stressor.
- habs- A dictionary which contains all resilience specific criteria
info. The key for these will be the habitat name. It will map to a subdictionary containing criteria information. The whole dictionary will look like the following:
{Habitat A:
    {'Crit_Ratings':
        {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
        },
     'Crit_Rasters':
        {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
        },
    }
}
- h_s_e- A dictionary containing all information applicable to exposure
- criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- h_s_c- A dictionary containing all information applicable to
- sensitivity criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
parse_stress_buffer
(uri)¶ This will take the stressor buffer CSV and parse it into a dictionary where the stressor name maps to a float amount by which it should be buffered.
- Input:
- uri- The location of the CSV file from which we should pull the buffer
- amounts.
Returns: A dictionary containing stressor names mapped to their corresponding buffer amounts. The float may be 0, but may not be a string. The form will be the following: {‘Stress 1’: 2000, ‘Stress 2’: 1500, ‘Stress 3’: 0, ...}
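A minimal sketch of this parse, assuming a simple two-column layout of stressor name and buffer amount (the real file layout may differ, e.g. with header rows):

```python
import csv

def parse_stress_buffer(uri):
    """Map each stressor name to a float buffer amount (0 is allowed)."""
    buffer_dict = {}
    with open(uri) as csv_file:
        for row in csv.reader(csv_file):
            if len(row) < 2 or not row[0]:
                continue  # skip blank or malformed lines
            try:
                buffer_dict[row[0]] = float(row[1])
            except ValueError:
                raise ValueError(
                    "Buffer for %s must be a number, got %r" % (row[0], row[1]))
    return buffer_dict
```

Note that a string buffer value raises a ValueError, matching the rule that the amount may be 0 but may not be a string.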
-
natcap.invest.habitat_risk_assessment.hra_preprocessor.
zero_check
(h_s_c, h_s_e, habs)¶ Any criteria that have a rating of 0 mean that they are not a desired input to the assessment. We should delete the criteria’s entire subdictionary out of the dictionary.
- Input:
- habs- A dictionary which contains all resilience specific criteria
info. The key for these will be the habitat name. It will map to a subdictionary containing criteria information. The whole dictionary will look like the following:
{Habitat A:
    {'Crit_Ratings':
        {'CritName':
            {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}
        },
     'Crit_Rasters':
        {'CritName':
            {'Weight': 1.0, 'DQ': 1.0}
        },
    }
}
- h_s_e- A dictionary containing all information applicable to exposure
- criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- h_s_c- A dictionary containing all information applicable to
- sensitivity criteria. The dictionary will look identical to the ‘habs’ dictionary, but each key will be a tuple of two strings - (HabName, StressName).
- Output:
- Will update each of the three dictionaries by deleting any criteria where the rating aspect is 0.
Returns: warnings- A dictionary containing items which need to be acted upon by hra_core. These will be split into two categories. ‘print’ contains statements which will be printed using logger.warn() at the end of a run. ‘unbuff’ is for pairs which should use the unbuffered stressor file in lieu of the decayed rated raster. - {‘print’: [‘This is a warning to the user.’, ‘This is another.’],
- ‘unbuff’: [(HabA, Stress1), (HabC, Stress2)]
}
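The deletion rule can be sketched on a habs-style dictionary. This is an illustrative helper (here named drop_zero_rated); the packaged zero_check also handles the h_s_e/h_s_c dictionaries and builds the warnings dictionary:

```python
def drop_zero_rated(crit_dict):
    """Delete criteria whose 'Rating' is 0 from a habs-style dictionary."""
    removed = []
    for key, subdict in crit_dict.items():
        ratings = subdict.get('Crit_Ratings', {})
        for crit_name in list(ratings):  # copy keys: we delete while iterating
            if ratings[crit_name].get('Rating') == 0:
                del ratings[crit_name]
                removed.append((key, crit_name))
    return removed

habs = {'Habitat A': {'Crit_Ratings': {
    'recruitment': {'Rating': 0, 'DQ': 1.0, 'Weight': 1.0},
    'connectivity': {'Rating': 2.0, 'DQ': 1.0, 'Weight': 1.0}}}}
drop_zero_rated(habs)  # removes 'recruitment' only
```

The returned list of (key, criteria) pairs is the kind of record the 'warnings' dictionary is built from.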
Module contents¶
Marine Water Quality Package¶
Model Entry Point¶
-
natcap.invest.marine_water_quality.marine_water_quality_biophysical.
execute
(args)¶ Marine Water Quality.
Main entry point for the InVEST 3.0 marine water quality biophysical model.
Parameters: - args['workspace_dir'] (string) – Directory to place outputs
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['aoi_poly_uri'] (string) – OGR polygon Datasource indicating region of interest to run the model. Will define the grid.
- args['land_poly_uri'] (string) – OGR polygon DataSource indicating areas where land is.
- args['pixel_size'] (float) – float indicating pixel size in meters of output grid.
- args['layer_depth'] (float) – float indicating the depth of the grid cells in meters.
- args['source_points_uri'] (string) – OGR point Datasource indicating point sources of pollution.
- args['source_point_data_uri'] (string) – csv file indicating the biophysical properties of the point sources.
- args['kps'] (float) – float indicating decay rate of pollutant (kg/day)
- args['tide_e_points_uri'] (string) – OGR point Datasource with spatial information about the E parameter
- args['adv_uv_points_uri'] (string) – optional OGR point Datasource with spatial advection u and v vectors.
Returns: nothing
Marine Water Quality Biophysical¶
InVEST Marine Water Quality Biophysical module at the “uri” level
-
natcap.invest.marine_water_quality.marine_water_quality_biophysical.
execute
(args) Marine Water Quality.
Main entry point for the InVEST 3.0 marine water quality biophysical model.
Parameters: - args['workspace_dir'] (string) – Directory to place outputs
- args['results_suffix'] (string) – a string to append to any output file name (optional)
- args['aoi_poly_uri'] (string) – OGR polygon Datasource indicating region of interest to run the model. Will define the grid.
- args['land_poly_uri'] (string) – OGR polygon DataSource indicating areas where land is.
- args['pixel_size'] (float) – float indicating pixel size in meters of output grid.
- args['layer_depth'] (float) – float indicating the depth of the grid cells in meters.
- args['source_points_uri'] (string) – OGR point Datasource indicating point sources of pollution.
- args['source_point_data_uri'] (string) – csv file indicating the biophysical properties of the point sources.
- args['kps'] (float) – float indicating decay rate of pollutant (kg/day)
- args['tide_e_points_uri'] (string) – OGR point Datasource with spatial information about the E parameter
- args['adv_uv_points_uri'] (string) – optional OGR point Datasource with spatial advection u and v vectors.
Returns: nothing
Marine Water Quality Core¶
-
natcap.invest.marine_water_quality.marine_water_quality_core.
diffusion_advection_solver
(source_point_data, kps, in_water_array, tide_e_array, adv_u_array, adv_v_array, nodata, cell_size, layer_depth)¶ 2D Water quality model to track a pollutant in the ocean. Three input arrays must be of the same shape. Returns the solution in an array of the same shape.
- source_point_data - dictionary of the form:
  {source_point_id_0: {'point': [row_point, col_point] (in gridspace), 'WPS': float (loading?), ...}, source_point_id_1: ...}
- kps - absorption rate for the source point pollutants
- in_water_array - 2D numpy array of booleans where False is a land pixel and True is a water pixel.
- tide_e_array - 2D numpy array with tidal E values or nodata values; must be the same shape as in_water_array (m^2/sec)
- adv_u_array, adv_v_array - the u and v components of advection; must be the same shape as in_water_array (units?)
- nodata - the value in the input arrays that indicates a nodata value.
- cell_size - the length of the side of a cell in meters
- layer_depth - float indicating the depth of the grid cells in meters.
Create Grid¶
Interpolate Points to Raster¶
Module contents¶
Testing¶
Testing Package¶
The natcap.invest.testing package defines core testing routines and functionality.
Rationale¶
While the python standard library's unittest package provides valuable resources for testing, GIS applications such as the various InVEST models output GIS data that require more in-depth testing to verify equality. For cases such as this, natcap.invest.testing provides a GISTest class that provides assertions for common data formats.
Writing Tests with natcap.invest.testing¶
The easiest way to take advantage of the functionality in natcap.invest.testing is to use the GISTest class whenever you write a TestCase class for your model. Doing so will grant you access to the GIS assertions provided by GISTest.
This example is relatively simplistic, since there will often be many more assertions you may need to make to be able to test your model effectively:
import natcap.invest.testing
import natcap.invest.example_model
class ExampleTest(natcap.invest.testing.GISTest):
def test_some_model(self):
example_args = {
'workspace_dir': './workspace',
'arg_1': 'foo',
'arg_2': 'bar',
}
natcap.invest.example_model.execute(example_args)
# example GISTest assertion
self.assertRastersEqual('workspace/raster_1.tif',
'regression_data/raster_1.tif')
natcap.invest.testing.GISTest¶
-
class
natcap.invest.testing.
GISTest
(methodName='runTest')¶ Bases:
unittest.case.TestCase
A test class with an emphasis on testing GIS outputs.
The GISTest class provides many functions for asserting the equality of various GIS files. This is particularly useful for GIS tool outputs, when we wish to assert the accuracy of very detailed outputs. GISTest is a subclass of unittest.TestCase, so all members that exist in unittest.TestCase also exist here. Read the python documentation on unittest for more information about these test fixtures and their usage. The important thing to note is that GISTest merely provides more assertions for the more specialized testing and assertions that GIS outputs require.
Example usage of GISTest:
import natcap.invest.testing

class ModelTest(natcap.invest.testing.GISTest):
    def test_some_function(self):
        # perform your tests here.
        pass

Note that to take advantage of these additional assertions, you need only create a subclass of GISTest in your test file to gain access to the GISTest assertions.
-
assertArchives
(archive_1_uri, archive_2_uri)¶ Compare the contents of two archived workspaces against each other.
Takes two archived workspaces, each generated from
build_regression_archives()
, unzips them and compares the resulting workspaces against each other.Parameters: - archive_1_uri (string) – a URI to a .tar.gz workspace archive
- archive_2_uri (string) – a URI to a .tar.gz workspace archive
Raises: AssertionError
– Raised when the two workspaces are found to be different.Returns: Nothing.
-
assertCSVEqual
(aUri, bUri)¶ Tests if csv files a and b are ‘almost equal’ to each other on a per cell basis. Numeric cells are asserted to be equal out to 7 decimal places. Other cell types are asserted to be equal.
Parameters: - aUri (string) – a URI to a csv file
- bUri (string) – a URI to a csv file
Raises: AssertionError
– Raised when the two CSV files are found to be different.Returns: Nothing.
-
assertFiles
(file_1_uri, file_2_uri)¶ Assert two files are equal.
If the extension of the provided file is recognized, the relevant filetype-specific function is called and a more detailed check of the file can be done. If the extension is not recognized, the MD5sums of the two files are compared instead.
Known extensions: .json, .tif, .shp, .csv, .txt, .html
Parameters: - file_1_uri (string) – a string URI to a file on disk.
- file_2_uri (string) – a string URI to a file on disk.
Raises: AssertionError
– Raised when one of the input files does not exist, when the extensions of the input files differ, or if the two files are found to differ.Returns: Nothing.
-
assertJSON
(json_1_uri, json_2_uri)¶ Assert two JSON files against each other.
The two JSON files provided will be opened, read, and their contents will be asserted to be equal. If the two are found to be different, the diff of the two files will be printed.
Parameters: - json_1_uri (string) – a uri to a JSON file.
- json_2_uri (string) – a uri to a JSON file.
Raises: AssertionError
– Raised when the two JSON objects differ.Returns: Nothing.
-
assertMD5
(uri, regression_hash)¶ Assert the MD5sum of a file against a regression MD5sum.
This method is a convenience method that uses
natcap.invest.testing.get_hash()
to determine the MD5sum of the file located at uri. It is functionally equivalent to calling:self.assertEqual(get_hash(uri), '<some md5sum>')
Regression MD5sums can be calculated for you by using
natcap.invest.testing.get_hash()
or a system-level md5sum program.Parameters: - uri (string) – a string URI to the file to be tested.
- regression_hash (string) – the expected MD5sum of the file.
Raises: AssertionError
– Raised when the MD5sum of the file at uri differs from the provided regression md5sum hash.Returns: Nothing.
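A regression MD5sum can be computed with a small helper like the following (an illustrative stand-in; natcap.invest.testing.get_hash may differ in detail):

```python
import hashlib

def get_hash(uri):
    """Return the hex MD5sum of the file at uri, read in chunks."""
    md5 = hashlib.md5()
    with open(uri, 'rb') as opened_file:
        # Read in 64 KiB chunks so large rasters don't need to fit in memory.
        for chunk in iter(lambda: opened_file.read(65536), b''):
            md5.update(chunk)
    return md5.hexdigest()
```

The returned string is what you would paste into assertMD5 as the regression hash.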
-
assertMatrixes
(matrix_a, matrix_b, decimal=6)¶ Tests if the input numpy matrices are equal out to the given number of decimal places.
This is a convenience function that wraps up required functionality in
numpy.testing
.Parameters: - matrix_a (numpy.ndarray) – a numpy matrix
- matrix_b (numpy.ndarray) – a numpy matrix
- decimal (int) – an integer of the desired precision.
Raises: AssertionError
– Raised when the two matrices are determined to be different.Returns: Nothing.
-
assertRastersEqual
(a_uri, b_uri)¶ Tests if datasets a and b are 'almost equal' to each other on a per-pixel basis.
This assertion method asserts the equality of these raster characteristics:
- Raster height and width
- The number of layers in the raster
- Each pixel value, out to a precision of 7 decimal places if the pixel value is a float.
Parameters: - a_uri (string) – a URI to a GDAL dataset
- b_uri (string) – a URI to a GDAL dataset
Returns: Nothing.
Raises: IOError
– Raised when one of the input files is not found on disk.AssertionError
– Raised when the two rasters are found to be not equal to each other.
-
assertTextEqual
(text_1_uri, text_2_uri)¶ Assert that two text files are equal
This comparison is done line-by-line.
Parameters: - text_1_uri (string) – a python string uri to a text file. Considered the file to be tested.
- text_2_uri (string) – a python string uri to a text file. Considered the regression file.
Raises: AssertionError
– Raised when a line differs in the two files.Returns: Nothing.
-
assertVectorsEqual
(aUri, bUri)¶ Tests if vector datasources are equal to each other.
This assertion method asserts the equality of these vector characteristics:
- Number of layers in the vector
- Number of features in each layer
- Feature geometry type
- Number of fields in each feature
- Name of each field
- Field values for each feature
Parameters: - aUri (string) – a URI to an OGR vector
- bUri (string) – a URI to an OGR vector
Raises: IOError
– Raised if one of the input files is not found on disk.AssertionError
– Raised if the vectors are not found to be equal to one another.
Returns: Nothing.
-
assertWorkspace
(archive_1_folder, archive_2_folder, glob_exclude='')¶ Check the contents of two folders against each other.
This method iterates through the contents of each workspace folder and verifies that all files exist in both folders. If this passes, then each file is compared against each other using
GISTest.assertFiles()
.If one of these workspaces includes files that are known to be different between model runs (such as logs, or other files that include timestamps), you may wish to specify a glob pattern matching those filenames and passing it to glob_exclude.
Parameters: - archive_1_folder (string) – a uri to a folder on disk
- archive_2_folder (string) – a uri to a folder on disk
- glob_exclude (string) – a string in glob format representing files to ignore
Raises: AssertionError
– Raised when the two folders are found to have different contents.Returns: Nothing.
-
Utilities¶
Reporting Package¶
Style¶
Table Generator¶
A helper module for generating html tables that are represented as Strings
-
natcap.invest.reporting.table_generator.
add_checkbox_column
(col_list, row_list, checkbox_pos=1)¶ Insert a new column into the list of column dictionaries so that it is the second column dictionary found in the list. Also add the checkbox column header to the list of row dictionaries and subsequent checkbox value
- col_list - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
  - 'name' - a string for the column name (required)
  - 'total' - a boolean for whether the column should be totaled (required)
- row_list - a list of dictionaries that represent the rows. Each dictionary's keys should match the column names found in 'col_list' (required). Example: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
- checkbox_pos - an integer for the position of the checkbox column. Defaults to 1 (optional)
returns - a tuple of the updated column and row lists of dictionaries, in that order
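A minimal sketch of that insertion (an illustrative re-implementation; the column header name 'Select' and the cell value 'checkbox' are assumptions, not the packaged values):

```python
def add_checkbox_column(col_list, row_list, checkbox_pos=1):
    """Insert a non-totaled checkbox column and seed each row with a checkbox cell."""
    cols = list(col_list)  # copy so the caller's list is untouched
    cols.insert(checkbox_pos, {'name': 'Select', 'total': False})
    rows = [dict(row, Select='checkbox') for row in row_list]
    return cols, rows

cols = [{'name': 'Parcel', 'total': False}, {'name': 'Area', 'total': True}]
rows = [{'Parcel': 'p1', 'Area': 4.5}]
new_cols, new_rows = add_checkbox_column(cols, rows)
```

With the default checkbox_pos of 1, the checkbox column becomes the second column dictionary in the list.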
-
natcap.invest.reporting.table_generator.
add_totals_row
(col_headers, total_list, total_name, checkbox_total, tdata_tuples)¶ Construct a totals row as an html string. Creates one row element with data where the row gets a class name and the data get a class name if the corresponding column is a totalable column
col_headers - a list of the column headers in order (required)
total_list - a list of booleans that corresponds to 'col_headers' and indicates whether a column should be totaled (required)
total_name - a string for the name of the total row, ex: 'Total', 'Sum' (required)
checkbox_total - a boolean value that distinguishes whether a checkbox total row is being added or a regular total row. Checkbox total row is True. This will determine the row class name and row data class name (required)
tdata_tuples - a list of tuples where the first index in the tuple is a boolean which indicates if a table data element has an attribute class. The second index is the String value of that class or None (required)
return - a string representing the html contents of a row which should later be used in a 'tfoot' element
-
natcap.invest.reporting.table_generator.
generate_table
(table_dict, attributes=None)¶ Takes in a dictionary representation of a table and generates a string of the table in the form of html.
table_dict - a dictionary with the following arguments:
- 'cols' - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
  - 'name' - a string for the column name (required)
  - 'total' - a boolean for whether the column should be totaled (required)
  - 'attr' - a dictionary that has key value pairs for optional tag attributes (optional). Ex: 'attr': {'class': 'offsets'}
  - 'td_class' - a String to assign as a class name to the table data tags under the column. Each table data tag under the column will have a class attribute assigned to the 'td_class' value (optional)
- 'rows' - a list of dictionaries that represent the rows. Each dictionary's keys should match the column names found in 'cols' (possibly empty list) (required). Example: [{col_name_1: value, col_name_2: value, ...}, {col_name_1: value, col_name_2: value, ...}, ...]
- 'checkbox' - a boolean value for whether there should be a checkbox column. If True a 'selected total' row will be added to the bottom of the table that will show the total of the columns selected (optional)
- 'checkbox_pos' - an integer value for in which column position the checkbox column should appear (optional)
- 'total' - a boolean value for whether there should be a constant total row at the bottom of the table that sums the column values (optional)
attributes - a dictionary of html table attributes. The attribute name is the key which gets set to the value of the key. (optional) Example: {'class': 'sorttable', 'id': 'parcel_table'}
returns - a string representing an html table
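A toy version of this dictionary-to-HTML rendering, handling only the 'cols'/'rows' keys and the attributes parameter (a sketch, not the packaged generator, which also supports totals and checkboxes):

```python
def generate_table(table_dict, attributes=None):
    """Render a minimal HTML table string from 'cols'/'rows' dictionaries."""
    attrs = ''.join(' %s="%s"' % (k, v) for k, v in (attributes or {}).items())
    names = [col['name'] for col in table_dict['cols']]  # column order = list order
    head = '<tr>' + ''.join('<th>%s</th>' % n for n in names) + '</tr>'
    body = ''.join(
        '<tr>' + ''.join('<td>%s</td>' % row[n] for n in names) + '</tr>'
        for row in table_dict['rows'])
    return '<table%s><thead>%s</thead><tbody>%s</tbody></table>' % (attrs, head, body)

html = generate_table(
    {'cols': [{'name': 'Parcel', 'total': False}],
     'rows': [{'Parcel': 'p1'}]},
    attributes={'id': 'parcel_table'})
```

The row dictionaries are looked up by column name, so the 'cols' list alone dictates left-to-right order.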
-
natcap.invest.reporting.table_generator.
get_dictionary_values_ordered
(dict_list, key_name)¶ Generate a list, with values from ‘key_name’ found in each dictionary in the list of dictionaries ‘dict_list’. The order of the values in the returned list match the order they are retrieved from ‘dict_list’
dict_list - a list of dictionaries where each dictionary has the same keys. Each dictionary should have at least one key:value pair with the key being 'key_name' (required)
key_name - a String or Int for the key name of interest in the dictionaries (required)
return - a list of values from 'key_name' in ascending order based on 'dict_list' keys
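This extraction is essentially a comprehension (an illustrative one-liner, not the packaged function):

```python
def get_dictionary_values_ordered(dict_list, key_name):
    """Pull key_name's value out of each dictionary, preserving list order."""
    return [d[key_name] for d in dict_list]

names = get_dictionary_values_ordered(
    [{'name': 'col_a', 'total': False}, {'name': 'col_b', 'total': True}],
    'name')  # ['col_a', 'col_b']
```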
-
natcap.invest.reporting.table_generator.
get_row_data
(row_list, col_headers)¶ Construct the rows in a 2D List from the list of dictionaries, using col_headers to properly order the row data.
row_list - a list of dictionaries that represent the rows. Each dictionary's keys should match the column names found in 'col_headers'. The rows will be ordered the same as they are found in the dictionary list (required). Example:
[{'col_name_1': '9/13', 'col_name_3': 'expensive', 'col_name_2': 'chips'},
 {'col_name_1': '3/13', 'col_name_2': 'cheap', 'col_name_3': 'peanuts'},
 {'col_name_1': '5/12', 'col_name_2': 'moderate', 'col_name_3': 'mints'}]
col_headers - a List of the names of the column headers in order. Example: [col_name_1, col_name_2, col_name_3...]
return - a 2D list with each inner list representing a row
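A minimal sketch of that ordering step (illustrative, not the packaged function):

```python
def get_row_data(row_list, col_headers):
    """Order each row dictionary's values by col_headers into a 2D list."""
    return [[row[col] for col in col_headers] for row in row_list]

rows = get_row_data(
    [{'date': '9/13', 'price': 'expensive', 'snack': 'chips'},
     {'date': '3/13', 'price': 'cheap', 'snack': 'peanuts'}],
    ['date', 'snack', 'price'])
# [['9/13', 'chips', 'expensive'], ['3/13', 'peanuts', 'cheap']]
```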
HTML¶
Module contents¶
The natcap.invest.reporting package defines core reporting routines and functionality.
-
natcap.invest.reporting.
add_head_element
(param_args)¶ Generates a string that represents a valid element in the head section of an html file. Currently handles ‘style’ and ‘script’ elements, where both the script and style are locally embedded
param_args - a dictionary that holds the following arguments:
- param_args[‘format’] - a string representing the type of element to
- be added. Currently : ‘script’, ‘style’ (required)
- param_args[‘data_src’] - a string URI path for the external source
- of the element OR a String representing the html (DO NOT include html tags, tags are automatically generated). If a URI the file is read in as a String. (required)
- param_args[‘input_type’] - ‘Text’ or ‘File’. Determines how the
- input from ‘data_src’ is handled (required)
- ‘attributes’ - a dictionary that has key value pairs for
- optional tag attributes (optional). Ex: ‘attributes’: {‘class’: ‘offsets’}
returns - a string representation of the html head element
-
natcap.invest.reporting.
add_text_element
(param_args)¶ Generates a string that represents an html text block. The input string should be wrapped in proper html tags.
param_args - a dictionary with the following arguments:
param_args['text'] - a string
returns - a string
-
natcap.invest.reporting.
build_table
(param_args)¶ Generates a string representing a table in html format.
param_args - a dictionary that has the parameters for building up the html table. The dictionary includes the following:
- param_args['attributes'] - a dictionary of html table attributes. The attribute name is the key which gets set to the value of the key. (optional) Example: {'class': 'sorttable', 'id': 'parcel_table'}
- param_args['sortable'] - a boolean value that determines whether the table should be sortable (required)
- param_args['data_type'] - a string depicting the type of input to build the table from. Either 'shapefile', 'csv', or 'dictionary' (required)
- param_args['data'] - a URI to a csv or shapefile OR a list of dictionaries. If a list of dictionaries, the data should be represented in the following format (required):
  [{col_name_1: value, col_name_2: value, ...},
   {col_name_1: value, col_name_2: value, ...}, ...]
- param_args['key'] - a string that depicts which column (csv) or field (shapefile) will be the unique key to use in extracting the data into a dictionary. (required for 'data_type' 'shapefile' and 'csv')
- param_args['columns'] - a list of dictionaries that defines the column structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
  - 'name' - a string for the column name (required)
  - 'total' - a boolean for whether the column should be totaled (required)
  - 'attr' - a dictionary that has key value pairs for optional tag attributes (optional). Ex: 'attr': {'class': 'offsets'}
  - 'td_class' - a String to assign as a class name to the table data tags under the column. Each table data tag under the column will have a class attribute assigned to the 'td_class' value (optional)
- param_args['total'] - a boolean value where if True a constant total row will be placed at the bottom of the table that sums the columns (required)
returns - a string that represents an html table
-
natcap.invest.reporting.
data_dict_to_list
(data_dict)¶ Abstract out inner dictionaries from data_dict into a list, where the inner dictionaries are added to the list in the order of their sorted keys
- data_dict - a dictionary with unique keys pointing to dictionaries.
- Could be empty (required)
returns - a list of dictionaries, or empty list if data_dict is empty
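A minimal sketch of that abstraction (illustrative, not the packaged function):

```python
def data_dict_to_list(data_dict):
    """Return the inner dictionaries ordered by their sorted outer keys."""
    return [data_dict[key] for key in sorted(data_dict)]

rows = data_dict_to_list({2: {'name': 'b'}, 1: {'name': 'a'}})
# [{'name': 'a'}, {'name': 'b'}]
```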
-
natcap.invest.reporting.
generate_report
(args)¶ Generate an html page from the arguments given in ‘reporting_args’
- reporting_args[title] - a string for the title of the html page
- (required)
- reporting_args[sortable] - a boolean value indicating whether
- the sorttable.js library should be added for table sorting functionality (optional)
- reporting_args[totals] - a boolean value indicating whether
- the totals_function.js script should be added for table totals functionality (optional)
- reporting_args[out_uri] - a URI to the output destination for the html
- page (required)
- reporting_args[elements] - a list of dictionaries that represent html
elements to be added to the html page. (required) If no elements are provided (list is empty) a blank html page will be generated. The 3 main element types are ‘table’, ‘head’, and ‘text’. All elements share the following arguments:
- ‘type’ - a string that depicts the type of element being add.
- Currently ‘table’, ‘head’, and ‘text’ are defined (required)
- ‘section’ - a string that depicts whether the element belongs
- in the body or head of the html page. Values: ‘body’ | ‘head’ (required)
Table element dictionary has at least the following additional arguments:
- ‘attributes’ - a dictionary of html table attributes. The
- attribute name is the key which gets set to the value of the key. (optional) Example: {‘class’: ‘sorttable’, ‘id’: ‘parcel_table’}
- ‘sortable’ - a boolean value for whether the tables columns
- should be sortable (required)
- ‘checkbox’ - a boolean value for whether there should be a
- checkbox column. If True a ‘selected total’ row will be added to the bottom of the table that will show the total of the columns selected (optional)
- ‘checkbox_pos’ - an integer value for in which column
- position the checkbox column should appear (optional)
- ‘data_type’ - one of the following string values:
- 'shapefile'|'csv'|'dictionary'. Depicts the type of data structure to build the table from (required)
- ‘data’ - either a list of dictionaries if ‘data_type’ is
‘dictionary’ or a URI to a CSV table or shapefile if ‘data_type’ is ‘shapefile’ or ‘csv’ (required). If a list of dictionaries, each dictionary should have keys that represent the columns, where each dictionary is a row (list could be empty) How the rows are ordered are defined by their index in the list. Formatted example: [{col_name_1: value, col_name_2: value, ...},
{col_name_1: value, col_name_2: value, ...}, ...]- ‘key’ - a string that defines which column or field should be
- used as the keys for extracting data from a shapefile or csv table ‘key_field’. (required for ‘data_type’ = ‘shapefile’ | ‘csv’)
- ‘columns’- a list of dictionaries that defines the column
structure for the table (required). The order of the columns from left to right is depicted by the index of the column dictionary in the list. Each dictionary in the list has the following keys and values:
‘name’ - a string for the column name (required) ‘total’ - a boolean for whether the column should be
totaled (required)- ‘attr’ - a dictionary that has key value pairs for
- optional tag attributes (optional). Ex: ‘attr’: {‘class’: ‘offsets’}
- ‘td_class’ - a String to assign as a class name to
- the table data tags under the column. Each table data tag under the column will have a class attribute assigned to ‘td_class’ value (optional)
- ‘total’- a boolean value for whether there should be a constant
- total row at the bottom of the table that sums the column values (optional)
Head element dictionary has at least the following additional arguments:
- ‘format’ - a string representing the type of head element being added. Currently ‘script’ (javascript) and ‘style’ (css style) are accepted (required)
- ‘data_src’ - a URI to the location of the external file for either the ‘script’ or the ‘style’, OR a string representing the html script or style itself (DO NOT include the tags) (required)
- ‘input_type’ - a string, ‘File’ or ‘Text’, that refers to how ‘data_src’ is being passed in (URI vs. string) (required)
- ‘attributes’ - a dictionary that has key-value pairs for optional tag attributes (optional). Ex: ‘attributes’: {‘id’: ‘muni_data’}
Text element dictionary has at least the following additional arguments:
- ‘text’ - a string to add as a paragraph element in the html page (required)

returns - nothing
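A minimal sketch of assembling the ‘reporting_args’ dictionary described above. The field names follow this documentation; the title, output path, and table data are invented for illustration, and the final call assumes the natcap.invest package is installed, so it is shown commented out.

```python
# Hypothetical reporting_args for generate_report(). The 'elements' list
# holds a single 'table' element built from a list of row dictionaries.
reporting_args = {
    'title': 'Parcel Summary',        # title of the html page (required)
    'sortable': True,                 # include sorttable.js (optional)
    'out_uri': 'parcel_report.html',  # output destination (required)
    'elements': [
        {
            'type': 'table',            # element type: 'table' | 'head' | 'text'
            'section': 'body',          # place the element in the page body
            'sortable': True,           # columns may be sorted
            'checkbox': False,          # no checkbox column
            'total': False,             # no constant total row
            'data_type': 'dictionary',  # rows supplied as dictionaries
            'data': [
                {'parcel_id': 1, 'area_ha': 12.5},
                {'parcel_id': 2, 'area_ha': 7.3},
            ],
            'columns': [
                {'name': 'parcel_id', 'total': False},
                {'name': 'area_ha', 'total': True},
            ],
        },
    ],
}

# The actual call requires the natcap.invest package:
# from natcap.invest import reporting
# reporting.generate_report(reporting_args)
```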
natcap.invest.reporting.u(string)¶
natcap.invest.reporting.write_html(html_obj, out_uri)¶

Write an html file to ‘out_uri’ from the html elements represented as strings in ‘html_obj’.
- html_obj - a dictionary with two keys, ‘head’ and ‘body’, each pointing to a list of html elements as strings (required). Example: {‘head’: [‘elem_1’, ‘elem_2’, ...], ‘body’: [‘elem_1’, ‘elem_2’, ...]}
- out_uri - a URI for the output html file
returns - nothing
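The behavior described above can be sketched in plain Python: join the ‘head’ and ‘body’ element strings into one html document and write it to ‘out_uri’. This is an illustrative reimplementation under the documented contract, not the InVEST source; the function and file names here are hypothetical.

```python
import os
import tempfile


def write_html_sketch(html_obj, out_uri):
    """Write the 'head' and 'body' element strings in html_obj
    to a complete html file at out_uri."""
    parts = ['<html>', '<head>']
    parts.extend(html_obj['head'])    # head elements, already html strings
    parts.append('</head>')
    parts.append('<body>')
    parts.extend(html_obj['body'])    # body elements, already html strings
    parts.append('</body>')
    parts.append('</html>')
    with open(out_uri, 'w') as out_file:
        out_file.write('\n'.join(parts))


# Usage: write a tiny page to a temporary location.
out_path = os.path.join(tempfile.gettempdir(), 'example_report.html')
write_html_sketch(
    {'head': ['<title>Example</title>'], 'body': ['<p>hello</p>']},
    out_path)
```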