“Early View” is like Christmas!

I’m extremely happy to say that the following papers are now out in early view – the first two papers are the results of Eryn Schineder’s and Kyle Rodman’s thesis work. For those who may not know, Eryn’s work focused on spatial patterns and reference conditions at the Barney Springs site south of Flagstaff, a pure ponderosa pine site on limestone soils that has managed to avoid being harvested. Truly a unique system to study… Kyle’s work also focused on spatial patterns and reference conditions, but in dry mixed-conifer sites along the Mogollon Rim. He presents a variety of reference attributes that will be interesting and applicable to many of you currently working in dry mixed-conifer forests (especially his findings regarding long-term changes in species composition). I’m really proud of these two, and both works are significant contributions to our knowledge regarding HRV and long-term vegetation dynamics. In case you’re wondering, Eryn and Kyle are both currently pursuing PhDs – Eryn with Andrew Larson at Univ. of Montana and Kyle at Univ. of Colorado at Boulder with Tom Veblen.

Lastly, the third paper presents an idea that Daniel Laughlin, Rob Strahan, Dave Huffman and I have been developing for a while now. In this paper we present a functional (species trait-based) approach to restoring resilient ecosystems in light of changing environmental conditions and explore its application in dry mixed-conifer forests (study sites at Black Mesa and on the North Rim of Grand Canyon NP). Really exciting work that I’m happy to have been a part of!!!


Using rLiDAR and FUSION to delineate individual trees through canopy height model segmentation

Let’s face it. Light detection and ranging (LiDAR) is awesome and it’s completely arrived. Currently, the best sources for free nationwide LiDAR datasets are the United States Interagency Elevation Inventory, the USGS Center for LIDAR Information Coordination and Knowledge, and NSF’s OpenTopography. Just as there are quite a few sources for datasets, your choices are equally diverse when it comes to tools for processing LiDAR data and for the detection and delineation of individual trees from LiDAR (see Jakubowski et al. 2013 for a nice review). Personally, I use a combination of FUSION, Global Mapper’s LiDAR Module, LiForest’s implementation of Li et al.’s 2012 point cloud segmentation method, Swetnam and Falk’s 2014 variable area local maxima algorithm (implemented in MATLAB), and the local maximum with a fixed window size algorithm implemented in rLiDAR by Carlos Alberto Silva*.

The following is a worked example using a forested LiDAR dataset: we use FUSION (called from R) to build a CHM, then process the resulting output using rLiDAR*. You need to install FUSION from here, and I assume it’s installed in “C:/FUSION/”. I hope you find the script useful…

# First things first, we need to set up some directories to keep the raw data
# separate from that produced by the analysis. I basically have an input (.las)
# and output (.las, .dtm, etc) directory in a dropbox folder.
mainDir <- "C:/Dropbox/LiDAR_Prj"
inDir <- "inputDirectory"
outDir<- "outputDirectory"
dir.create(file.path(mainDir, outDir), showWarnings = FALSE)

# First, we process the data in FUSION

# Read in the .las file and use FUSION to produce a .las file of points that
# approximate the ground's surface (bare-earth points).
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=94&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "groundfilter.exe"),
"/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
file.path(mainDir, outDir, "Example_GroundPts.las"),
1,
file.path(mainDir, inDir, "Example.las"),
sep=" "))

# Next we use GridSurfaceCreate to compute the elevation of each grid cell using the
# average elevation of all points within the cell. Check the manual for arguments and usage
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=88&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "gridsurfacecreate.exe"),
file.path(mainDir, outDir, "Example_DEM.dtm"),
"1 M M 1 12 2 2",
file.path(mainDir, outDir, "Example_GroundPts.las"),
sep=" "))

# Next we use CanopyModel to create a canopy surface model using a LIDAR point cloud.
# By default, the algorithm used by CanopyModel assigns the elevation of the highest return within
# each grid cell to the grid cell center.
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=32&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "canopymodel.exe"),
paste("/ground:",file.path(mainDir, outDir, "Example_DEM.dtm"), sep=""),
file.path(mainDir, outDir, "Example_CHM.dtm"),
"1 M M 1 12 2 2",
file.path(mainDir, inDir, "Example.las"),
sep=" "))

# Lastly, we use DTM2ASCII to convert the data stored in the PLANS DTM format into an
# ASCII raster file. Such files can be imported into GIS software such as ArcGIS or QGIS.
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=88&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "dtm2ascii.exe"),
file.path(mainDir, outDir, "Example_CHM.dtm"),
file.path(mainDir, outDir, "Example_CHM.asc"),
sep=" "))

# Second, we process the resulting CHM in rLiDAR

#install.packages("rLiDAR", type="source")
#install.packages("raster", dependencies = TRUE)
library(rLiDAR)
library(raster)
library(rgeos)

# Import the LiDAR-derived CHM file that we just made in the above section and plot it
chm<-raster(file.path(mainDir, outDir, "Example_CHM.asc"))
plot(chm)

Here’s the resulting canopy height model

# If we want, rLiDAR can smooth the CHM using a Gaussian filter
# Set the window size (ws)
ws<-3 # dimension 3x3
# Set the filter type
filter<-"Gaussian"
# filter<-"mean"
# Set the sigma value for the Gaussian filter
sigma<-0.5
sCHM<-CHMsmoothing(chm, filter, ws, sigma)
plot(sCHM)

#########################
# Edited - Have a look at the comments section of this post to see the discussion that
# triggered this fix... basically, before we can call FindTreesCHM(), it needs to be
# replaced with an edited, custom version. Why? Well, as of 04/06/2016 the FindTreesCHM
# function has a couple of bugs - namely, one from a call to SpatialPoints() without
# specifying the projection and one in the call to colnames() (really, it's cbind(),
# but who's counting)
FindTreesCHM <- function(chm, fws = 5, minht = 1.37) {
  if (class(chm)[1] != "RasterLayer") {
    chm <- raster(chm)
  }
  if (class(fws) != "numeric") {
    stop("The fws parameter is invalid. It is not a numeric input")
  }
  if (class(minht) != "numeric") {
    stop("The minht parameter is invalid. It is not a numeric input")
  }
  w <- matrix(rep(1, fws * fws), nrow = fws, ncol = fws)
  chm[chm < minht] <- NA
  f <- function(chm) max(chm)
  rlocalmax <- focal(chm, fun = f, w = w, pad = TRUE, padValue = NA)
  setNull <- chm == rlocalmax
  XYmax <- SpatialPoints(xyFromCell(setNull, Which(setNull == 1, cells = TRUE)),
                         proj4string = crs(chm))   # Edited - projection now specified
  htExtract <- over(XYmax, as(chm, "SpatialGridDataFrame"))
  treeList <- cbind(coordinates(XYmax), htExtract) # Edited - cbind() the coordinates and heights
  colnames(treeList) <- c("x", "y", "height")
  return(treeList)
}
#########################

# Setting the fixed window size (fws)
fws<-3 # dimension 3x3
# Set the specified height above ground for the detection break
minht<-2.0
# Create the individual tree detection list and summarize it
loc<-FindTreesCHM(sCHM, fws, minht)
summary(loc)

# Set the maxcrown parameter - maximum individual tree crown radius expected
maxcrown=10.0
# Set the exclusion parameter - A single value from 0 to 1 that represents the % of pixel
# exclusion. E.g. a value of 0.5 will exclude all of the pixels for a single tree that has
# a height value of less than 50% of the maximum height from the same tree. Default value is 0.3.
exclusion=0.1
# Compute canopy areas for the individual tree detections
canopy<-ForestCAS(sCHM, loc, maxcrown, exclusion)

# Retrieve the boundary for individual tree detection and canopy area calculation
boundaryTrees<-canopy[[1]]

# Retrieve the list of individual trees detected for canopy area calculation
canopyList<-canopy[[2]] # list of ground-projected areas of individual tree canopies
summary(canopyList)     # summary
canopyList$crad<-sqrt(canopyList$ca/pi) # Compute the corresponding crown radii

# Write the output to a CSV file. This will make it easier for others to bring the
# results into ArcGIS or QGIS.
write.csv(canopyList, file.path(mainDir, outDir, "Example_Out.csv"))
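
If your collaborators would rather have a shapefile than a CSV, something like the following should also work. Treat it as a sketch: the EPSG code is a placeholder (use whatever CRS your LiDAR data are actually in), and it assumes you have the rgdal package installed.

# Hedged sketch: export tree tops and crown boundaries as shapefiles
# NOTE: "+init=epsg:26912" (UTM 12N, NAD83) is a placeholder -- substitute your data's CRS
library(rgdal)
prj <- CRS("+init=epsg:26912")
treeTops <- SpatialPointsDataFrame(canopyList[, 1:2], data = canopyList, proj4string = prj)
writeOGR(treeTops, file.path(mainDir, outDir), "Example_TreeTops",
         driver = "ESRI Shapefile", overwrite_layer = TRUE)
# If the crown boundaries come back as bare polygons, attach a minimal data slot first
if (!is(boundaryTrees, "SpatialPolygonsDataFrame")) {
  boundaryTrees <- SpatialPolygonsDataFrame(boundaryTrees,
                                            data = data.frame(id = seq_along(boundaryTrees)),
                                            match.ID = FALSE)
}
writeOGR(boundaryTrees, file.path(mainDir, outDir), "Example_Crowns",
         driver = "ESRI Shapefile", overwrite_layer = TRUE)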

# Plot the results in ggplot

# Let's convert the tree results into spatial points to be used in ggplot
library(sp)
XY<-SpatialPoints(canopyList[,1:2])    # Spatial points
XY<-data.frame(XY) # Converted to a dataframe
# We're not dealing with lat and long, so let's rename the columns
names(XY)[names(XY)=="longitude"]<-"x"
names(XY)[names(XY)=="latitude"]<-"y"

# Rasters are a little problematic, so convert the values into rows in a dataframe with
# the corresponding information
CHMdf <- rasterToPoints(sCHM); CHMdf <- data.frame(CHMdf)
colnames(CHMdf) <- c("X","Y","Ht")
# Build the breaks for plotting
b.chm <- seq(0,50,10)

# Plotting the individual tree canopy boundary over the CHM
library(ggplot2)
ggplot(CHMdf) +
geom_raster(data=CHMdf,aes(X,Y,fill=Ht)) +
scale_fill_gradientn(name="Canopy Height",colours = terrain.colors(length(b.chm))[length(b.chm):1],breaks=b.chm) +
geom_polygon(data = fortify(boundaryTrees), aes(x=long, y=lat,
group = group),colour='black', fill='transparent')+
geom_point(data=XY, aes(x=x, y=y), color="black", shape=3, size=0.5)+
scale_shape_discrete(name = "Tree Locations", labels=c("Tree Centroid","")) +
scale_alpha(range = c(0, 0.5)) +
ggtitle("Location map showing individual trees and associated crown areas \n segmented from a LiDAR-derived canopy height model") +
coord_equal() + theme_bw()
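
If you want to hang on to that figure, ggsave() will write the most recent plot to disk. The dimensions below are just a reasonable guess; tweak to taste.

# Save the map alongside the other outputs (dimensions are arbitrary)
ggsave(file.path(mainDir, outDir, "Example_CHM_TreeMap.png"), width = 8, height = 8, dpi = 300)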

You can download the script from here and here’s the resulting figure.

* – As per Carlos’ suggestion in the comments, here’s how one might cite rLiDAR: “Silva, C.A., Crookston, N.L., Hudak, A.T., and Vierling, L.A. 2015. rLiDAR: An R package for reading, processing and visualizing LiDAR (Light Detection and Ranging) data, version 0.1, accessed Oct. 15 2015.”

 


It’s been a productive year thus far…

I’ve been out of pocket for a while on this blog, but it’s for a good reason. I’ve been writing my butt off! Below are four manuscripts published in the last few months, all of which I was a part of… I’ll let the work speak for itself.

The Economics of Ecological Restoration and Hazardous Fuel Reduction Treatments in the Ponderosa Pine Forest Ecosystem
by M H Taylor, A J Sanchez Meador, Y S Kim, K Rollins, and H Will
Abstract: In this article, we develop a simulation model of the benefits and costs of managing the ponderosa pine forest ecosystem in the southwestern United States. Using the model, we evaluate and compare the economic benefits and costs of ecological restoration and hazardous fuel reduction treatments. Both treatment approaches increase the expected number of low-severity wildfires, which can promote postfire rehabilitation. Hazardous fuel reduction treatments are likely to reduce expected wildfire suppression costs, but not enough to offset the costs of implementing treatments. Conversely, ecological restoration treatments do not necessarily reduce expected wild-fire suppression costs but fully restore the ecosystem in more than half of the simulation runs, which lowers the need for future fire suppression and reduces the chance of conversion to nonforest, alternative stable states. We find that the choice between hazardous fuel reduction and ecological treatments will depend on the management objective being pursued, as well as on site-specific factors such as the wildfire return interval and the economic value of biomass removed.
Five-year post-restoration conditions and simulated climate-change trajectories in a warm/dry mixed-conifer forest, southwestern Colorado, USA
by M T Stoddard, A J Sánchez Meador, P Z Fulé, and J E Korb
Abstract: Some warm/dry mixed-conifer forests are at increasing risk of uncharacteristically large, high-severity fires. As a result, managers have begun ecological restoration efforts using treatments such as mechanical thinning and prescribed fire. Empirical information on the long-term impacts of these treatments is limited, especially in light of potential climate change. We assessed changes in forest structure and composition five years following three alternative restoration treatments in a warm/dry mixed-conifer forest: (1) thin/burn, (2) prescribed burn, and (3) control. We used the Climate-Forest Vegetation Simulator (Climate-FVS) model to quantify potential forest trajectories under alternative climate scenarios. Five years following treatments, changes in forest structure were similar to initial post-treatment conditions, with thin/burn being the only treatment to shift and maintain forest structure and composition within historical reference conditions. By 2013, the thin/burn had reduced basal area (11.3 m2 ha-1) and tree density (117.2 trees ha-1) by 56% and 79%, respectively, compared to pre-treatment values. In the burn, basal area (20.5 m2 ha-1) and tree density (316.6 trees ha-1) were reduced by 20% and 35%, respectively, from 2002 to 2013. Mortality of large ponderosa pine trees (the most fire-resistant species) throughout the duration of the experiment averaged 6% in the burn compared to 16% in the thin/burn treatment. Changes five years following treatments were largely due to increases in sprouting species. Shrub and sapling densities were approximately two and three times higher, respectively, in the thin/burn compared to the burn and control, and were dominated by sprouting oak and aspen. Under climate simulations, the thin/burn was more resilient in maintaining forest conditions compared to the burn and control, which approached meager forest conditions (3–4 m2 ha-1). These results indicate that restoration treatments that include both thinning and burning can maintain forest integrity over the next few decades.
Ecological restoration and fine-scale forest structure regulation in southwestern ponderosa pine forests
by M C Tuten, A J Sánchez Meador, and P Z Fulé
Abstract: Fine-scale forest patterns are an important component of forest ecosystem complexity, and spatial pattern objectives are an increasingly common component of contemporary silvicultural prescriptions in dry, fire-adapted forests of North America. Despite their importance, questions remain regarding the assessment of silvicultural treatments designed to meet spatial objectives. We initiated a replicated silvicultural assessment of two forest management approaches commonly applied in dense ponderosa pine forests of the Southwest United States: historical evidence-based ecological restoration guidelines (ERG) and northern goshawk (Accipiter gentilis) foraging area management recommendations (GMR). We compared stand-level characteristics, global tree location point patterns, and tree group-level attributes resulting from the marking of these approaches to current forest conditions and patterns of historical forest remnants in six 2.02-ha stem-mapped plots. We also assessed group-level Vegetative Structural Stage (VSS; a classification of fine-scale forest structural development used to regulate fine-scale spatial patterns in these forests). ERG- and GMR-based treatments significantly reduced densities and basal area from the current condition, but did not significantly differ in density from historical forest remnant estimates. GMR-based treatments retained greater stand-level basal area than ERG-based treatments, primarily in large, 28–48 cm tree diameter classes. GMR-based treatments approximated global tree location point patterns of forest remnants better than ERG-based treatments, primarily due to a 5–6 m minimum spacing of residual trees, but also likely due to specific aspects of ERG-based marking techniques. Despite this difference, both treatments resulted in group-level characteristics similar to those exhibited by historical forest remnants. Both treatments significantly altered group-level VSS area and reduced variation of tree diameters within classified VSS groups.
Post-fire ponderosa pine regeneration with and without planting in Arizona and New Mexico
by J Ouzts, T Kolb, D Huffman, and A J Sánchez Meador
Abstract: Forest fires are increasing in size and severity globally, yet the roles of natural and artificial regeneration in promoting forest recovery are poorly understood. Post-fire regeneration of ponderosa pine (Pinus ponderosa, Lawson and C. Lawson) in the southwestern U.S. is slow, episodic, and difficult to predict. Planting of ponderosa pine after wildfire may accelerate reforestation, but little is known about survival of plantings and the amount of post-fire natural regeneration. We compared ponderosa pine regeneration between paired planted and unplanted plots at eight sites in Arizona and New Mexico that recently (2002–2005) burned severely. Two sites had no natural regeneration and no survival of planted seedlings. Seedling presence increased with number of years since burning across all plots, was positively associated with forb and litter cover on planted plots, and was positively associated with litter cover on unplanted plots. Survival of planted seedlings, measured five to eight years after planting, averaged 25% (SE = 8) and varied from 0% to 70% across sites resulting in seedling densities of 0–521 trees ha-1. Based on a projected 44% survival of seedlings to mature trees and target density of mature trees determined by historical range of variability and ecological restoration principles, four of eight sites have a seedling density in planted plots (125–240 ha-1) that will produce a density of mature trees (55–106 ha-1) close to desired levels, whereas seedlings are currently deficient at three planted sites, and in surplus at one site, which had abundant natural regeneration. Natural regeneration in unplanted plots during the first decade after burning produced seedling densities inconsistent with desired numbers of mature trees. Natural regeneration in unplanted plots produced less than 33 seedlings ha-1 at seven of eight sites, but produced 1433 seedlings ha-1 at one high-elevation site that supported a more mesic vegetation community before burning than the other sites. Our results show that current practices for planting ponderosa pine after severe fires in Arizona and New Mexico produce desired numbers of seedlings in approximately half of all projects, whereas natural regeneration rarely does within the first decade after burning.

 


A little shameless press…


Last month’s Journal of Forestry and the Forestry Source Newspaper featuring our article “Implications of diameter caps on multiple forest resource responses in the context of 4FRI: Results from the Forest Vegetation Simulator” and an interview with Dr. Sanchez Meador.

 

Last month, the Forestry Source (the Society of American Foresters’ newspaper) decided to feature one of our recently published articles, and the resulting news article included an interview with yours truly. I’ll be the first to admit that while I’m a huge extrovert, I hate giving interviews. The editor did a great job and I’m happy with the way the article turned out. If you’d like to read the news article you can download it here, and if you’d like to read the manuscript, which appeared in the Journal of Forestry that same month, you can find it here.

Here’s the citation for the manuscript: Sánchez Meador, A.J., Waring, K.M., and E.L. Kalies. 2015. Implications of diameter caps on multiple forest resource responses in the context of 4FRI: Results from the Forest Vegetation Simulator. Journal of Forestry 113(2): 219-230.


My script to install the “top” R packages

Here’s a script that I use to query the CRAN package download logs and figure out which packages are the “top” packages being used/downloaded. It borrows heavily from this post, which is badass… User be wary: it downloads all of the logs for the specified date range and creates a data.table to house this information (if you specify a large timeframe, this thing will be HUGE). To get around that, I randomly sample the data.table for half of the entries, and at the end I’ve got a section to install these packages. However, it’s currently commented out.

Geek+1

## Inspired by and heavily dependent on code from Felix Schönbrodt
## http://www.nicebread.de/finally-tracking-cran-packages-downloads/

## ======================================================================
## Step 1: Parameterize the script with dates and the number of packages
## that we're interested in...
## ======================================================================

# My advice would be to set this to a day or a week. If you do 6 months like I've
# done here, you better have the memory to support it!
start <- as.Date('2014-09-01')
end <- as.Date('2015-02-28')

# How many "top" packages are we interested in?
top.x <- 20

## ======================================================================
## Step 2: Download all log files for each week
## ======================================================================

# Here's an easy way to get all the URLs in R
all_days <- seq(start, end, by = 'day')

# If we were to look we'd see a strong weekly pattern in the downloads,
# with Saturday and Sunday having much fewer downloads than other days. This is
# not surprising since we know that the countries which use R don't work these
# days. Let's just look at MWF to be safe...
weekdays(all_days)
days.To.Keep<- c("Monday", "Wednesday", "Friday")
all_days <- subset(all_days, weekdays(all_days) %in% days.To.Keep)
weekdays(all_days)

# only download the files you don't have:
dir.create("CRANlogs", showWarnings = FALSE)
missing_files <- setdiff(as.character(all_days), tools::file_path_sans_ext(dir("CRANlogs"), TRUE))

# Build the URLs from missing_files (not all_days) so that urls[i] and
# missing_files[i] always refer to the same day's log
year <- as.POSIXlt(as.Date(missing_files))$year + 1900
urls <- paste0('http://cran-logs.rstudio.com/', year, '/', missing_files, '.csv.gz')

for (i in seq_along(missing_files)) {
  print(paste0(i, "/", length(missing_files)))
  download.file(urls[i], paste0('CRANlogs/', missing_files[i], '.csv.gz'))
}

## ======================================================================
## Step 3: Load single data files into one big data.table and then clean
## up the files (delete them) once we're done
## ======================================================================

file_list <- list.files("CRANlogs", full.names=TRUE)

logs <- list()
for (file in file_list) {
  print(paste("Reading", file, "..."))
  logs[[file]] <- read.table(file, header = TRUE, sep = ",", quote = "\"",
                             dec = ".", fill = TRUE, comment.char = "", as.is=TRUE)
}

# rbind all of the files together
library(data.table)
dat <- rbindlist(logs)
# logs will likely be huge, so unless you have memory for days, we best delete it
# and free up that memory
#rm(logs); gc(verbose=T);

#Let's make this data.table smaller, to save memory, and randomly sample half of it
dat<-dat[sample(nrow(dat), ceiling(0.5*nrow(dat))), ]

# define the remaining variable types
dat[, date:=as.Date(date)]
dat[, package:=factor(package)]
dat[, week:=strftime(as.POSIXlt(date),format="%Y-%W")]

# set the key
setkey(dat, package, date, week)

# Delete the files and their directory (gots to keep our shit clean!!!!)
# Just comment this out if you don't want to delete the files (i.e., you
# might want them for later use)
#unlink("CRANlogs", recursive = TRUE) 

## ======================================================================
## Step 4: Analyze it!
## ======================================================================

library(ggplot2)
library(plyr)

# Overall downloads of packages
d1 <- dat[, length(week), by=package]
d1 <- d1[order(-V1), ]

# Build a vector of package names, to be used later for install.packages
package.names<-as.character(d1$package[1:top.x])
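
# Quick sanity check: peek at the top of the ranked table before plotting
# (V1 is the download count created by the length(week) call above)
head(d1, top.x)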

# plot 1: Compare downloads of "top" packages on a weekly basis
agg1 <- dat[J(package.names), length(unique(ip_id)), by=c("week", "package")]

# Note: V1*2 scales the weekly counts back up to compensate for the 50% subsample above
ggplot(agg1, aes(x=week, y=V1*2, color=package, group=package)) + geom_line(size=1) +
  ylab("Downloads") + theme_bw() +
  theme(axis.text.x  = element_text(angle=90, vjust=0.5))

## ======================================================================
## Step 5: Install them all (plus their dependencies)!
## ======================================================================

# Uncomment this line if you want to install all of the "top" packages
# install.packages(package.names,dep=TRUE)
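
# Or, as an untested sketch, install only the "top" packages you don't already have:
# new.packages <- setdiff(package.names, rownames(installed.packages()))
# if (length(new.packages) > 0) install.packages(new.packages, dep = TRUE)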

A couple of new wildfire-related publications

Over the past year I have had the opportunity to work on several wildfire-related projects, two of which are now available from their publishers. The first focuses on the effectiveness of fuel treatments following the Wallow (2011) fire and the second focuses on long-term forest dynamics under alternative climate and management scenarios following the Rodeo-Chediski (2002) fire. These were both great projects and I think collectively they provide quite a bit of insight into the abilities of managers and agencies to mitigate wildfire effects (during and after) and highlight the effects treatments have on resiliency under different climate scenarios.

Amy E. M. Waltz, Michael T. Stoddard, Elizabeth L. Kalies, Judith D. Springer, David W. Huffman, and A.J. Sánchez Meador. 2014. Effectiveness of fuel reduction treatments: assessing metrics of forest resiliency and wildfire severity after the Wallow Fire, AZ. Forest Ecology and Management 334(15): 43-52. http://dx.doi.org/10.1016/j.foreco.2014.08.026

Abstract: Landscape-scale wildfire has occurred in higher frequencies across the planet. Fuel reduction treatments to fire-adapted systems have been shown to reduce the impact to human values-at-risk. However, few studies have examined if these treatments contribute to ecosystem resilience, or the capacity of a system to absorb perturbation and return to a similar set of structures or processes. We defined short-term metrics of resiliency to test the hypothesis that fuel reduction treatments in mixed conifer forests increased a fire-adapted system’s resiliency to uncharacteristically severe wildfire. In addition, we tested the hypothesis that fuel reduction treatments reduced burn severity, thereby increasing protection for adjacent human communities. We examined a mixed conifer forested landscape in the southwestern U.S. that was burned by a landscape-scale “mega-fire” in 2011; fuel reduction treatments had been established around communities in the 10 years prior to the fire. Fire effects were highly variable in both treated and untreated forests. However, analysis of resiliency metrics showed that: (a) treated units retained a higher proportion of large trees and had post-fire tree densities within the natural range of variability; (b) the understory herbaceous community had significantly higher cover of native grasses in the treated units, but no significant differences in nonnative cover between treated and untreated units; and (c) high-severity patch sizes were significantly larger in untreated stands and covered a larger proportion of the landscape than historical reference conditions. Fire severity, as defined by overstory mortality and basal area loss, was significantly lower in treated units; on average, trees killed per hectare in untreated units was six times the number of trees killed in treated units. Fuel reduction treatments simultaneously reduced fire severity and enhanced short-term metrics of ecosystem resiliency to uncharacteristically severe fire.

Alicia Azpeleta Tarancón, Peter Z. Fulé, Kristen L. Shive, Carolyn H. Sieg, Andrew Sánchez Meador, and Barbara Strom. 2014. Simulating post-wildfire forest trajectories under alternative climate and management scenarios. Ecological Applications 24:1626–1637. http://dx.doi.org/10.1890/13-1787.1

Abstract: Post-fire predictions of forest recovery under future climate change and management actions are necessary for forest managers to make decisions about treatments. We applied the Climate-Forest Vegetation Simulator (Climate-FVS), a new version of a widely used forest management model, to compare alternative climate and management scenarios in a severely burned multispecies forest of Arizona, USA. The incorporation of seven combinations of General Circulation Models (GCM) and emissions scenarios altered long-term (100 years) predictions of future forest condition compared to a No Climate Change (NCC) scenario, which forecast a gradual increase to high levels of forest density and carbon stock. In contrast, emissions scenarios that included continued high greenhouse gas releases led to near-complete deforestation by 2111. GCM-emissions scenario combinations that were less severe reduced forest structure and carbon stock relative to NCC. Fuel reduction treatments that had been applied prior to the severe wildfire did have persistent effects, especially under NCC, but were overwhelmed by increasingly severe climate change. We tested six management strategies aimed at sustaining future forests: prescribed burning at 5, 10, or 20-year intervals, thinning 40% or 60% of stand basal area, and no treatment. Severe climate change led to deforestation under all management regimes, but important differences emerged under the moderate scenarios: treatments that included regular prescribed burning fostered low density, wildfire-resistant forests composed of the naturally dominant species, ponderosa pine. Non-fire treatments under moderate climate change were forecast to become dense and susceptible to severe wildfire, with a shift to dominance by sprouting species. Current U.S. forest management requires modeling of future scenarios but does not mandate consideration of climate change effects. However, this study showed substantial differences in model outputs depending on climate and management actions. Managers should incorporate climate change into the process of analyzing the environmental effects of alternative actions.

FacebooktwitterredditpinterestlinkedinmailFacebooktwitterredditpinterestlinkedinmail

A little R love for my non-R friends…


Source: http://xkcd.com/1064/
There’s even a package to make your figures in XKCD fashion! How awesome is that!??!??! http://stackoverflow.com/questions/12675147/how-can-we-make-xkcd-style-graphs-in-r

***Update*** Maxwell Joseph posted 20+ R tutorials to YouTube for a new undergraduate course in Ecology and Evolutionary Biology at CU, developed by Andrew Martin and Brett Melbourne, which are a nice place to start… ***End Update***

The other day, a colleague/friend sent me a message asking how to go about learning R, and specifically asked about online resources to help her lessen the learning curve. Since I knew she is a SAS user, I had some specific ideas of where I would point her, but I thought I’d also post the guts of my response here…

So you’re interested in converting to R? That’s great and you won’t regret it…. As a former SAS user (and a general “new” R user), I usually suggest the following:

Many people recommend the R in a Nutshell book from O’Reilly. It’s good, but it’s mostly a rehashing of available online help files: http://oreilly.com/catalog/9780596801717

Perhaps the best compilation of online video/tutorial type resources for learning R is this collection compiled by Jeromy Anglim: http://jeromyanglim.blogspot.com/2010/05/videos-on-data-analysis-with-r.html

As a former SAS user, I also suggest you start with Muenchen’s book from Springer: http://www.springer.com/statistics/computational+statistics/book/978-1-4614-0684-6 He has an accompanying website with examples too, which is pretty useful: http://r4stats.com/examples/

It’s not too hard to find preview copies of the above-mentioned texts online, but if you find them useful you really should support the authors and purchase a copy. As you progress, you’ll quickly outgrow any single book, and then I suggest using the true power of R: the available online community and resources. Here are a few suggestions:

  • The R Project homepage. It really should be bookmarked. This is the place to come for official news from the R Project, plus links to documentation, mailing lists, and the official R FAQs
  • StackOverflow. Have a question about R? Search for questions tagged with “r” and you’ll probably find an answer. If not, post your question and I guarantee you’ll have an answer before you know it….
  • R bloggers. This is the first “news” feed I check every morning. It’s my go-to for news, tips, and articles related to R, and is basically a blog aggregator for posts from dozens of R bloggers, including the awesome work from the team at Revolution Analytics
  • #rstats on Twitter. This is pretty self-explanatory (in 140 characters, no less). Just search for the #rstats hashtag
  • If you find yourself still looking and desire some offline reading, the R Project has an extensive list of R books, as does the R Programming Language tag on Amazon.com

Hope this helps!

FacebooktwitterredditpinterestlinkedinmailFacebooktwitterredditpinterestlinkedinmail

Prezi on introducing spatial statistics

Last semester, I provided a couple of guest lectures in Margaret Moore’s Landscape Ecology class on spatial statistics. Spatial statistics, tools that hold a special place in my heart, are commonly used for understanding data distributed in a space where positions and distances have meaning, and they are highly useful in forestry and ecology. This prezi is meant to be a brief introduction, and is expanded upon in my 599 class.
I had completely forgotten to publish the prezi, so here it is…. I hope some of you find it useful.
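
If you’d like to tinker with these ideas in R, here’s a minimal sketch of the kind of analysis the lectures introduce (my example, not one from the prezi), using the spatstat package and its built-in longleaf pine stem map to estimate Ripley’s K:

# A minimal taste of spatial point pattern analysis with spatstat
library(spatstat)
data(longleaf)               # marked point pattern: longleaf pine stem locations + dbh
plot(longleaf, main = "Longleaf pine stem map")
K <- Kest(unmark(longleaf))  # Ripley's K for the unmarked point pattern
plot(K)                      # K(r) above the Poisson curve suggests clustering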


New publication resulting from work that began in Nepal

Some recent work, done almost exclusively through Facebook exchanges on my end, has been published in the most recent issue of the International Journal of Research. This work started while I was in Nepal, and largely consisted of helping my co-authors with analyses and interpretations. While this work is not in my usual focus area, I feel like it’s a good example of how collaborations can stem from the most unlikely of circumstances.

S. Nepal, B.R. Ojha, A.J. Sánchez Meador, S.P. Gaire, and C. Shilpakar. 2014. Effect of gamma rays on germination and photosynthetic pigments of maize (Zea mays L.) inbreds. International Journal of Research. 1(5): 511-545. Download

Here’s the abstract:

This investigation was carried out to determine the effects of gamma radiation on germination and photosynthetic pigments of two maize inbred lines (RML-17 and RML-32). The pure dry seeds were irradiated with variable dosages (200, 250, 300 and 350 Gy) at the rate of 65 cGy/min from a 60Co source. The results showed that there was a significant decreasing effect of the gamma rays on the final germination percentage (FGP), but the rate of germination was not significantly affected by radiation dosages. However, a decreasing trend was observed in general for the germination rate. The higher dose (350 Gy) of gamma rays was found to have the maximum inhibitory effect on FGP for both inbreds (31.2% for line RML-17 and 33.3% for RML-32). The inhibitory effect of gamma rays was seen for the photosynthetic pigments, especially chlorophyll-a [minimum at 350 Gy (6.25 mg/gm FW) for RML-32]. The non-irradiated samples in both inbreds exhibited higher chlorophyll-a content (11.056 mg/gm FW for RML-17 and 11.74 mg/gm FW for RML-32). The effect of gamma rays on chlorophyll-b content was not significant, but a decreasing effect was seen at higher radiation dosages. The total chlorophyll content was found to be significantly affected by dosage for line RML-32; it was maximum (21.25 mg/gm FW) for the non-irradiated sample, with the minimum total chlorophyll content occurring at 350 Gy (13.47 mg/gm FW). Furthermore, the concentration of chlorophyll-a was higher than chlorophyll-b in both irradiated and non-irradiated plants, except at 350 Gy for line RML-32, where chlorophyll-b (7.21 mg/gm FW) was found to be maximum compared to chlorophyll-a (6.25 mg/gm FW). The overall effect of the gamma rays was inhibitory for all the traits under study.


Final thoughts on my F2F assignment in Nepal

When I decided to volunteer with Winrock International’s Farmer-to-Farmer program, I wasn’t sure what I was getting myself into, and while I was assured by a fellow volunteer that I would be taken care of, I never anticipated how great this opportunity could be. Upon arrival and throughout my time in Nepal, Winrock’s small (maybe 5 people) but dedicated team made me feel right at home, making it easier to focus on my task at hand – increasing the data handling and analysis capacity of young, experienced faculty and selected post-graduate students through the application of R.


Chhan and family invited me to have dinner at their home while in Kathmandu. It was, hands down, the best meal I ate while in Nepal.

My primary contacts in Nepal were Dr. Vrigu Duwadi and Mr. Chhan Bhattachan with Winrock, and Dr. Mohan Sharma, Professor and Continuing Education Center Director with the Agriculture and Forestry University of Nepal. I can’t say enough good things about these individuals. All of my activities while on assignment were coordinated through this team (with the invaluable inclusion of Krishna, our driver) to assure that we were delivering information that was pertinent to the audience and that facilities and logistics essential to the success of the training were available. In addition, all of these individuals had a hand in making sure I also got to tour campus, visit cultural and natural resource sites, and generally ensure that I received the best Nepali experience possible (which I think comes naturally to them). I have to mention a special thanks to my friend, Chhan, who spent almost every waking hour with me and was largely responsible for how I experienced Nepal. You are a competent statistician, an intellectual, and an exceptional host.

To some degree, I approached this assignment expecting very little, and being totally prepared to “wing it” if need be. Everything I read about traveling to Nepal said that most non-Nepali visit to trek in the Himalayas, but it’s generally not the place you just up and decide to visit. Essentially, most people traveling here plan, save, and prepare for months. This was a little unnerving for me, but I think it added to my experience and alleviated many preconceptions or unreal expectations.

Given all of these factors, I’ll summarize my time in Nepal by listing a few things I learned on this trip (in no order of importance and they’re not all serious):

  1. We, as Americans, should be grateful that we have clean water and a dependable power supply. We should also be thankful we have food security, transportation safety regulations, and a well-developed sanitation system.
  2. I’ve said this before – that a good driver is worth their weight in gold – but the exchange rate went up after my trip to Nepal, especially during my time in Kathmandu. A good driver is worth 6x their weight in gold.
  3. I really do love American food (and craft beer) and missed it greatly. I especially grew tired of lagers and missed American porters and brown ales. I also missed eating raw greens…
  4. Kathmandu, and some of the larger cities like Bharatpur, are extremely polluted. It doesn’t take away from their beauty; they’re just polluted. People here at home have been asking what it was like to see the Himalayas, and I have to explain that due to the smog, I never saw them. Not once. It’s sad, but the Nepali people’s sewage and waste infrastructure has failed to keep up with their urban expansion, leaving them with a serious problem. Crossing the Bagmati River in Kathmandu revealed piles of floating garbage (not a new issue), and the lack of pollution or emission standards is readily apparent as mini- and micro-buses, bikes, and all forms of vehicles constantly pump black fumes into the atmosphere.
  5. Gender equality (albeit still a work in progress) is a beautiful thing, and it’s good to see Nepal making positive strides in this arena.
  6. It’s impossible to talk about the effects of disturbances in mixed-conifer systems when the other party is talking about the importance of increased crop yields to feed the hungry or integrated pest management to reduce the impact of pesticides on human health (think DDT concerns in the US, circa 1940s, only with humans). Some of the professionals I spoke with expressed interest in deforestation and land degradation, largely anthropogenic in nature, so I was able to see how my work might apply there… but it was a stretch. I also saw and read about numerous wildfires that were burning in community forests and nearby National Parks, but few seemed to think it was an “interesting” issue.
  7. The United States of America is not as cool as it thinks it is… we have no native monkeys for crying out loud! Two words – Rhesus macaque
  8. I loved “having tea” and now see how it facilitates conversation and idea sharing. I wasn’t prepared for the fact that “having tea” doesn’t mean you’ll actually be drinking tea. It could be eating dal, having cookies and coffee, or any variety of things. Essentially, it’s a break and an excuse to chat.
  9. Apparently, everyone outside of Kathmandu has a water buffalo or two. I don’t think they “own” them.
  10. The widespread use of the internet and availability of information at a moment’s notice has changed our lives forever. Not everyone has this luxury. They might have smart phones and access to the internet, but I don’t think everyone uses it to empower and educate themselves quite like I (we?) do. I hope I’m wrong on this one… My time in Nepal assured me that the people there have the intellectual capacity, but resource limitations constrain how they might achieve success.
  11. Facebook really has made the world smaller. I think my friend list doubled after this trip and I’ve corresponded with several participants over pictures, data, analysis, and all sorts of things.
  12. In a country where more than 70 percent of the population depends on agriculture for its livelihood, Nepal has done a superb job of recognizing the importance of community management and conservation of its forests. Bravo!
  13. Nepal’s flag is the only national flag in the world that is not rectangular in shape and is considered to be the most mathematical flag. Hell yeah! Go Math!
  14. The Nepali love football, but they LOVE cricket!
