Sampling trip to the North Kaibab RD

In my last post, I put up a time lapse video from the Centennial Forest here near Flagstaff, where my crew was sampling ponderosa pine for biomass and volume. The work is part of a larger, national FIA-funded project in which NAU’s Wood Science and Forest Biometrics lab is working with federal and state cooperators to collect tree data for national-scale biomass and carbon stock estimators for forests of the United States.

Anyway, later that same week, Phil Radtke and a crew from Virginia Tech’s Department of Forest Resources and Environmental Conservation arrived, and after working a day here in town we all headed up to the North Kaibab to do some serious sampling. The crew visited Fire Point, had milkshakes at Jacob Lake, and sampled 18 trees of various species. More information on this project can be found here: http://www.legacytreedata.org

So here’s the time lapse of Phil and crew processing a couple of trees… about five hours’ worth of work in just under two minutes!


Video from Andrew Sánchez Meador on Vimeo.


Time lapse video of whole tree destructive sampling

Through a 1-min time-lapse of 5 hrs of work, this video shows what the process of felling, processing, scaling and weighing a tree looks like …

Video from Andrew Sanchez Meador on Vimeo.


Flagstaff – Then and Now

I’ve been working on a repeat aerial photography project with NAU’s Cline Library (this is the second part; the first part is here), and below is an additional proof-of-concept element for a proposed Hanks Internship that I’m hoping to find a student to work on this fall. The images on the left are based on 0.5 m resampled orthoimages of Flagstaff taken in 1959 by Andre M. Faure. The images on the right are the corresponding views (same resolution, same location) from 2014. The viewer is built on Jan Pieter Waagmeester’s Leaflet.Sync plugin, and both image sets (1959 & 2014) are served up by Mapbox.

[Interactive viewer: 1959 (left) | 2014 (right)]

Click here to open the display as fullscreen!
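For anyone wanting to roll their own: here’s a minimal sketch of a similar synced side-by-side viewer built entirely in R. The live display above uses the Leaflet.Sync JavaScript plugin directly; the R leafsync package provides comparable linked panning and zooming. Note that the Mapbox style URLs and token below are placeholders, not the ones behind the actual viewer.

# Two Leaflet maps pointed at different Mapbox tile sets, synced in R
library(leaflet)   # interactive maps
library(leafsync)  # sync() for linked panning/zooming

tiles1959 <- "https://api.mapbox.com/styles/v1/USER/STYLE_1959/tiles/{z}/{x}/{y}?access_token=TOKEN"  # placeholder
tiles2014 <- "https://api.mapbox.com/styles/v1/USER/STYLE_2014/tiles/{z}/{x}/{y}?access_token=TOKEN"  # placeholder

m1959 <- leaflet() %>% addTiles(urlTemplate = tiles1959) %>%
  setView(lng = -111.6513, lat = 35.1983, zoom = 14)  # downtown Flagstaff
m2014 <- leaflet() %>% addTiles(urlTemplate = tiles2014) %>%
  setView(lng = -111.6513, lat = 35.1983, zoom = 14)

sync(m1959, m2014)  # pan or zoom one pane and the other follows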


Flagstaff in 3D – circa 1959 via A.M. Faure and Cline Library

The following is something I produced as a proof of concept for a proposed Hanks Internship with Cline Library this fall (if you know of any students, please send them my way!). The scene was produced from 122 aerial photographs taken by Andre M. Faure in 1959 and covers the majority of Flagstaff, AZ. Faure was a city planner for Tucson from 1941 to 1968. Prior to his arrival in Tucson, he served as a planning consultant in Connecticut and a town planner in New Jersey. He also worked with the City of Flagstaff and Coconino County on various projects in the 1940s and 1950s, including the aerial photographs of Flagstaff, Williams, and Sedona that he used for city planning in 1959; those images are available for online viewing in the Colorado Plateau Digital Archives.

Flagstaff – 1959

The Dorothy T. and James J. Hanks Cline Library Endowment supports Northern Arizona University students for research in repeat photography. A primary goal is to locate and document camera stations of photographs held by Special Collections and Archives, with emphasis on images from the Colorado Plateau. Cline Library Hanks Scholars enhance the library’s photographic collections by increasing knowledge and discovery in the natural or social sciences. Hanks Scholars are given a unique opportunity to develop an appreciation of the value of historic photographs and repeat photography. NAU’s Special Collections and Archives is the official repository for the James J. Hanks Collection.


“Early View” is like Christmas!

I’m extremely happy to say that the following papers are now out in early view – the first two are the results of Eryn Schneider’s and Kyle Rodman’s thesis work. For those who may not know, Eryn’s work focused on spatial patterns and reference conditions at the Barney Springs site south of Flagstaff, a pure ponderosa pine site on limestone soils that has managed to avoid being harvested. Truly a unique system to study… Kyle’s work also focused on spatial patterns and reference conditions, but in dry mixed-conifer sites along the Mogollon Rim. He presents a variety of reference attributes that will be interesting and applicable to many of you currently working in dry mixed-conifer forests (especially his findings regarding long-term changes in species composition). I’m really proud of these two, and both works are significant contributions to our knowledge regarding HRV and long-term vegetation dynamics. In case you’re wondering, Eryn and Kyle are both currently pursuing PhDs – Eryn with Andrew Larson at Univ. of Montana, and Kyle at Univ. of Colorado Boulder with Tom Veblen.

Lastly, the third paper presents an idea that Daniel Laughlin, Rob Strahan, Dave Huffman, and I have been developing for a while now. In this paper we present a functional (species trait-based) approach to restoring resilient ecosystems in light of changing environmental conditions and explore its application in dry mixed-conifer forests (study sites at Black Mesa and on the North Rim of Grand Canyon NP). Really exciting work that I’m happy to have been a part of!


Using rLiDAR and FUSION to delineate individual trees through canopy height model segmentation

Let’s face it: light detection and ranging (LiDAR) is awesome, and it has completely arrived. Currently, the best sources for free nationwide LiDAR datasets are the United States Interagency Elevation Inventory, the USGS Center for LIDAR Information Coordination and Knowledge, and NSF’s OpenTopography. Just as there are quite a few sources for datasets, your choices are equally diverse when it comes to tools for processing LiDAR data and for the detection and delineation of individual trees (see Jakubowski et al. 2013 for a nice review). Personally, I use a combination of FUSION, Global Mapper’s LiDAR Module, LiForest’s implementation of Li et al.’s (2012) point cloud segmentation method, Swetnam and Falk’s (2014) variable-area local maxima algorithm (implemented in MATLAB), and the fixed-window local maximum algorithm implemented in rLiDAR by Carlos Alberto Silva*.

The following is a worked example that uses a forested LiDAR dataset and FUSION (called from R) to build a CHM, then processes the resulting output using rLiDAR*. You’ll need to install FUSION from here; I also assume it’s installed in “C:/FUSION/”. I hope you find the script useful…

# First things first, we need to set up some directories to keep the raw data 
# separate from that produced by the analysis. I basically have an input (.las) 
# and output (.las, .dtm, etc) directory in a dropbox folder.     
mainDir <- "C:/Dropbox/LiDAR_Prj"
inDir <- "inputDirectory"
outDir<- "outputDirectory"
dir.create(file.path(mainDir, outDir), showWarnings = FALSE)
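
# Optional sanity check (an assumption worth verifying): confirm the FUSION
# executables are actually where this script expects them ("C:/FUSION/")
# before we start shelling out to them below.
stopifnot(file.exists(file.path("C:", "FUSION", "groundfilter.exe")))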

# First, we process the data in FUSION

# Read in the .las file and use FUSION to produce a .las file of points that 
# approximate the ground's surface (bare-earth points). 
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=94&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "groundfilter.exe"),
       "/gparam:0 /wparam:1 /tolerance:1 /iterations:10",
       file.path(mainDir, outDir, "Example_GroundPts.las"),
       1,
       file.path(mainDir, inDir, "Example.las"),
       sep=" "))

# Next we use GridSurfaceCreate to compute the elevation of each grid cell using the 
# average elevation of all points within the cell. Check the manual for arguments and usage 
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=88&zoom=auto,70,720 
system(paste(file.path("C:","Fusion", "gridsurfacecreate.exe"),
       file.path(mainDir, outDir, "Example.dtm"),
       "1 M M 1 12 2 2",
       file.path(mainDir, outDir, "Example_GroundPts.las"),
       sep=" "))
	   
# Next we use CanopyModel to create a canopy surface model using a LIDAR point cloud. 
# By default, the algorithm used by CanopyModel assigns the elevation of the highest return within 
# each grid cell to the grid cell center.	   
#http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=32&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "canopymodel.exe"),
       paste("/ground:",file.path(mainDir, outDir, "Example.dtm"), sep=""),
       file.path(mainDir, outDir, "Example.dtm"),
       "1 M M 1 12 2 2",
       file.path(mainDir, inDir, "Example.las"),
       sep=" "))

# Lastly, we use DTM2ASCII to convert the data stored in the PLANS DTM format into an ASCII
# raster file. Such files can be imported into GIS software such as ArcGIS or QGIS.
# http://forsys.cfr.washington.edu/fusion/FUSION_manual.pdf#page=88&zoom=auto,70,720
system(paste(file.path("C:","Fusion", "dtm2ascii.exe"),
       file.path(mainDir, outDir, "Example.dtm"),
       file.path(mainDir, outDir, "Example.asc"),
       sep=" "))

# Second, we process the resulting CHM in rLiDAR

#install.packages("rLiDAR", type="source")
#install.packages("raster", dependencies = TRUE)
library(rLiDAR)
library(raster)
library(rgeos)

# Import the LiDAR-derived CHM file that we just made in the above section and plot it
chm<-raster(file.path(mainDir, outDir, "Example.asc"))
plot(chm)

Here’s the resulting canopy height model:

# If we want, rLiDAR can smooth the CHM using a Gaussian filter
# Set the window size (ws)
ws<-3 # dimension 3x3
# Set the filter type
filter<-"Gaussian"
# filter<-"mean"
# Set the sigma value for the Gaussian filter
sigma<-0.5
sCHM<-CHMsmoothing(chm, filter, ws, sigma) 
plot(sCHM)

#########################
# Edited - Have a look at the comments section of this post to see the discussion that triggered
# this fix... basically, before we can call FindTreesCHM(), it needs to be replaced with an edited,
# custom version. Why? Well, as of 04/06/2016 the FindTreesCHM function has a couple of bugs - one
# from a call to SpatialPoints() without specifying the projection, and one in the call to
# colnames() (really, it's cbind(), but who's counting).
FindTreesCHM<-function(chm, fws = 5, minht = 1.37) 
{
    if (class(chm)[1] != "RasterLayer") {
        chm <- raster(chm)
    }
    if (class(fws) != "numeric") {
        stop("The fws parameter is invalid. It is not a numeric input")
    }
    if (class(minht) != "numeric") {
        stop("The minht parameter is invalid. It is not a numeric input")
    }
    w <- matrix(c(rep(1, fws * fws)), nrow = fws, ncol = fws)
    chm[chm < minht] <- NA
    f <- function(chm) max(chm)
    rlocalmax <- focal(chm, fun = f, w = w, pad = TRUE, padValue = NA)
    setNull <- chm == rlocalmax
    XYmax <- SpatialPoints(xyFromCell(setNull, Which(setNull == 
        1, cells = TRUE)), proj4string = crs(chm))                # Edited
    htExtract <- over(XYmax, as(chm, "SpatialGridDataFrame"))
    treeList <- cbind(coordinates(XYmax), htExtract)              # Edited
    colnames(treeList) <- c("x", "y", "height")
    return(treeList)
}
#########################

# Set the fixed window size (fws)
fws<-3 # dimension 3x3
# Set the specified height above ground for the detection break
minht<-2.0
# Create the individual tree detection list and summarize it
loc<-FindTreesCHM(sCHM, fws, minht)
summary(loc)

# Set the maxcrown parameter - maximum individual tree crown radius expected
maxcrown=10.0
# Set the exclusion parameter - a single value from 0 to 1 representing the percent of pixels
# to exclude. E.g., a value of 0.5 will exclude all pixels for a single tree that have a height
# value of less than 50% of the maximum height from the same tree. Default value is 0.3.
exclusion=0.1
# Compute canopy areas for the individual tree detections
canopy<-ForestCAS(sCHM, loc, maxcrown, exclusion)

# Retrieve the boundary for individual tree detection and canopy area calculation
boundaryTrees<-canopy[[1]]

# Retrieve the list of individual trees detected for canopy area calculation
canopyList<-canopy[[2]] # list of ground-projected areas of individual tree canopies
summary(canopyList)     # summary
canopyList$crad<-sqrt(canopyList$ca/pi) # Compute the corresponding crown radii (assumes circular crowns)

# Write the output to a CSV file. This will make bringing it into ArcGIS or QGIS by others easier
write.csv(canopyList, file.path(mainDir, outDir, "Example_Out.csv"))
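
# Optionally (a hedged extra step, not part of the original workflow), the crown
# polygons themselves can also go to GIS; raster's shapefile() writes Spatial*
# objects out as shapefiles. This assumes boundaryTrees comes back from
# ForestCAS() as a SpatialPolygonsDataFrame (wrap it in one first if not).
shapefile(boundaryTrees, file.path(mainDir, outDir, "Example_Crowns.shp"), overwrite=TRUE)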

# Plot the results in ggplot

# Let's convert the tree results into spatial points to be used in ggplot
library(sp)
XY<-SpatialPoints(canopyList[,1:2])    # Spatial points
XY<-data.frame(XY) # Converted to a dataframe
# We're not dealing with lat and long, so let's rename the columns
names(XY)[names(XY)=="longitude"]<-"x"
names(XY)[names(XY)=="latitude"]<-"y"

# Rasters are a little problematic, so convert the values into rows in a dataframe with 
# the corresponding information
CHMdf <- rasterToPoints(sCHM); CHMdf <- data.frame(CHMdf)
colnames(CHMdf) <- c("X","Y","Ht")
# Build the breaks for plotting
b.chm <- seq(0,50,10)

# Plotting the individual tree canopy boundary over the CHM
library(ggplot2)
ggplot(CHMdf) +
  geom_raster(data=CHMdf,aes(X,Y,fill=Ht)) +
  scale_fill_gradientn(name="Canopy Height",colours = terrain.colors(length(b.chm))[length(b.chm):1],breaks=b.chm) +
  geom_polygon(data = fortify(boundaryTrees), aes(x=long, y=lat,
                                                  group = group),colour='black', fill='transparent')+
  geom_point(data=XY, aes(x=x, y=y), color="black", shape=3, size=0.5)+
  scale_shape_discrete(name = "Tree Locations", labels=c("Tree Centroid","")) + 
  scale_alpha(range = c(0, 0.5)) +
  ggtitle("Location map showing individual trees and associated crown areas \n segmented from a LiDAR-derived canopy height model") +
  coord_equal() + theme_bw()

You can download the script from here and here’s the resulting figure.

* As per Carlos’ suggestion in the comments, here’s how one might cite rLiDAR: “Silva, C.A., Crookston, N.L., Hudak, A.T., and Vierling, L.A. 2015. rLiDAR: An R package for reading, processing and visualizing LiDAR (Light Detection and Ranging) data, version 0.1, accessed Oct. 15 2015.”



It’s been a productive year thus far…

I’ve been out of pocket for a while on this blog, but it’s for a good reason: I’ve been writing my butt off! Below are four manuscripts published in the last few months, all of which I was a part of… I’ll let the work speak for itself.

The Economics of Ecological Restoration and Hazardous Fuel Reduction Treatments in the Ponderosa Pine Forest Ecosystem
by M H Taylor, A J Sanchez Meador, Y S Kim, K Rollins, and H Will
Abstract: In this article, we develop a simulation model of the benefits and costs of managing the ponderosa pine forest ecosystem in the southwestern United States. Using the model, we evaluate and compare the economic benefits and costs of ecological restoration and hazardous fuel reduction treatments. Both treatment approaches increase the expected number of low-severity wildfires, which can promote postfire rehabilitation. Hazardous fuel reduction treatments are likely to reduce expected wildfire suppression costs, but not enough to offset the costs of implementing treatments. Conversely, ecological restoration treatments do not necessarily reduce expected wildfire suppression costs but fully restore the ecosystem in more than half of the simulation runs, which lowers the need for future fire suppression and reduces the chance of conversion to nonforest, alternative stable states. We find that the choice between hazardous fuel reduction and ecological treatments will depend on the management objective being pursued, as well as on site-specific factors such as the wildfire return interval and the economic value of biomass removed.
Five-year post-restoration conditions and simulated climate-change trajectories in a warm/dry mixed-conifer forest, southwestern Colorado, USA
by M T Stoddard, A J Sánchez Meador, P Z Fulé, and J E Korb
Abstract: Some warm/dry mixed-conifer forests are at increasing risk of uncharacteristically large, high-severity fires. As a result, managers have begun ecological restoration efforts using treatments such as mechanical thinning and prescribed fire. Empirical information on the long-term impacts of these treatments is limited, especially in light of potential climate change. We assessed changes in forest structure and composition five years following three alternative restoration treatments in a warm/dry mixed-conifer forest: (1) thin/burn, (2) prescribed burn, and (3) control. We used the Climate-Forest Vegetation Simulator (Climate-FVS) model to quantify potential forest trajectories under alternative climate scenarios. Five years following treatments, changes in forest structure were similar to initial post-treatment conditions, with thin/burn being the only treatment to shift and maintain forest structure and composition within historical reference conditions. By 2013, the thin/burn had reduced basal area (11.3 m2 ha-1) and tree density (117.2 trees ha-1) by 56% and 79%, respectively, compared to pre-treatment values. In the burn, basal area (20.5 m2 ha-1) and tree density (316.6 trees ha-1) were reduced by 20% and 35%, respectively, from 2002 to 2013. Mortality of large ponderosa pine trees (the most fire-resistant species) throughout the duration of the experiment averaged 6% in the burn compared to 16% in the thin/burn treatment. Changes five years following treatments were largely due to increases in sprouting species. Shrub and sapling densities were approximately two to three times higher (respectively) in the thin/burn compared to burn and control, and were dominated by sprouting oak and aspen. Under climate simulations, the thin/burn was more resilient in maintaining forest conditions compared to the burn and control, which approached meager forest conditions (3–4 m2 ha-1). These results indicate that restoration treatments that include both thinning and burning can maintain forest integrity over the next few decades.
Ecological restoration and fine-scale forest structure regulation in southwestern ponderosa pine forests
by M C. Tuten, A J Sánchez Meador, and P Z. Fulé
Abstract: Fine-scale forest patterns are an important component of forest ecosystem complexity and spatial pattern objectives are an increasingly common component of contemporary silviculture prescriptions in dry fire-adapted forests of North America. Despite their importance, questions remain regarding the assessment of silvicultural treatments designed to meet spatial objectives. We initiated a replicated silvicultural assessment of two forest management approaches commonly applied in dense ponderosa pine forests of the Southwest United States: historical evidence-based ecological restoration guidelines (ERG) and northern goshawk (Accipiter gentilis) foraging area management recommendations (GMR). We compared stand-level characteristics, global tree location point patterns and tree group-level attributes resulting from the marking of these approaches to current forest conditions and patterns of historical forest remnants in six, 2.02 ha stem mapped plots. We also assessed group-level Vegetative Structural Stage (VSS; a classification of fine-scale forest structural development used to regulate fine-scale spatial patterns in these forests). ERG and GMR-based treatments significantly reduced densities and basal area from the current condition, but did not significantly differ in density from historical forest remnant estimates. GMR-based treatments retained greater stand level basal area than ERG-based treatments, primarily in large, 28–48 cm tree diameter classes. GMR-based treatments approximated global tree location point patterns of forest remnants better than ERG-based treatments, primarily due to a 5–6 m minimum spacing of residual trees, but also likely due to specific aspects of ERG-based marking techniques. Despite this difference, both treatments resulted in group-level characteristics similar to those exhibited by historical forest remnants. Both treatments significantly altered group-level VSS area and reduced variation of tree diameters within classified VSS groups.
Post-fire ponderosa pine regeneration with and without planting in Arizona and New Mexico
by J Ouzts, T Kolb, D Huffman, A J Sánchez Meador
Abstract: Forest fires are increasing in size and severity globally, yet the roles of natural and artificial regeneration in promoting forest recovery are poorly understood. Post-fire regeneration of ponderosa pine (Pinus ponderosa, Lawson and C. Lawson) in the southwestern U.S. is slow, episodic, and difficult to predict. Planting of ponderosa pine after wildfire may accelerate reforestation, but little is known about survival of plantings and the amount of post-fire natural regeneration. We compared ponderosa pine regeneration between paired planted and unplanted plots at eight sites in Arizona and New Mexico that recently (2002–2005) burned severely. Two sites had no natural regeneration and no survival of planted seedlings. Seedling presence increased with number of years since burning across all plots, was positively associated with forb and litter cover on planted plots, and was positively associated with litter cover on unplanted plots. Survival of planted seedlings, measured five to eight years after planting, averaged 25% (SE = 8) and varied from 0% to 70% across sites resulting in seedling densities of 0–521 trees ha-1. Based on a projected 44% survival of seedlings to mature trees and target density of mature trees determined by historical range of variability and ecological restoration principles, four of eight sites have a seedling density in planted plots (125–240 ha-1) that will produce a density of mature trees (55–106 ha-1) close to desired levels, whereas seedlings are currently deficient at three planted sites, and in surplus at one site, which had abundant natural regeneration. Natural regeneration in unplanted plots during the first decade after burning produced seedling densities inconsistent with desired numbers of mature trees. Natural regeneration in unplanted plots produced less than 33 seedlings ha-1 at seven of eight sites, but produced 1433 seedlings ha-1 at one high-elevation site that supported a more mesic vegetation community before burning than the other sites. Our results show that current practices for planting ponderosa pine after severe fires in Arizona and New Mexico produce desired numbers of seedlings in approximately half of all projects, whereas natural regeneration rarely does within the first decade after burning.



A little shameless press…


Last month’s Journal of Forestry and the Forestry Source Newspaper featuring our article “Implications of diameter caps on multiple forest resource responses in the context of 4FRI: Results from the Forest Vegetation Simulator” and an interview with Dr. Sanchez Meador.


Last month, the Forestry Source (the Society of American Foresters’ newspaper) decided to feature one of our recently published articles, and the resulting news article included an interview with yours truly. I’ll be the first to admit that while I’m a huge extrovert, I hate giving interviews. The editor did a great job, though, and I’m happy with the way the article turned out. If you’d like to read the news article you can download it here, and if you’d like to read the manuscript, which appeared in the Journal of Forestry that same month, you can find it here.

Here’s the citation for the manuscript: Sánchez Meador, A.J., Waring, K.M., and E.L. Kalies. 2015. Implications of diameter caps on multiple forest resource responses in the context of 4FRI: Results from the Forest Vegetation Simulator. Journal of Forestry 113(2): 219–230.


My script to install the “top” R packages

Here’s a script that I use to query the CRAN package download logs and figure out which packages are the “top” packages being used/downloaded. It borrows heavily from this post, which is badass… Users be wary: it downloads all of the logs for the specified date range and creates a data.table to house this information (if you specify a large timeframe, this thing will be HUGE). To get around that, I randomly sample the data.table for half of the entries. At the end, there’s a section to install these packages; however, it’s currently commented out.

Geek+1

## Inspired by and heavily dependent on code by Felix Schönbrodt
## http://www.nicebread.de/finally-tracking-cran-packages-downloads/

## ======================================================================
## Step 1: Parameterize the script with dates and the number of packages
## that we're interested in...
## ======================================================================

# My advice would be to set this to a day or a week. If you do 6-months like I've
# done here you better have the memory to support it!
start <- as.Date('2014-09-01')
end <- as.Date('2015-02-28')

# How many "top" packages are we interested in?
top.x <- 20

## ======================================================================
## Step 2: Download all log files for each week
## ======================================================================

# Here's an easy way to get all the URLs in R
all_days <- seq(start, end, by = 'day')

# If we were to look, we'd see a strong weekly pattern in the downloads, with
# Saturday and Sunday having much fewer downloads than other days. This is not
# surprising, since most R users aren't working weekends. Let's just look at
# MWF to be safe...
weekdays(all_days)
days.To.Keep<- c("Monday", "Wednesday", "Friday")
all_days <- subset(all_days, weekdays(all_days) %in% days.To.Keep)
weekdays(all_days)

# Only download the files you don't have. Note that we create the directory first
# and build the URLs from the missing files themselves, so the indices always
# line up even when some logs are already on disk.
dir.create("CRANlogs", showWarnings = FALSE)
missing_files <- setdiff(as.character(all_days), tools::file_path_sans_ext(dir("CRANlogs"), TRUE))

year <- as.POSIXlt(as.Date(missing_files))$year + 1900
urls <- paste0('http://cran-logs.rstudio.com/', year, '/', missing_files, '.csv.gz')

for (i in seq_along(missing_files)) {
  print(paste0(i, "/", length(missing_files)))
  download.file(urls[i], paste0('CRANlogs/', missing_files[i], '.csv.gz'))
}

## ======================================================================
## Step 3: Load single data files into one big data.table and then clean
## up the files (delete them) once we're done
## ======================================================================

file_list <- list.files("CRANlogs", full.names=TRUE)

logs <- list()
for (file in file_list) {
  print(paste("Reading", file, "..."))
  logs[[file]] <- read.table(file, header = TRUE, sep = ",", quote = "\"",
                             dec = ".", fill = TRUE, comment.char = "", as.is=TRUE)
}

# rbind all of the files together
library(data.table)
dat <- rbindlist(logs)
# logs will likely be huge, so unless you have memory for days, we best delete it
# and free up that memory
#rm(logs); gc(verbose=T);

# Let's make this data.table smaller, to save memory, and randomly sample half of it
dat<-dat[sample(nrow(dat), ceiling(0.5*nrow(dat))), ]

# define the remaining variable types
dat[, date:=as.Date(date)]
dat[, package:=factor(package)]
dat[, week:=strftime(as.POSIXlt(date),format="%Y-%W")]

# set the key
setkey(dat, package, date, week)

# Delete the files and their directory (gots to keep our shit clean!!!!)
# Just comment this out if you don't want to delete the files (i.e., you
# might want them for later use)
#unlink("CRANlogs", recursive = TRUE) 

## ======================================================================
## Step 4: Analyze it!
## ======================================================================

library(ggplot2)
library(plyr)

# Overall downloads of packages
d1 <- dat[, length(week), by=package]
d1 <- d1[order(-V1), ]

# Build a vector of package names, to be used later for install.packages
package.names<-as.character(d1$package[1:top.x])

# plot 1: Compare downloads of "top" packages on a weekly basis
agg1 <- dat[J(package.names), length(unique(ip_id)), by=c("week", "package")]

# (y = V1*2 because we randomly kept only half of the log entries above)
ggplot(agg1, aes(x=week, y=V1*2, color=package, group=package)) + geom_line(size=1) +
  ylab("Downloads") + theme_bw() +
  theme(axis.text.x  = element_text(angle=90, vjust=0.5))

## ======================================================================
## Step 5: Install them all (plus their dependencies)!
## ======================================================================

# Uncomment this line if you want to install all of the "top" packages
# install.packages(package.names,dep=TRUE)
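# Or, a gentler (untested) variant: install only the "top" packages that
# aren't already present, rather than re-installing everything.
# new.pkgs <- setdiff(package.names, rownames(installed.packages()))
# if (length(new.pkgs) > 0) install.packages(new.pkgs, dependencies = TRUE)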

A couple of new wildfire-related publications

Over the past year I have had the opportunity to work on several wildfire-related projects, two of which are now available from their publishers. The first focuses on the effectiveness of fuel treatments following the Wallow Fire (2011), and the second focuses on long-term forest dynamics under alternative climate and management scenarios following the Rodeo-Chediski Fire (2002). These were both great projects, and I think collectively they provide quite a bit of insight into the abilities of managers and agencies to mitigate wildfire effects (during and after) and highlight the effects treatments have on resiliency under different climate scenarios.

Amy E. M. Waltz, Michael T. Stoddard, Elizabeth L. Kalies, Judith D. Springer, David W. Huffman, and A.J. Sánchez Meador. 2014. Effectiveness of fuel reduction treatments: assessing metrics of forest resiliency and wildfire severity after the Wallow Fire, AZ. Forest Ecology and Management 334(15): 43-52. http://dx.doi.org/10.1016/j.foreco.2014.08.026

Abstract: Landscape-scale wildfire has occurred in higher frequencies across the planet. Fuel reduction treatments to fire-adapted systems have been shown to reduce the impact to human values-at-risk. However, few studies have examined if these treatments contribute to ecosystem resilience, or the capacity of a system to absorb perturbation and return to a similar set of structures or processes. We defined short-term metrics of resiliency to test the hypothesis that fuel reduction treatments in mixed conifer forests increased a fire-adapted system’s resiliency to uncharacteristically severe wildfire. In addition, we tested the hypothesis that fuel reduction treatments reduced burn severity, thereby increasing protection for adjacent human communities. We examined a mixed conifer forested landscape in the southwestern U.S. that was burned by a landscape-scale “mega-fire” in 2011; fuel reduction treatments had been established around communities in the 10 years prior to the fire. Fire effects were highly variable in both treated and untreated forests. However, analysis of resiliency metrics showed that: (a) treated units retained a higher proportion of large trees and had post-fire tree densities within the natural range of variability; (b) the understory herbaceous community had significantly higher cover of native grasses in the treated units, but no significant differences in nonnative cover between treated and untreated units; and (c) high-severity patch sizes were significantly larger in untreated stands and covered a larger proportion of the landscape than historical reference conditions. Fire severity, as defined by overstory mortality and basal area loss, was significantly lower in treated units; on average, trees killed per hectare in untreated units was six times the number of trees killed in treated units. Fuel reduction treatments simultaneously reduced fire severity and enhanced short-term metrics of ecosystem resiliency to uncharacteristically severe fire.

Alicia Azpeleta Tarancón, Peter Z. Fulé, Kristen L. Shive, Carolyn H. Sieg, Andrew Sánchez Meador, and Barbara Strom. 2014. Simulating post-wildfire forest trajectories under alternative climate and management scenarios. Ecological Applications 24:1626–1637. http://dx.doi.org/10.1890/13-1787.1

Abstract: Post-fire predictions of forest recovery under future climate change and management actions are necessary for forest managers to make decisions about treatments. We applied the Climate-Forest Vegetation Simulator (Climate-FVS), a new version of a widely used forest management model, to compare alternative climate and management scenarios in a severely burned multispecies forest of Arizona, USA. The incorporation of seven combinations of General Circulation Models (GCM) and emissions scenarios altered long-term (100 years) predictions of future forest condition compared to a No Climate Change (NCC) scenario, which forecast a gradual increase to high levels of forest density and carbon stock. In contrast, emissions scenarios that included continued high greenhouse gas releases led to near-complete deforestation by 2111. GCM-emissions scenario combinations that were less severe reduced forest structure and carbon stock relative to NCC. Fuel reduction treatments that had been applied prior to the severe wildfire did have persistent effects, especially under NCC, but were overwhelmed by increasingly severe climate change. We tested six management strategies aimed at sustaining future forests: prescribed burning at 5, 10, or 20-year intervals, thinning 40% or 60% of stand basal area, and no treatment. Severe climate change led to deforestation under all management regimes, but important differences emerged under the moderate scenarios: treatments that included regular prescribed burning fostered low density, wildfire-resistant forests composed of the naturally dominant species, ponderosa pine. Non-fire treatments under moderate climate change were forecast to become dense and susceptible to severe wildfire, with a shift to dominance by sprouting species. Current U.S. forest management requires modeling of future scenarios but does not mandate consideration of climate change effects. However, this study showed substantial differences in model outputs depending on climate and management actions. Managers should incorporate climate change into the process of analyzing the environmental effects of alternative actions.
