paleofire package highlighted in the PAGES Magazine

The paleofire package has been highlighted in the April issue of the PAGES Magazine:

Vannière B., O. Blarquez, J. Marlon, A.-L. Daniau and M. Power. 2014. Multi-scale analyses of fire-climate-vegetation interactions on millennial scales. PAGES Magazine, Volume 22, Page 40.

The article presents the workshop of the Global Paleofire Working Group held in Frasne (France) on 2-6 October 2013 that was supported by PAGES. The pdf of the article can be downloaded here or directly from the PAGES website.

New paper: Disentangling the trajectories of alpha, beta and gamma plant diversity of North American boreal ecoregions since 15,500 years

I am pleased to announce the publication of a new study in Frontiers in Ecology and Evolution with Christopher Carcaillet, Thibaut Frejaville and Yves Bergeron. The paper is freely available online here.

Assessment of biodiversity in a changing world is a key issue and studies on the processes and factors influencing its history at relevant time scales are needed. In this study, we analysed temporal trends of plant diversity using fossil pollen records from the North American boreal forest-taiga biome (NABT). We selected 205 pollen records spanning the last 15,500 years. Diversity was decomposed into α and γ richness, and β diversity, using Shannon entropy indices. We investigated temporal and spatial patterns of β diversity by decomposing it into independent turnover (variation in taxonomic composition due to species replacements) and nestedness (variation due to species loss) components. The palynological diversity of the NABT biome experienced major rearrangements during the Lateglacial and early Holocene in response to major climatic shifts. The β nestedness likely reflected plant immigration processes and generally peaked before the β turnover value, which mirrors spatial and temporal community sorting related to environmental conditions and specific habitat constraints. Palynological diversity was generally maximal during the Lateglacial and the early Holocene and decreased progressively during the Holocene. These results are discussed in relation to macro-ecological processes, such as immigration, disturbances and environmental fluctuations, with climate as the main ecological driver at millennial scales.
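The Shannon-based α/β/γ decomposition can be illustrated with a short sketch in Python (purely illustrative: the site-by-taxon counts are invented, and the paper's actual analysis, including the turnover/nestedness partition of β, is more elaborate):

```python
import math

def shannon(counts):
    """Shannon entropy H of a vector of taxon counts."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# Invented pollen counts for 3 sites x 4 taxa
sites = [
    [10, 5, 0, 1],
    [2, 8, 4, 0],
    [0, 3, 9, 6],
]

alpha = sum(shannon(s) for s in sites) / len(sites)  # mean within-site diversity
pooled = [sum(taxon) for taxon in zip(*sites)]       # counts pooled over sites
gamma = shannon(pooled)                              # regional (pooled) diversity
beta = gamma - alpha                                 # additive beta diversity
```

By the concavity of Shannon entropy, γ is always at least the mean α, so the additive β is non-negative.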

Introducing the paleofire package

The new R package, paleofire, has been released on CRAN. The package is dedicated to the analysis and synthesis of charcoal series contained in the Global Charcoal Database (GCD) to reconstruct past biomass burning.

paleofire is an initiative of the Global Paleofire Working Group core team, whose aim is to encourage the use of sedimentary charcoal series to develop regional to global syntheses of paleofire activity, and to ease access to GCD data through a common research framework useful to the paleofire research community. Currently, paleofire features are organized into three parts related to (i) site selection and charcoal series extraction from GCD data; (ii) charcoal data transformation and homogenization; and (iii) charcoal series compositing and synthesis.

Installation:

paleofire is available from both the CRAN website and GitHub.

To install the official release (v1.0, 01/2014) from CRAN you just need to type this line at the R prompt:

 install.packages("paleofire")

To install the development version of paleofire from the GitHub repository, the devtools package is required. On Windows, the Rtools.exe program is required in order to install devtools; Rtools.exe can be downloaded for a specific R version from http://cran.r-project.org/bin/windows/Rtools/

Once devtools is installed, type the following lines at the R prompt to install paleofire:

library(devtools) 
install_github("paleofire","paleofire") 
library(paleofire)

To check that everything is working, you can plot a map of all charcoal records included in the Global Charcoal Database v03:

plot(pfSiteSel()) 

For details and examples about paleofire please refer to the included manual.

More examples of the capabilities of paleofire to come in the near future…

Calculate pixel values at increasing radius

The ndvi_classes.m function can be used to calculate the number and percentage of pixels meeting given criteria at increasing radius from a lake. The function was originally developed to count the number of pixels falling in specific NDVI classes around lakes, but may be useful for other proxies and situations. The function uses georeferenced TIFF images as input. For details please refer to Aleman et al. (2013) or to the included help. The image used in the example can be downloaded here.

The function:

function [classe,classe_per]=ndvi_classes(ImageName,critere1,critere2)

% INPUT:  ImageName is the name of the geotiff image for input
%         critere1 (and 2) criterion for classes evaluation
% 
% OUTPUT: classe: vector of NDVI classes at increasing radius from
%                 the lake center (number of pixel falling in class)
%         classe_per: Same as classe but in percentage of pixels
% EXAMPLE: Using the given Image 'classif_GBL_94.tif'
%{
        [classe classe_per]=ndvi_classes('classif_GBL_94.tif','>2','<4');
        %Plot the classe as a function of distance in km 
        %(1pixel=0.0301km, input 200 pixels when asked)
        plot((1:200)*0.0301,classe_per,'k-')
        ylabel('% pixels >2 and <4')
        xlabel('Distance from lake center (km)')
%}         
% Blarquez Olivier 2012
% blarquez@gmail.com

[Imge,dat] = geotiffread(ImageName);

imagesc(double(Imge))

select = input('Select point interactively? Y/N: ', 's');
if select=='Y'
    title('Please double click on lake center:')
    p = impoint(gca,[]);
    p = wait(p);
    p = round(p);
elseif select=='N'
    valstring = input('Enter latitude and longitude: ', 's');
    valparts = regexp(valstring, '[ ,]', 'split');
    coords = str2double(valparts);
    % Convert lat/lon to the nearest row/column indices
    [~,p(2)] = min(abs(linspace(dat.Latlim(1),dat.Latlim(2),dat.RasterSize(1))-...
        coords(1)));
    [~,p(1)] = min(abs(linspace(dat.Lonlim(1),dat.Lonlim(2),dat.RasterSize(2))-...
        coords(2)));
end

radius = input('Radius in pixels to be evaluated: ');
% Crop the image to a square of side 2*radius centered on the selected point
Imge(:,((p(1)+radius):end)) = [];
Imge(:,1:(p(1)-radius)) = [];
Imge(((p(2)+radius):end),:) = [];
Imge(1:(p(2)-radius),:) = [];

n = length(Imge);

classe = zeros(radius,1);
classe_per = zeros(radius,1);
warning('off','all')
for r = 1:radius
    imshow(Imge)
    %imagesc(double(Imge))

    % Build a circular mask of radius r around the image center
    hEllipse = imellipse(gca,[n/2-r n/2-r r*2 r*2]);
    maskImage = hEllipse.createMask();
    imshow(maskImage);

    % Keep only the pixels inside the circle
    maskedImge = Imge .* cast(maskImage,class(Imge));
    %imshow(maskedImge);
    maskedImge(maskedImge==0) = NaN;
    pixList = reshape(maskedImge,[],1);
    pixList(isnan(pixList),:) = [];

    % Count pixels satisfying both criteria within radius r
    classe(r,1) = length(pixList(eval(['pixList',critere1]) & ...
        eval(['pixList',critere2]),:));
    classe_per(r,1) = classe(r,1)./length(pixList(:,1));
end
close all
warning('on','all')
end

Paleofire reconstruction based on an ensemble-member strategy

Matlab codes and data associated with the manuscript: Blarquez O., M. P. Girardin, B. Leys, A. A. Ali, J. C. Aleman, Y. Bergeron and C. Carcaillet. 2013. Paleofire reconstruction based on an ensemble-member strategy applied to sedimentary charcoal. Geophysical Research Letters 40: 2667–2672, are available here.

Preprocessed data for Lac à la Pessière and Lago Perso used in the paper are also available:
PESSIERE.mat.zip
PERSO.mat.zip

Replicated rarefied richness

This example presents the methods developed for the paper Blarquez et al. (2013). We will use the macroremain record of Lac du Loup, a small subalpine lake with continuous macroremain counts for the last 11,750 years. More details on this site can be found in Blarquez et al. (2010).

First load the macroremain data:

dataMacro=read.csv("http://blarquez.com/public/code/dataMacro.csv")
# Macroremain counts
paramMacro=read.csv("http://blarquez.com/public/code/paramMacro.csv")
# Associated depths, ages and sample volumes
dataMacro[is.na(dataMacro)]=0
# Set missing values to zero

We now calculate the median resolution of the record and the interpolated ages at which the influx will be calculated:

resMed=median(diff(paramMacro[,3]))
Ages=seq(-50, 11750, resMed)

We reconstruct the influx matrix using the pretreatment function, an R implementation of the CharAnalysis CharPretreatment.m function originally developed by P. Higuera and available at https://sites.google.com/site/charanalysis
The function requires a matrix with the following columns:
CmTop CmBot AgeTop AgeBot Volume (params)
and a series from which to calculate accumulation rates (serie):

source("http://blarquez.com/public/code/pretreatment.R")
infMacro=matrix(ncol=length(dataMacro[1,]),nrow=length(Ages))
for (i in 1:length(dataMacro[1,])){
  infMacro[,i]=c(pretreatment(params=paramMacro,serie=dataMacro[,i],
                              yrInterp=resMed,first=-50,last=11750,
                              plotit=F,Int=F)$accI)
}
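For reference, the influx (accumulation-rate) computation that pretreatment performs for each sample amounts to converting the count to a concentration and multiplying by the sediment accumulation rate. A minimal sketch in Python (illustrative only; the function and argument names are mine, and pretreatment additionally interpolates the series to constant time steps):

```python
def influx(cm_top, cm_bot, age_top, age_bot, volume, count):
    """Influx (pieces cm^-2 yr^-1) = concentration (pieces cm^-3)
    x sediment accumulation rate (cm yr^-1)."""
    concentration = count / volume
    sed_rate = (cm_bot - cm_top) / (age_bot - age_top)
    return concentration * sed_rate

# Hypothetical sample: a 1 cm slice spanning 20 years,
# 3 cm^3 of sediment, 12 macroremains counted
acc = influx(100.0, 101.0, 5000.0, 5020.0, 3.0, 12)  # 0.2 pieces cm^-2 yr^-1
```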

Calculate the macroremain sum of each sample and the minimum influx sum:

S=rowSums(infMacro)
nMin=min(na.omit(S[1:(length(S)-1)]))

We exclude the last sample (whose influx cannot be calculated because the accumulation rate is unknown) and samples with NA values (due to vol==0):

del=which(S==0 | is.na(S))
infMacro=infMacro[-del,]
Ages=Ages[-del]
S=S[-del]

We calculate a proportion matrix which is used to replicate the rarefaction procedure:

S_=matrix(ncol=ncol(infMacro),nrow=nrow(infMacro))
for (i in 1:length(infMacro[1,])){
  S_[,i]=c(S)
}
propMat=infMacro/S_

Replicate the rarefaction with minimum influx n=1,2,…,500.
We will use the rarefy function from the vegan package:

library(vegan)
rare_=matrix(nrow=nrow(infMacro),ncol=500)
for (n in 1:500){
  infN=ceiling(S_*n/nMin*propMat)
  rare_[,n]=c(rarefy(infN,sample=n))
}

Finally, calculate the 90 percent confidence intervals and plot the rarefied richness using a 500-year locally weighted scatterplot smoother:

CI_=t(apply(rare_, 1, quantile, probs = c(0.05, 0.5, .95), na.rm = TRUE))
span=500/resMed/length(Ages)
plot(Ages,lowess(CI_[,2],f=span)$y,type="l",ylab="E[Tn]",ylim=c(min(CI_),
     max(CI_)),xlab="Age",main="Rarefied Richness",xlim=c(12000,-100))
lines(Ages,lowess(CI_[,1],f=span)$y,type="l",lty=2)
lines(Ages,lowess(CI_[,3],f=span)$y,type="l",lty=2)

References:
Blarquez O., Finsinger W. and C. Carcaillet. 2013. Assessing paleo-biodiversity using low proxy influx. PLoS ONE 8(4): e65852
Blarquez O., Carcaillet C., Mourier B., Bremond L. and O. Radakovitch. 2010. Trees in the subalpine belt since 11 700 cal. BP: origin, expansion and alteration of the modern forest. The Holocene 20: 139-146. DOI: 10.1177/0959683609348857

Download pdf vignette

Superposed Epoch Analysis (SEA)

The SEA.m function performs Superposed Epoch Analysis (SEA) with confidence intervals estimated using a block bootstrap procedure. The method is freely inspired by Adams et al. (2003); for an example of SEA in palaeoecology see Blarquez and Carcaillet (2010). For details please refer to the included help.
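Conceptually, SEA extracts the proxy series in a fixed window around each event and averages these windows, so that a consistent response shows up at lag zero. A minimal Python sketch of that compositing step (illustrative only; SEA.m's bootstrap confidence intervals are omitted here):

```python
def sea_composite(series, events, before, after):
    """Average the series over a window [-before, +after] around each event."""
    windows = [series[ev - before: ev + after + 1]
               for ev in events
               if ev - before >= 0 and ev + after < len(series)]
    n = len(windows)
    return [sum(w[i] for w in windows) / n for i in range(before + after + 1)]

# Synthetic series: flat except for a unit spike at each event date
series = [0.0] * 100
events = [20, 50, 80]
for ev in events:
    series[ev] = 1.0

comp = sea_composite(series, events, before=5, after=5)
# The composite peaks at index 5, i.e. at lag 0
```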

Example:

Create a random time series with noise, define some events, and plot them along with the time series:

t=1:500;
d=2.5*sin(2*pi*t/100)+1.5*sin(2*pi*t/41)+1*sin(2*pi*t/21);
T=(d+2*randn(size(d))).';
figure()
plot(t,T,'g')
ev=[50,93,131,175,214,257,297,337,381,428,470].';
hold on
plot(ev,repmat(4,length(ev),1),'v')


Then we perform a SEA using a time window of 20 years before and 20 years after each event. The 'nbboot' argument defines the number of circular block bootstrap replicates; setting 'b' to 0 selects the block length automatically following the Adams et al. (2003) procedure.
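The circular block bootstrap behind the confidence intervals resamples whole blocks of the series (wrapping around the ends) so that short-term autocorrelation is preserved in each replicate. A generic sketch in Python (illustrative; SEA.m's automatic block-length selection is not reproduced, and a fixed block length is assumed):

```python
import random

def circular_block_bootstrap(x, block_len, rng):
    """One bootstrap replicate of x built from circular blocks."""
    n = len(x)
    out = []
    while len(out) < n:
        start = rng.randrange(n)  # random block start
        # take block_len consecutive values, wrapping past the end
        out.extend(x[(start + i) % n] for i in range(block_len))
    return out[:n]

rng = random.Random(42)
x = list(range(10))
replicate = circular_block_bootstrap(x, block_len=3, rng=rng)
```

In SEA, many such replicates of the proxy series are composited against the real event dates, and the percentiles of the replicate composites give the confidence bands.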


[SEA_means,SEA_prctiles]=SEA(T,ev,20,20,'prctilesIN',[5,95],'nbboot',9999,'b',0)


The proxy response to the randomly generated events appears significant at the 95% confidence level around event occurrence (at lags close to zero).

Paleofire frequencies

The KDffreq.m function calculates palaeo-fire frequency using a Gaussian kernel and computes bootstrapped confidence intervals around the fire frequency. It requires the dates of fire events as input. Alongside this function, the cvKD.m function can be used to estimate the optimal kernel bandwidth by maximizing the log-likelihood. Here's an example:

Generate some fire dates for the last 8000 years:

x=randi(8000,40,1);
plot(x,ones(size(x)),'r+')
set(gca,'xdir','reverse')
xlabel('Time (cal yrs. BP)')


Use the cvKD.m function to calculate the optimal bandwidth for the kernel:

cvKD(x)
ans = 724.3434

Then we estimate the long term fire frequency using the Gaussian kernel density procedure:

 [ffreq]=KDffreq(x,'up',0,'lo',8000,'bandwidth',700,'nbboot',100,'alpha',0.05 );
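The kernel estimate itself amounts to summing a Gaussian density of width equal to the bandwidth over the fire dates, giving a smoothed frequency in fires per year. A minimal Python sketch (illustrative only; KDffreq.m's bootstrap confidence intervals and boundary handling are not reproduced):

```python
import math

def kernel_fire_freq(event_dates, t, bandwidth):
    """Gaussian kernel estimate of fire frequency (fires yr^-1) at time t."""
    c = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(c * math.exp(-0.5 * ((t - d) / bandwidth) ** 2)
               for d in event_dates)

# Hypothetical fire dates (cal yr BP) and a 700-yr bandwidth, as in the example
events = [500, 1500, 1600, 4000, 4100, 4200]
freq = [kernel_fire_freq(events, t, 700.0) for t in range(0, 8001, 100)]
```

Integrated over the studied interval, the estimate approximately recovers the number of events, minus whatever kernel mass falls outside the interval.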
 
