Clumped Isotope Data Archive - ClumpDB in EarthChem

06 Apr 2021

Hi all,

As part of Petersen et al., 2019 (G^3), we developed a clumped isotope data template and database to encourage more long-lasting and standardized data archiving. The database is called ClumpDB and is run by the NSF-funded long-term repository EarthChem.

Recently, the ClumpDB website has been updated, and it now contains much clearer instructions for how to submit your data. In general, you can begin the submission process when your paper is in revision or provisionally accepted, and ClumpDB will hold the data release until the paper is published. Each submitted dataset receives a unique DOI, which can be included in the data availability statement at the end of a paper to link the two in perpetuity.

The template we developed is available here ("Download Current Template") and is designed to be flexible for different types of clumped isotope studies. The template is evolving as we apply it to more types of data, but it defines the minimum level of data that needs to be included for future reprocessing efforts (d45-d49 level data for samples and standards, plus ref gas composition, etc.). If you have suggestions for how this could be improved or made more useful, please reach out.

This post is both to advertise this database and encourage its use, and also to seek out a volunteer to be a backup point-of-contact person (I am the primary), to answer both user and EarthChem moderator questions. Please email me (sierravp@umich.edu) if you are interested in being this secondary point of contact, and we can add your info to the website.

Hope everyone is doing well and staying healthy,

-Sierra

Mathieu Daëron

Sierra, thanks for the update. I have a question. From reading the blurb on the ClumpDB site, it's not immediately obvious how to provide "session/window" information when using a sliding-window approach for standardization.

Have people taken the time to think about how best to report such information? E.g.: report, for each unknown replicate, all anchor replicates to be used for correction of that particular replicate (tedious); report the first and last anchor analyses to consider; report "batches" or "sub-sessions" and specify the use of N adjacent sub-sessions; specify the sliding-window "width" either in terms of time or in terms of sequence of analyses...

In my experience this is the most subjective issue we have to deal with when reprocessing data, and its effects are frequently not negligible.

Cheers,  – Mathieu

Sierra

Good comment Mathieu.

Previously, this has been done by providing a chronological numbering for all replicates (column "Analysis ID"), then reporting the beginning and end of the moving window using these replicate numbers (columns ARF_ID2 and ARF_ID3), combined with the column RefYN ("used in ARF?") to indicate WHICH replicates within that window were used for correction (i.e. ETH standards, gases, or a combination). The other ARF_ID column (ARF_ID1) would be an indicator of an overall session, which might be a multi-month period between power outages/maintenance. An alternative would be to use a start and end date in ARF_ID2/_ID3.
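To illustrate, here is a minimal sketch (in Python, with hypothetical variable names; the column meanings follow the template fields described above: "Analysis ID" as a chronological replicate number, ARF_ID2/ARF_ID3 as the window bounds, and RefYN as the "used in ARF?" flag) of how a reprocessing script could recover the anchor replicates for one unknown from these columns:

```python
# Toy replicate list: (analysis_id, sample_name, ref_yn).
# Column names/values are illustrative, not prescribed by the template.
replicates = [
    (1, "ETH-1", "Y"),
    (2, "UNK-A", "N"),
    (3, "ETH-2", "Y"),
    (4, "UNK-B", "N"),
    (5, "ETH-3", "Y"),
]

def anchors_in_window(window_start, window_end, replicates):
    """Return analysis IDs of all anchor replicates (RefYN == "Y")
    falling inside the moving window [ARF_ID2, ARF_ID3] reported
    for a given unknown replicate."""
    return [rid for rid, _, ref in replicates
            if window_start <= rid <= window_end and ref == "Y"]

# An unknown whose reported window spans analyses 1-3 would be
# corrected against anchors 1 and 3:
print(anchors_in_window(1, 3, replicates))  # -> [1, 3]
```

The same logic works if ARF_ID2/_ID3 hold dates instead of replicate numbers; only the comparison changes.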

Also, this template is designed to be flexible. So if your correction scheme needs something different, you can create new columns for extra information that might be helpful; you can always add more columns. The columns highlighted in green are the "required" ones that every study should report. White columns are things you likely want to report, but they may not apply to all studies (e.g. "Formation temperature" is useful for calibration studies but not for unknowns).

Overall, if you can't figure out what a column is supposed to contain, a good way to sort it out is to look at previously submitted datasets as examples.

-Sierra

===================
Sierra V. Petersen
Assistant Professor
University of Michigan
sierravp@umich.edu