Hands On: Working with DaRUS

Tasks for the hands-on workshop with the data repository of the University of Stuttgart

How to Start



Dataverse: Container of datasets

Dataset: One or more datafiles, described with metadata

Metadata: Descriptive information about one or more data files, such as author, project, methods used, and parameters. Serves as search criteria and as documentation of the dataset.

First Steps: Overview and Search

  1. Go to DaRUS (https://darus.uni-stuttgart.de) and enter DaRUS via the “Explore DaRUS” button
  2. Make yourself comfortable with the main view of DaRUS:
    1. Filter the results: Show only Dataverses or only Datasets
    2. Filter by author, subject or publication year
    3. How many datasets from the Earth and Environmental Sciences were published in 2020?
  3. Find specific datasets in DaRUS by using the search field or the advanced search:
    1. How many data files does the dataset “Trained ANN Parameters for Physics-inspired Artificial Neural Network for Dynamic System” contain?
    2. How many versions are available for the dataset “FLP Telemetry Data” from the Flying Laptop Project?
    3. Under which license is the “RePlay-DH Process Metadata Schema” published?
    4. Which version of the Octopus Reconstruction Software was used in the dataset with the DOI 10.18419/darus-682?



Log in to DaRUS via your home institution.

If you are not able to log in, choose "Universität Stuttgart" as the institution and use fn102478 together with the password provided in the workshop as login credentials.

Adding Datasets

  1. Go to the playground dataverse (Test-Dataverse for Trainings)
  2. Add a new dataset to the dataverse: Fill in the mandatory fields “Title”, “Author”, “Contact”, “Description”, “Subject” and save the dataset.
    • Who could be an appropriate dataset contact in 2, 5 or 10 years? Add this person (or these persons) to the contact block.
  3. Edit the metadata for your dataset and add as much information to the metadata as you can. Please note all metadata fields that are unclear to you.
    1. What is the most important information that someone has to know to find and understand your dataset? Do you find metadata categories for this information?
    2. What is the best way for you to add documentation to a dataset: a README file, metadata, or a link to a publication?
    3. Add the citation of a “fake” publication to the Related Publication field.
    4. Try to document the software or instruments you use in your research
    5. Try to document the methods you use and add at least one parameter
    6. Try to document the variables of your data (if appropriate)
  4. Add at least two different files to the dataset
    1. Upload the file(s) via the web interface
    2. Tag one of your files as data and another as documentation
    3. Add an additional description to your data
    4. Use the path field to create a directory hierarchy

Adding data via API (optional)

Try to add an additional file to your dataset via the API using curl:

  1. Create an API Token (http://guides.dataverse.org/en/5.12.1/user/account.html#how-to-create-your-api-token)
  2. Use the Add-Data-Endpoint (http://guides.dataverse.org/en/5.12.1/api/native-api.html#add-file-api) to add a file to your dataset.
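The two steps above can be sketched as a short shell snippet. The API token, dataset DOI, and file name below are placeholders, not values from this workshop; replace them with your own. The endpoint and the `X-Dataverse-key` header follow the Dataverse native API linked above, and the actual upload line is commented out so you can inspect the request before sending it:

```shell
#!/bin/sh
# Sketch: add a file to a DaRUS dataset via the Dataverse native API.
API_TOKEN="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # placeholder: your API token
SERVER="https://darus.uni-stuttgart.de"
PID="doi:10.18419/darus-0000"    # placeholder: the DOI of your test dataset
FILE="measurements.csv"          # placeholder: the file you want to upload

# Optional metadata sent along with the file: a description, a folder path
# (the "path field" from the web interface), and a file tag/category.
JSON_DATA='{"description":"Raw measurements","directoryLabel":"data/raw","categories":["Data"]}'

# The add-file endpoint, addressed via the dataset's persistent identifier.
URL="${SERVER}/api/datasets/:persistentId/add?persistentId=${PID}"
echo "POST ${URL}"

# Uncomment to actually perform the upload:
# curl -H "X-Dataverse-key:${API_TOKEN}" -X POST \
#   -F "file=@${FILE}" \
#   -F "jsonData=${JSON_DATA}" \
#   "${URL}"
```

After a successful call, the new file shows up as a draft change of your dataset, just like an upload through the web interface.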


Have you ever published data before? Have you shared data within your group or with external partners?

Think about embedding data management in your research process:

  • Is there a common guideline in your working group for the naming of files, for data structure and data documentation?
  • When would be a good time to document data? During or shortly after data generation? During analysis and visualization? Shortly before or after the publication of the results?
  • Is information about the data already available in some (semi-)structured form, e.g. (electronic) lab books, input or log files, or README files? How could this information be transferred into structured metadata?

