DaRUS is based on the open-source software Dataverse and offers university groups (institutes, working groups, SFBs, projects) the possibility to maintain their own data universes with their own search criteria and description options. Datasets are described in such a way that they are easy to find and share. An API makes it possible to automate both the upload of and the access to data. Datasets do not have to be published, but they can easily be cited and made available to the public with a DOI. In this way, not only can the requirements of funding organizations or journals be met, but important research results can also be made visible and usable within a research group in the long term. In addition to the production system, DemoDaRUS offers a test environment in which the functionalities of DaRUS can be tried out.
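The API mentioned above is the Dataverse native API. The Python sketch below shows how an automated file upload might be prepared; the dataset DOI and API token are hypothetical placeholders, and the endpoint details should be verified against the Dataverse API guide.

```python
# Minimal sketch of automated upload via the Dataverse native API.
# The DOI and API token used here are hypothetical placeholders.
import urllib.parse

BASE_URL = "https://darus.uni-stuttgart.de"

def add_file_endpoint(base_url: str, doi: str) -> str:
    """Build the native-API endpoint for adding a file to a dataset."""
    query = urllib.parse.urlencode({"persistentId": doi})
    return f"{base_url}/api/datasets/:persistentId/add?{query}"

# Example with a placeholder DOI:
url = add_file_endpoint(BASE_URL, "doi:10.18419/darus-1234")
# The actual upload is an authenticated multipart POST, e.g. with the
# 'requests' library (not executed here):
#   requests.post(url, headers={"X-Dataverse-key": API_TOKEN},
#                 files={"file": open("data.csv", "rb")})
```

The same endpoint pattern works against the DemoDaRUS test system, which is the safer place to try automation first.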
DaRUS is currently in productive pilot operation. If you would like to use DaRUS for your working group or project, please contact the FoKUS team.
Maintenance window: Thursdays, 8 p.m. – 10 p.m.
During this time window there may be short interruptions in service.
Frequently Asked Questions
The web interface of DaRUS can be found at https://darus.uni-stuttgart.de. As a non-registered user, you will see all published datasets and data areas (Dataverses).
The DemoDaRUS test system can be found at https://demodarus.izus.uni-stuttgart.de/. The test system can only be accessed within the University of Stuttgart.
Please contact us (see contact box below). If a dataverse already exists for your institute, we can put you in touch with its local administrator. Otherwise, we will create a data area (Dataverse) for your institute after a consultation meeting, which you can then manage yourself. For this, there has to be a local admin with the role "DaRUS-Administrator" in the Uni-Admin-Portal. Topics of the consultation include: functions and configuration options of DaRUS, rights and duties as a DaRUS admin, application goals, data types, formats and volumes, description categories, and automation.
Login is done on the DaRUS homepage via Shibboleth. Click on "Login" in the top right corner and select "Universität Stuttgart" as your institution. You will then be redirected to a login page where you can log in with your AC account. In addition to the published datasets and dataverses, you will now also see all data and dataverses to which you have rights.
DaRUS enables you to make your data Findable, Accessible, Interoperable and Reusable (FAIR). All data uploaded to DaRUS gets a DOI as a persistent identifier and a license (CC-BY by default), and can be described with an extensive set of metadata, organized in metadata blocks. The metadata blocks on DaRUS allow for information on citation and general description, context (project, funding, relations to other datasets and information), the research process with the methods and tools used, the object of study with the variables and parameters captured, as well as technical information on the use and documentation of software and code. A minimal set of metadata (title, description, author, contact, subject) is technically enforced by the system. Further metadata can be configured by local administrators according to discipline-specific requirements. The metadata blocks are based on metadata standards like DDI, DataCite, EngMeta and CodeMeta. All metadata is indexed and can be used to find and filter datasets via search facets, full-text search and advanced search services.
All data published on DaRUS is findable via B2FIND, OpenAIRE and the Google Dataset Search. All metadata is exposed in standard formats like DataCite, Dublin Core and schema.org via an OAI-PMH interface.
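The OAI-PMH interface can be queried with the standard OAI-PMH verbs. The sketch below builds such a request URL; the /oai path and the oai_dc prefix follow the usual Dataverse configuration and should be verified against the live endpoint.

```python
# Sketch of an OAI-PMH harvesting request against DaRUS.
# The /oai path and oai_dc metadata prefix are the common Dataverse
# defaults; verify them against the live endpoint before relying on them.
import urllib.parse

def oai_request(base_url: str, verb: str, **params: str) -> str:
    """Build an OAI-PMH request URL, e.g. to list Dublin Core records."""
    query = urllib.parse.urlencode({"verb": verb, **params})
    return f"{base_url}/oai?{query}"

url = oai_request("https://darus.uni-stuttgart.de",
                  "ListRecords", metadataPrefix="oai_dc")
```

A harvester such as B2FIND issues exactly these kinds of requests to collect the exposed metadata.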
The quality of published datasets is assured via a publication workflow. Before publication, each dataset undergoes a content-related and a formal quality check, in which it is checked for completeness, comprehensibility and reproducibility. Authors are encouraged to use open data formats.
The data is stored in an object storage system (NetApp StorageGRID) and simultaneously mirrored to two geographically separate locations. The hard disks are configured as so-called dynamic disk pools (DDP). A key feature is that all data is striped across all disks involved. This means that the simultaneous failure of up to 3 hard disks can be handled without data loss.
All uploads and downloads are automatically encrypted with TLS 1.2 or better. Stored data is encrypted by the system with AES-256 by default and automatically decrypted when accessed by an authorized application.
If you want to upload large amounts of data to DaRUS on a regular basis, please contact your local administrator or, as an administrator, the DaRUS team. We can then activate direct upload to the data backend for your dataverse.
More information about the different ways to upload files can be found under BigData.
In your dataverse, you can decide for yourself which users have which rights to the datasets. You can create user groups and assign roles to individual users or user groups. In order to assign rights to a user or to add a user to a group, that user must first have logged in to DaRUS once via Shibboleth.
In addition, you can configure which search and description categories (metadata) are available for your research data and which of them are optional or mandatory. You can also create templates that prefill frequently used metadata.
Each dataverse has its own rights management. Rights are not inherited from a higher-level dataverse to lower-level dataverse. You need admin rights for a dataverse to be able to grant rights to other users or user groups. Rights can be assigned at the level of dataverses or records.
First, you need a dataverse for your project. If your institute already has a dataverse, the local admin (with the role DaRUS-Administrator in the Uni-Admin-Portal) can create a dataverse for you and give you admin rights for this dataverse.
Afterwards, all project partners have to register once with DaRUS via Shibboleth so that they appear as users in the system. Then you can give the partners the desired rights to the project dataverse.
In order for external partners to be able to authenticate themselves with DaRUS, their institution must be a member of the eduGAIN network and must transmit four attributes (eppn, givenName, sn, mail) to DaRUS to allow the creation of a new user account. If there are any problems, please contact the FoKUS team.
Each data set is assigned a DOI (Digital Object Identifier) as ID during creation. You must publish the data record so that it can be permanently accessed by the public under this DOI. The data record goes through a publication workflow in which it is first checked from a scientific point of view and then from a formal point of view. After release, the DOI is registered and the data record is released to the public.
Allow at least one week for the publishing process. After the internal content check, it usually takes about three working days until you get feedback from the DaRUS team. A revision may be necessary before the dataset is actually released and published.
DaRUS offers different degrees of publicity:
- Published with DOI without restriction: Both the description (metadata) and the data itself are fully accessible to the public. Access to the data set is counted, but no information about the users is collected. (Application example: OpenData)
- Published with DOI with guestbook: The metadata is fully accessible, but users must complete a questionnaire before downloading the actual files. The data is made accessible independently of the answers to the questionnaire (application example: need for information about subsequent users of data).
- Published with DOI with file protection: The metadata is fully accessible, the files can only be requested and must be explicitly released by an administrator. (Application example: Restriction to scientific use of data relevant to data protection)
- Unpublished with private URL: A so-called private URL can be created for an unpublished record. Anyone in possession of this URL can access the record. (Application example: Access to data for reviewers)
- Unpublished: Unpublished datasets can only be accessed by users who have been explicitly granted rights to the dataset or to the dataverse containing it. (Application example: sharing data within a project)
Contact your local administrator and refer them to these FAQs.
When rights to a dataverse are assigned, they are synchronized with the search index with a time delay. To trigger the synchronization manually, change the metadata configuration of the dataverse in question.
Frequently asked questions about data sets
You can specify a file path for each file when uploading. In addition to the table view, DaRUS then offers an alternative tree view in which the files are displayed organised by their file paths.
Alternatively, you can upload several files in a zip file. This file is automatically unpacked by DaRUS, whereby the folder structure within the zip file is retained.
More information on this can be found in the Dataverse user guide.
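As a sketch of the zip approach, the following Python snippet packs files with relative paths so that the folder structure survives inside the archive and is retained by DaRUS on unpacking; the file names and contents are purely illustrative.

```python
# Sketch: packing files into a zip so the folder structure is preserved.
# File names and contents below are illustrative examples.
import io
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    # The archive paths become the file paths shown in the tree view.
    zf.writestr("raw/run01/measurements.csv", "t,value\n0,1.0\n")
    zf.writestr("scripts/analyze.py", "print('analysis')\n")

with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()  # relative paths are kept inside the archive
```

The same effect can be achieved on the command line by zipping a directory with its relative paths intact.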
All metadata fields that have a multiline text box as an input element can be formatted with the help of HTML tags.
For example, you can mark up text paragraphs with a <p> tag, make links directly clickable with an <a> tag, or create bullet lists with <ul> and <li> elements.
<p>First text paragraph.</p>
<p>Second text paragraph with a <a href="http://link.to.target">link</a>.</p>
<ul><li>Bullet element 1</li></ul>
DaRUS (more precisely, the underlying Dataverse software) attempts to automatically read tabular data (e.g. CSV, TSV, Stata, SAV, R) and convert it to an internal archive format. After the data has been successfully ingested, a preview function on the respective file page displays the content.
For this to work with CSV files, they must be in a certain format:
- The columns must be separated by commas (no semicolons, no tabs, no spaces).
- The file starts with a header line with unique column headers.
- All rows have the same number of columns.
- Table cells containing commas are surrounded by quotation marks.
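The rules above can be illustrated with a minimal example. The Python sketch below (column names are made up) builds such a CSV and parses it with the standard library, showing that a quoted comma stays inside a single cell:

```python
# A minimal CSV in the format the tabular ingest expects:
# comma-separated, a unique header row, equal column counts, and
# quotation marks around cells that contain commas.
import csv
import io

csv_text = (
    "id,name,comment\n"
    "1,sample_a,\"stable, no drift\"\n"
    "2,sample_b,converged\n"
)

rows = list(csv.reader(io.StringIO(csv_text)))
print(rows[1])  # ['1', 'sample_a', 'stable, no drift']
```

Every parsed row has exactly three columns, so a file like this should pass the ingest without complaints.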
Some metadata fields (currently the Keyword and Topic Classification fields from the Citation Metadata Block) allow the desired term to be associated with a link from a controlled vocabulary. Ideally, the term is thus clearly semantically defined and can be linked to other research outputs.
The desired term is entered in the subfield "Term" (with each individual word capitalised for consistency), the name of the vocabulary in short form (e.g. WikiData, LCSH) in the field "Vocabulary", and the term URI (which is usually also a link to the documentation of this term) in the field "Vocabulary URL".
For example, if the keyword "molecular dynamics simulation" is to be linked to the corresponding entry in WikiData, "Molecular Dynamics Simulation" is entered in the Term field, "WikiData" in the Vocabulary field, and the link "https://www.wikidata.org/wiki/Q901663" in the Vocabulary URL field.
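If you fill in metadata via the API rather than the web form, the same keyword can be expressed as a compound field. The subfield names below follow Dataverse's standard citation block but are shown in simplified form and should be verified against the Dataverse API guide before use:

```python
# Sketch: the WikiData keyword example as a simplified Dataverse-style
# compound metadata field. Subfield names follow the standard citation
# block; verify them against the Dataverse API guide before use.
keyword = {
    "keywordValue": "Molecular Dynamics Simulation",
    "keywordVocabulary": "WikiData",
    "keywordVocabularyURI": "https://www.wikidata.org/wiki/Q901663",
}
```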
Vocabularies can be controlled vocabularies (such as the Loterre Vocabularies), ontologies (such as EDAM, WikiData), classifications (such as the Library of Congress Subject Headings, Mathematical Subject Classification or Physics Subject Headings) or also norm data from the GND.
Various platforms help to search across multiple vocabularies and ontologies, such as:
- Ontobee: search platform across OBO ontologies (mainly from the life sciences).
- Terminology services for engineering (NFDI4Ing) and chemistry (NFDI4Chem)
- Linked Open Vocabularies