Welcome to the Benchmark Provider user journey

In this space we describe the functionality provided by DataBench to help integrate benchmarks from the most technical point of view: the creation of Ansible playbooks to run the benchmarks, as well as the reusable modules created for this purpose.

Adding a benchmark

User journey

  The DataBench Toolbox offers external registered users with the Benchmark Provider role the possibility to add new benchmarks to the Benchmarks catalogue. The user simply has to fill in a set of forms describing the benchmark and providing a set of features (tags) defined in the Toolbox DB data model and described in deliverable D3.1.

  To facilitate this process, a form for suggesting benchmarks has been provided for users to describe their benchmark in simple terms and send this information to the Toolbox administrators. Once this description is done, an editorial flow is followed. The Benchmark Provider should contact the Administrator user via the email address provided on the Toolbox website. The Administrator will review the provided content, request more information if needed, and eventually approve it. The benchmark is then indexed and searchable in the Toolbox web interface. Other users of the system will then be able to find the benchmark in the catalogue and follow the provider's description and instructions to use it.

If you are from a benchmarking organization, we encourage you to check whether your benchmarks are listed in the Toolbox and to provide any extra information (updates or new benchmarks) via the form listed above, also accessible from the “Benchmarks” option of the Toolbox menu. You can find our current list of benchmarking organizations here. Make sure you are on the list and that the information we have about you is accurate and up to date. If you would like to suggest any changes, please fill in the Suggest Knowledge Nugget option under the Knowledge Nuggets menu. You will be prompted with a form to make your suggestion. Make sure you provide pointers to your organization or the link to the existing nuggets you would like to update in the Toolbox. The Administrator will check the information and update it.

Integrating a benchmark

User journey

The steps to be followed by a Benchmark Provider to design and prepare a benchmark, with the necessary playbooks for automation from the Toolbox, are described in detail in section 3.1 of Deliverable D3.4.

Summarizing:
The benchmarks integrated in DataBench are designed to be executed on external infrastructure, whether in a cloud provider or on specific in-house hosts. In some cases, organizations are not willing to run the benchmarks in a public environment, but rather in a closed one isolated from the Internet for security, privacy or even commercial/IP protection reasons. To overcome this constraint, the Toolbox is designed to support automation using Ansible playbooks split into two parts: a part executed locally, and the actual run of the benchmark executed on the target host. For clarification purposes, the local host is the machine where Ansible is executed and the target host is the machine where the system to be benchmarked is located.
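As a minimal sketch of this two-part split (the inventory entries, host names and paths below are illustrative assumptions, not actual Toolbox conventions), the inventory declares the target host, and the playbook contains one play for the local part and one for the run on the target host:

    # inventory.ini -- hypothetical target host
    [benchmark_target]
    bench01 ansible_host=192.0.2.10 ansible_user=ubuntu

    # playbook.yml
    ---
    - name: Local part, executed on the machine where Ansible runs
      hosts: localhost
      connection: local
      tasks:
        - name: Prepare a local directory to collect the benchmark results
          ansible.builtin.file:
            path: ./results
            state: directory

    - name: Benchmark part, executed on the target host
      hosts: benchmark_target
      tasks:
        - name: Placeholder for the actual benchmark run
          ansible.builtin.debug:
            msg: "Benchmark tasks go here"

Keeping the orchestration on the local host means the target host only needs SSH access and Python, which fits the isolated, non-public environments described above.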

The common steps that every playbook should include in order to be integrated are (see the sketch after this list):

  • Load variables files.
  • Install required software (git, pip...) necessary to run the benchmark.
  • Get the benchmark code (from Github, scp...).
  • Ensure that the path to store the benchmark results exists.
  • Configure the benchmark.
  • Run the benchmark and get the results back.
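A minimal playbook sketch covering these steps is shown below. The variable names (repo_url, benchmark_cmd, results_path), the vars file, the template and the destination paths are hypothetical placeholders, not the actual Toolbox naming convention (which is defined in Deliverable D3.4):

    ---
    - name: Illustrative benchmark playbook following the common steps
      hosts: benchmark_target
      vars_files:
        - vars/benchmark_vars.yml   # load variables files (defines repo_url, benchmark_cmd, results_path)
      tasks:
        - name: Install required software (git, pip...) necessary to run the benchmark
          ansible.builtin.package:
            name:
              - git
              - python3-pip
            state: present
          become: true

        - name: Get the benchmark code (here from a hypothetical GitHub repository)
          ansible.builtin.git:
            repo: "{{ repo_url }}"
            dest: /opt/benchmark

        - name: Ensure that the path to store the benchmark results exists
          ansible.builtin.file:
            path: "{{ results_path }}"
            state: directory

        - name: Configure the benchmark from a hypothetical template
          ansible.builtin.template:
            src: templates/benchmark.conf.j2
            dest: /opt/benchmark/benchmark.conf

        - name: Run the benchmark
          ansible.builtin.command:
            cmd: "{{ benchmark_cmd }}"
            chdir: /opt/benchmark

        - name: Get the results back to the local host
          ansible.builtin.fetch:
            src: "{{ results_path }}/results.json"
            dest: ./results/
            flat: true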
The naming convention and some of the reusable components available to help in the integration are explained in Deliverable D3.4.

After completing the Ansible playbook, the Benchmark Provider should open a pull request against the project Git repository and contact the Administrator user via the email address provided in the Toolbox to start the process of integrating the playbook into the Toolbox.