// Commit dde4ff0b authored by Millian Poquet
// artifact guide: fix TODOs
// parent 5c9f7f17
#import "@preview/big-todo:0.2.0": *
#import "@preview/showybox:2.0.1": showybox
#let artifact-code-git-repo = "https://framagit.org/batsim/artifact-europar24-lightweight-power-pred-sched"
#let artifact-code-git-repo-clone-url = "https://framagit.org/batsim/artifact-europar24-lightweight-power-pred-sched.git"
#let artifact-code-sh-permalink = "https://archive.softwareheritage.org/swh:1:dir:3f36d9f296decdda45884b4c02e0c909de477a12;origin=https://framagit.org/batsim/artifact-europar24-lightweight-power-pred-sched.git;visit=swh:1:snp:848722d78a9cf7824c99a4f76ca737cb3d7e91c8;anchor=swh:1:rev:5c9f7f17c96c2c44d108e238b9dc5bde0bd5a62e"
#let artifact-code-git-commit = "5c9f7f17c96c2c44d108e238b9dc5bde0bd5a62e"
#let zenodo-doi = "10.5281/zenodo.11173632"
#let zenodo-url = "https://doi.org/" + zenodo-doi
#set page(
  paper: "a4",
  margin: 2cm,
)
#show link: x => underline(offset: 0.5mm, stroke: .25mm, text(weight: "semibold", fill: blue, x))
#let url(x) = link(x)[#raw(x)]
#let todo = todo.with(inline: true)
#let tododanilo(x) = todo("DANILO: " + x)
#[
#line(length:100%, stroke: .5mm)
*Conference.* Euro-Par 2024\
*Article.* Light-weight prediction for improving energy consumption in HPC platforms\
*Quick links*.
#set list(marker: none, body-indent: 3.5mm)
- Preprint PDF on HAL. #url("https://hal.science/hal-04566184")
- Artifact data on Zenodo. #url(zenodo-url)
- Artifact Nix binary cache. #url("https://lightweight-pred-sched-europar24.cachix.org")
- Artifact code Git repository. #link(artifact-code-git-repo)[Framagit]
- Artifact code permalink. #link(artifact-code-sh-permalink)[Software Heritage]
#line(length:100%, stroke: .5mm)
]
This document shows how to reproduce the experimental sections (6.2 to 6.5) of article @lightpredenergy.
We hope that this document is enough to reproduce the whole experiments from scratch.
However, as reproducing the exact analyses and experiments conducted by the authors requires downloading and storing lots of input trace data (#box([$tilde.eq$ 300 Go])) and doing some heavy computations,
various intermediate and final results have been cached and made available on #link(zenodo-url)[Zenodo] to enable the reproduction of only subparts of the experimental pipeline. In particular, the final analyses of the article are done in standalone notebooks whose input data is available and small.
Unless otherwise specified, all commands shown in this document are expressed in #link("https://en.wikipedia.org/wiki/Bourne_shell")[`sh`] and are thus compatible with `bash` and `zsh`.
The disk/bandwidth/computation overhead of commands is specified in the footer part of each command box, and significant overheads are #emph-overhead[emphasized].
Unless otherwise specified, execution times have been obtained on a powerful computation node that uses 2x Intel Xeon Gold 6130.
An #link("https://en.wikipedia.org/wiki/MD5")[MD5 hash] is given for the output files that we think are important,
and all these files can be downloaded on #link(zenodo-url)[Zenodo].
The MD5 hashes have been computed by #link("https://www.gnu.org/software/coreutils/")[GNU coreutils]'s `md5sum` command.
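As a quick sketch, a downloaded file can be checked against its published hash as follows (the file content and hash below are illustrative stand-ins, not actual artifact values).
#fullbox(footer: [Time: 00:00:01.])[
```sh
# Illustrative MD5 check: replace the file and the expected hash with a
# downloaded artifact file and its hash as listed in this document.
expected='5d41402abc4b2a76b9719d911017c592'   # MD5 of the literal string 'hello'
printf 'hello' > /tmp/artifact-file           # stand-in for a downloaded file
actual="$(md5sum /tmp/artifact-file | cut -d ' ' -f 1)"
[ "$actual" = "$expected" ] && echo 'MD5 OK' || echo 'MD5 mismatch'
```
]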
#fullbox(footer: [Time: 00:00:01.])[
```sh
echo 'Commands should look like this'
echo 'Example command' > /tmp/example-output
sleep 1
```
]
= Getting Started Guide
All the software environments required to reproduce the analyses and experiments of article @lightpredenergy are open source and have been packaged with #link("https://nixos.org/", [Nix]).
Nix can build the *full* software stack needed for this experiment as long as source code remains available. As we also put most of the source code needed by this artifact on #link("https://www.softwareheritage.org/")[Software Heritage], we hope that this artifact will remain usable in the long term. For the reviewers' convenience, we have set up a binary cache with precompiled versions of the software used in the experiments.
No special hardware is required to reproduce our work. Our Nix environments are likely to work on future Nix versions, but for the sake of traceability we stress that we have used Nix 2.18.0, installed either by #link("https://archive.softwareheritage.org/swh:1:rev:b5b47f1ea628ecaad5f2d95580ed393832b36dc8;origin=https://github.com/DavHau/nix-portable;visit=swh:1:snp:318694dfdf0449f0a95b20aab7e8370cff809a66")[nix-portable 0.10.0] or directly available via NixOS using channel `23-11`.
Our software environments likely work on all platforms supported by Nix (Linux on `i686`/`x86_64`/`aarch64` and MacOS on `x86_64`/`aarch64` as of 2024-05-07) but we have only tested them on Linux on `x86_64`. More precisely, we have used the #link("https://www.grid5000.fr/w/Grenoble:Hardware#dahu")[Dahu Grid'5000 cluster] (Dell PowerEdge C6420, 2x Intel Xeon Gold 6130, 192 GiB of RAM) on the default operating system available on Grid'5000 as of 2024-05-07 (Debian `5.10.209-2` using Linux kernel `5.10.0-28-amd64`).
== Install Nix
- Launching `nix build 'github:nixos/nixpkgs?ref=23.11#hello'` should create a `result` symbolic link in your current directory. Then, launching `./result/bin/hello` should print `Hello, world!`.
- Launching `nix shell 'github:nixos/nixpkgs?ref=23.11#hello' --command hello` should print `Hello, world!`.
== Using our Nix binary cache (optional)
Using our binary cache is recommended, as it lets you download precompiled versions of our software environments instead of building them on your own machine.
Our cache has the following properties.
- URL. #url("https://lightweight-pred-sched-europar24.cachix.org")
- Public key. #text(size: 9.5pt, raw("lightweight-pred-sched-europar24.cachix.org-1:dHsm8geVskEOsZIjzXtVCmPvh0L2zwTlLm8V4QoJdgI="))
Once again, we recommend following #link("https://nixos.wiki/wiki/Binary_Cache#Using_a_binary_cache")[up-to-date documentation on using a Nix binary cache], but instructions are given below on how to use our cache as of 2024-05-07.
If *you are using NixOS*, you must edit your Nix#underline[OS] configuration file to add our cache URL in the `nix.settings.substituters` array,
and our cache public key in the `nix.settings.trusted-public-keys` array.
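For illustration, the relevant NixOS options can be sketched as the following fragment to merge into your own configuration (not a complete configuration file; the URL and key are the cache properties listed above).
```nix
{
  nix.settings = {
    substituters = [ "https://lightweight-pred-sched-europar24.cachix.org" ];
    trusted-public-keys = [
      "lightweight-pred-sched-europar24.cachix.org-1:dHsm8geVskEOsZIjzXtVCmPvh0L2zwTlLm8V4QoJdgI="
    ];
  };
}
```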
#{
set text(size: 9pt)
show raw: it => [
#let nlines = it.lines.len()
#table(columns: (auto, auto), align: (right, left), inset: 0.0em, gutter: 0.5em, stroke: none,
..it.lines.enumerate().map(((i, line)) => (math.mono(text(gray)[#(i + 1)]), line)).flatten()
)
]
}
== Version traceability and quick Nix flake explanation
All the software versions used in this artifact are *fully and purely defined* thanks to Nix flakes.
More concretely, #link(artifact-code-git-repo)[our artifact Git repository] at commit #raw(artifact-code-git-commit)
defines how to build and use the software environments used to reproduce all our work.
These environments are named _shells_ in Nix terminology.
Nix builds software in isolated (filesystem, network...) sandboxes to remove most sources of non-determinism, and forces inputs (source code, dependencies) to have well-defined versions (a well-defined content hash or version control commit).
Our artifact Git repository *directly* defines how the scripts used to reproduce Article @lightpredenergy should be built,
as the source code of these scripts is inside our artifact Git repository.
Software that we manage but whose source code is stored in another repository (_e.g._, the scheduler implementation used in our scheduling experiment, the Batsim simulator...) defines how it should be built in its own Git repository.
Software that we do not manage but that we use is either imported from the repository of the software itself if it uses flakes (_e.g._, Typst), imported from #link("https://github.com/NixOS/nixpkgs")[nixpkgs] if possible (_e.g._, gzip), or otherwise defined in our artifact Git repository (_e.g._, Python's fastparquet library).
Nix flakes make it possible to link together several Nix software descriptions that are distributed in different repositories.
This is done by (recursively) tracing the _inputs_ (flake dependencies) needed by the main flake of our artifact Git repository.
Consequently, the flake of our artifact Git repository *indirectly* defines all the software needed and their versions.
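As a rough illustration (the input names and shell content below are hypothetical, not our actual `flake.nix`), a flake pins its inputs and defines outputs, such as shells, from them:
```nix
{
  # Hypothetical flake sketch: inputs are pinned (in flake.lock) and
  # traced recursively; outputs define buildable packages and shells.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs?ref=23.11";
  outputs = { self, nixpkgs }: {
    devShells.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.mkShell {
        packages = [ nixpkgs.legacyPackages.x86_64-linux.hello ];
      };
  };
}
```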
For the sake of traceability, here are the software versions that we think are the most important.
- #link(artifact-code-git-repo)[Our artifact Git repository] commit #raw(artifact-code-git-commit)
- Nix 2.18.0
- Nixpkgs commit `057f9aecfb71c4437d2b27d3323df7f93c010b7e`
- NUR-kapack commit `4d8ca88fd8a0a2287ee5c023877f14d53d4854c1`
- Typst commit `21c78abd6eecd0f6b3208405c7513be3bbd8991c` (after 0.11.0)
= Step-by-Step Instructions
All the scripts strongly related to the experiments of Article @lightpredenergy are available on
#link(artifact-code-git-repo)[the Framagit GitLab instance], and on
#link(artifact-code-sh-permalink)[Software Heritage] for long-term longevity.
The repository can be cloned with the following commands.
The repository is explicitly checked out at the commit against which this artifact overview has been tested.
Please note that updating the repository may be useful -- _e.g._,
if errors have been found and fixed,
or if other parts of the experimental pipeline have been added.
#fullbox[
#set text(size: 9pt)
#let code = "git clone ARTIFACT-CLONE-URL artifact-repo
cd artifact-repo
git checkout ARTIFACT-COMMIT"
#raw(
lang: "sh",
code.replace("ARTIFACT-CLONE-URL", artifact-code-git-repo-clone-url)
.replace("ARTIFACT-COMMIT", artifact-code-git-commit)
)
]
*All commands* below should be executed from the *root of the cloned Git repository*.
The step-by-step instructions of this document can be used in several ways *depending on your goal*.
+ You can *check* the final analyses (code + plots) done in Article @lightpredenergy by reading the provided pre-rendered notebooks available on #link(zenodo-url)[Zenodo].
+ You can *reproduce* the *final analyses* by first downloading the provided aggregated results of the experiments from #link(zenodo-url)[Zenodo], and then by running the notebooks yourself.
  This enables you to *edit* our notebooks before running them, so that you can modify the analyses done or add your own.
  // - Refer to #todo[link to Danilo's notebook section] for the machine learning experiment.
  - Refer to @sec-analyze-simu-campaign-results for instructions to analyze the results of the scheduling experiment.
+ You can *reproduce* our *experimental campaigns* by downloading the provided experiment input files from #link(zenodo-url)[Zenodo],
  and then by running the experiment yourself.
  This can enable you to make sure that our experiment can be reproduced with the *exact same parameters and configuration*.
  //- Refer to #todo[link to Danilo's expe section?] for the machine learning experiment.
  - Refer to @sec-run-simu-campaign for instructions to reproduce the scheduling experiment.
+ You can *fully reproduce* our *experimental campaigns* by downloading original traces of the Marconi100,
  by generating the experimental campaign parameters yourself (enabling you to hack the provided command-line parameters or code),
  and then by running the experiment yourself.
  You can follow all the steps below in this case,
  but *please do note that this is disk/bandwidth/computation-intensive.*
== Job power prediction <sec-job-power-pred>
Instructions to reproduce the job power predictions have not been written yet.
The expected output data of this section has however been stored on #link(zenodo-url)[Zenodo].
//#tododanilo[how to reproduce this experiment?]
#fullbox[
#filehashes((
"fdcc47998a7e998abde325162833b23e", "power_pred_users_allmethods_max.tar.gz",
"954f782a75c9a5b21c53a95c0218e220", "power_pred_users_allmethods_mean.tar.gz",
))
]
== Analysis and modeling of the power behavior of Marconi100 nodes
=== Get power and job Marconi100 traces on your disk <sec-m100-power-job-traces>
This section downloads parts of the Marconi100 trace as archives from #link("https://gitlab.com/ecs-lab/exadata")[the ExaData Zenodo files], checks that the archives have the right content (via MD5 checksums), extracts the data needed by later stages of the pipeline (node power usage traces, jobs information traces), then finally removes unneeded extracted files and the downloaded archives.
#fullbox(footer:[#emph-overhead[Download+temporary disk: 254 Go.] Final disk: 928 Mo. #emph-overhead[Time: 00:40:00.]])[
```sh
# ...
```
]
The following command traverses all the Marconi100 power traces and counts how many times each node was at each power value.
Required input files.
- All power parquet files outputted by @sec-m100-power-job-traces.
#fullbox(footer:[Disk: 1 Mo. Time: 00:03:00.])[
```sh
# ...
```
#filehashes((
))
]
==== Generate simulation instances <sec-gen-simu-instances>
The following commands generate workload parameters (_i.e._, when each workload should start and end), taking start points at random during the 2022 M100 trace.
Simulation instances are then generated from the workload parameters.
Required input files.
- `expe-sched/m100-platform.xml` (output of @sec-gen-sg-platform).
#fullbox[
```sh
# ...
```
]
==== Merge job power predictions and jobs information into a single file
The job power predictions (outputs of @sec-job-power-pred, available on #link(zenodo-url)[Zenodo]) are two archives that we assume are on your disk in the `./user-power-predictions` directory.
These archives contain gzipped files for each user.
To make things more convenient for the generation of simulation inputs, all the job power prediction files are merged into a single file with the following commands.
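As a rough sketch of what such a merge can look like (hypothetical paths and file layout, not the artifact's actual commands):
```sh
# Hypothetical sketch: decompress every per-user gzipped prediction file
# and concatenate the results into a single file.
mkdir -p /tmp/merged
for f in ./user-power-predictions/*.gz ; do
  gunzip -c "$f"
done > /tmp/merged/job-power-predictions.csv
```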
#fullbox(footer: [Temporary disk: 519 Mo. Final disk: 25 Mo. Time: 00:00:30.])[
```sh
# ...
```
]

Similarly, Marconi100 job traces are also merged into a single file.
==== Generate workloads <sec-gen-workloads>
The following command generates all the workloads needed by the simulation.
*This step is very long, even when using all the cores of a powerful computation node!*
#fullbox(footer: [#emph-overhead[Disk: 1.4 Go. Time: 05:30:00.]])[
```sh
# ...
  -o /tmp/wlds
```
Output should be the `/tmp/wlds` directory, which should contain 1.4 Go of files.
- 30 Batsim workload files -- _e.g._, `/tmp/wlds/wload_delay_5536006.json`
- 30 unused `input_watts` files -- _e.g._, `/tmp/wlds/wload_delay_5536006_input_watts.csv`
- 1 directory per replayed job in `/tmp/wlds/jobs/` (total of 121544 jobs)
]

Required input files.
- `expe-sched/m100-platform.xml`, the SimGrid platform file (output of @sec-gen-sg-platform).
- `expe-sched/simu-instances.json`, the set of simulation instances (output of @sec-gen-simu-instances).
- The `/tmp/wlds` directory (#emph-overhead[1.4 Go]) that contains all the workload files (output of @sec-gen-workloads).
Please note that all input files can be downloaded from #link(zenodo-url)[Zenodo] if you have not generated them yourself.
In particular, to populate the `/tmp/wlds` directory you can *download* the `workloads.tar.xz` file and then *extract it* into `/tmp/` via a command such as `tar xf workloads.tar.xz --directory=/tmp/`.
#fullbox(footer: [#emph-overhead[Disk: 7.6 Go.] Time: 00:06:00.])[
```sh
# ...
```
#filehashes((
))
]
=== Analyze the simulation campaign results <sec-analyze-simu-campaign-results>
The following command runs a notebook that analyzes the aggregated results of the simulation campaign, and outputs Figure 4 and Figure 5 of Article @lightpredenergy.
Required input files.
#fullbox[
```sh
# ...
```
]
#bibliography("artifact-bib.yml")