From 417bb23a9dfe7ef170195d49189ab9aeed62428a Mon Sep 17 00:00:00 2001
From: Millian Poquet <millian.poquet@irit.fr>
Date: Fri, 10 May 2024 01:25:39 +0200
Subject: [PATCH] artifact guide: clearer steps, todo--

---
 artifact-overview.typ | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/artifact-overview.typ b/artifact-overview.typ
index a807b5f..7559ce0 100644
--- a/artifact-overview.typ
+++ b/artifact-overview.typ
@@ -252,14 +252,17 @@ The step-by-step instructions of this document can be used in several ways depen
 + You can *check* the final analyses (code + plots) done in Article @lightpredenergy by reading the provided pre-rendered notebooks.
 + You can *reproduce* the *final analyses* by first downloading the provided aggregated results of the experiments, and then by running the notebooks yourself.
   Notebooks are editable so you can freely modify the analyses done, or add your own.
+  - Refer to #todo[link to Danilo's notebook section] for the machine learning experiment.
+  - Refer to @sec-analyze-simu-campaign-outputs for the scheduling experiment.
 + You can *reproduce* our *experimental campaigns* by downloading the provided input files, and then by running the experiment yourself.
   This enables you to check that our experiments can be reproduced with the *exact same parameters and configuration*.
+  - Refer to #todo[link to Danilo's expe section?] for the machine learning experiment.
+  - Refer to @sec-run-simu-campaign for the scheduling experiment.
 + You can *reproduce* our *experimental campaigns* from scratch by downloading the original traces of the Marconi100,
   by generating the experimental campaign parameters yourself (enabling you to hack the provided command-line parameters or code),
-  and then by running the experiment yourself.\
-  *Please note that this option is disk/bandwidth/computation-intensive.*
-
-The following instructions detail how to reproduce our work from scratch if done in order (goal 4).
+  and then by running the experiment yourself.
+  In that case you can follow all of the steps below,
+  but *please note that this option is disk/bandwidth/computation-intensive.*
 
 == Trace analysis #todo[remove section?]
 == Job power prediction <sec-job-power-pred>
@@ -455,8 +458,16 @@ please refer to `expe-sched/simu-instances.json` for the mapping from unique sim
 Required input files.
 - `expe-sched/m100-platform.xml`, the SimGrid platform file (output of @sec-gen-sg-platform).
 - `expe-sched/simu-instances.json`, the set of simulation instances (output of @sec-gen-simu-instances).
-- The `/tmp/wlds` directory (#emph-overhead[1.4 Go]) that contains all the workload files (output of @sec-gen-workloads).\
-  #todo[zenodo workloads]
+- The `/tmp/wlds` directory (#emph-overhead[1.4 Go]) that contains all the workload files (output of @sec-gen-workloads).
+  You can *download the file* `workloads.tar.xz` from #todo[zenodo], and then *extract it* into `/tmp/` via a command such as the following:
+  `tar xf workloads.tar.xz --directory=/tmp/`
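+  If you fetch the archive from the command line, the sketch below may help (the Zenodo record id `XXXXXXX` is a placeholder until the actual record is linked above):
+  ```sh
+  # hypothetical record id: replace XXXXXXX with the actual Zenodo record
+  curl -L -o workloads.tar.xz https://zenodo.org/records/XXXXXXX/files/workloads.tar.xz
+  tar xf workloads.tar.xz --directory=/tmp/
+  ls /tmp/wlds | head   # quick sanity check: workload files should be listed
+  ```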
 
 #fullbox(footer: [#emph-overhead[Disk: 7.6 Go.] Time: 00:06:00.])[
   ```sh
@@ -474,7 +478,7 @@ Required input files.
   ))
 ]
 
-=== Analyze the simulation campaign outputs
+=== Analyze the simulation campaign outputs <sec-analyze-simu-campaign-outputs>
 The following command runs a notebook that analyzes the aggregated results of the simulation campaign, and outputs Figure 4 and Figure 5 of Article @lightpredenergy.
 
 Required input files.
-- 
GitLab