diff --git a/README.md b/README.md
index 09859c89a5e0f60e5573aafe1dcfdba6161e00d9..adfb5b901494cdcf5ae6ef1dddea9a47bdb6b2bf 100644
--- a/README.md
+++ b/README.md
@@ -10,9 +10,9 @@ It is a fork from demand-response-user
 - `scripts/run_expe.sh` shell script which launch experiments when workload is prepared.
 - `scripts/compute_stat.sh` shell script which compute stats when the experiments finished.
 - `default.nix` nix file given all the necessary dependencies.
-- `campaign3.py`: Python script preparing and launching in parrallel the experiments.
-Each experiment corresponds to one instance of `instance3.py`.
-- `analyse_campaign3.ipynb`: Jupyter notebook analysing the results.
+- `campaign.py`: Python script that prepares and launches the experiments in parallel.
+Each experiment corresponds to one instance of `instance.py`.
+- `analyse_campaign.ipynb`: Jupyter notebook analysing the results.
 
 ## Steps to reproduce the experiments
 
@@ -33,8 +33,8 @@ The main software used (and configured in the file `default.nix`) are:
 - python3, pandas, jupyter, matplotlib etc. for the data analysis
 
-All the dependencies of the project are given in the default.nix file to
-install nix you can launch the command
+All the dependencies of the project are given in the `default.nix` file.
+If you do not have Nix installed on your machine, you can install it with:
 ```bash
 scripts/install_nix.sh
 ```
 
@@ -42,36 +42,40 @@ It might be required to type other commands for the nix command to be
 available in the current shell.
 In this case it will be indicated by the prompt of the nix installation.
-
-To go into the experiments environment you can now type :
+Open a shell with all the dependencies available:
 ```bash
 nix-shell -A exp_env --pure
 ```
-This will compile and install all the dependencies needed for the experiments
-it can take some times (in our case, it took 6 minutes)
+
+This will compile and install all the dependencies needed for the experiments.
+It can take some time (in our case, it took 6 minutes).
+
 ### 2. Prepare input workload
-Inside the nix-shell use the following command inside project directory :
+Inside the nix-shell, run the following script to download (from the [Parallel Workloads Archive](https://www.cs.huji.ac.il/labs/parallel/workload/)) and filter the input workload used in the experiments:
+
 ```bash
 scripts/prepare_workload.sh
 ```
 
 ### 3. Launch the campaign
-By default the run_expe scripts only run one seed by experiment
-to do the 30 experiments you have to modify `--nb_replicat ` argument
-it should look like this :
+By default the `run_expe` script only runs one seed per experiment;
+to run all 30 experiments you have to set the `--nb-replicat` argument,
+so the command should look like this:
+
 ```bash
-python3 campaign3.py --nb-replicat 30 --expe-range -1 --window-mode 8 --nb-days 164 \
+python3 campaign.py --nb-replicat 30 --expe-range -1 --window-mode 8 --nb-days 164 \
 --json-behavior behavior_file/big_effort.json behavior_file/low_effort.json behavior_file/max_effort.json behavior_file/medium_effort.json \
 --compress-mode --production-file data_energy/energy_trace_sizing_solar.csv data_energy/energy_trace_sizing_solar.csv
 ```
-As every experiments can take up to 20 GB of RAM,
+
+As every experiment can take up to 20 GB of RAM,
 you might be limited by the memory of your system.
 When you are running the experiments you can limit the number of parallel
 run using `--threads n` command-line argument with `n` the maximum of
 experiments to run in parallel.
 By default, it uses every physical cores available.
 
-Once you have done all the previous step,
+Once you have done all the previous steps,
 launch the bash script in the nix-shell :
 ```bash
 scripts/run_expe.sh
 
@@ -83,6 +87,7 @@ Rigid finished, Now monolithic behavior
 It means that the program has finished computing the simulation and is
 now computing stat on the obtained value.
 You can stop the program now if you only want raw simulation results
+
 ### 4. Generate the metrics
 The tagged `experiments-version` version forget some metrics in computation.
 After the experiments, to compute the metrics we have to switch to tag
@@ -92,13 +97,13 @@ using the command :
 scripts/compute_stat.sh
 ```
 The stat will be computed and place in the shell directory with the names
-`campaign3_metrics.csv` and `campaign3_metrics_relative.csv`.
+`campaign_metrics.csv` and `campaign_metrics_relative.csv`.
 It will likely differ from the one provided in result_big_expe.
 In our reproduction we noticed a relative difference of 0.5% in energy
 related metrics and 2% in user behaviors related metrics
 
 ### 5. Generate the graph
-To generate the graph you can launch the notebook `analyse_campaign3.ipynb`.
+To generate the graph you can launch the notebook `analyse_campaign.ipynb`.
 You will have to change the variable `RAW_DATA_DIR` and `OUT_DIR` to match
 with your setup.
 
@@ -121,7 +126,7 @@ is the energy trace of the energy produced by DataZero2 sizing algorithm
 ## Advanced options
 Inside the nix shell exp_env, launch the commands :
 ```bash
-python3 campaign3.py --help
+python3 campaign.py --help
 ```
 You will get a details of every possible arguments.
 You will have to at least give the following arguments :
diff --git a/analyse_campaign3.ipynb b/analyse_campaign.ipynb
similarity index 100%
rename from analyse_campaign3.ipynb
rename to analyse_campaign.ipynb
diff --git a/campaign3.py b/campaign.py
similarity index 99%
rename from campaign3.py
rename to campaign.py
index ec4ae8ce08ad491e0c9164ee1709d564c4151b63..1785755b3594219bf461f3b419f3a44bec5cf1a6 100755
--- a/campaign3.py
+++ b/campaign.py
@@ -11,7 +11,7 @@ import concurrent.futures
 from scripts.util import WL_DIR, ROOT_DIR, ExpeDict, create_dir_rec_if_needed, generate_dict_from_json
 from scripts.generate_file import save_dict_to_json
-from instance3 import start_instance, prepare_input_data, generate_windows_dict, \
+from instance import start_instance, prepare_input_data, generate_windows_dict, \
     compress_expe_result, user_type_to_behavior
 from scripts.stat_tools import plot_exec_time, plot_queue_load
-from compute_metrics_campaign3 import compute_metrics_all_expe_parr
+from compute_metrics_campaign import compute_metrics_all_expe_parr
 
diff --git a/compute_metrics_campaign3.py b/compute_metrics_campaign.py
similarity index 100%
rename from compute_metrics_campaign3.py
rename to compute_metrics_campaign.py
diff --git a/instance3.py b/instance.py
similarity index 100%
rename from instance3.py
rename to instance.py