"- we remove from the workload the jobs with an **execution time greater than one day**\n",
"- we remove from the workload the jobs with a **number of requested cores greater than 16**\n",
"\n",
"To do so, we use a homemade SWF parser. "
"To do so, we use a the home-made SWF parser `swf_moulinette.py`:"
]
},
{
...
...
@@ -204,7 +204,7 @@
"## Platform\n",
"According to the system specifications given in the [corresponding page in Parallel Workload Archive](https://www.cs.huji.ac.il/labs/parallel/workload/l_metacentrum2/index.html): from June 1st 2014 to Nov 30th 2014 there is no change in the platform for the clusters considered in our study (<16 cores). There is a total of **6304 cores**.(1)\n",
"\n",
"We build a platform file adapted to the remaining workload. We see above that the second selection cut 73.7\\% of core-hours from the original workload. We choose to make an homogeneous with 16-core nodes. To have a coherent number of nodes, we count:\n",
"We build a platform file adapted to the remaining workload. We see above that the second selection cuts 73.7\\% of core-hours from the original workload. We choose to make an homogeneous cluster with 16-core nodes. To have a coherent number of nodes, we count:\n",
"(1) clusters decomissionned before or comissionned after the 6-month period have been removed: $8+480+160+1792+256+576+88+416+108+168+752+112+588+48+152+160+192+24+224 = 6304$"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "north-meeting",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
...
...
@@ -243,7 +235,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.9"
"version": "3.9.6"
}
},
"nbformat": 4,
...
...
%% Cell type:markdown id:forced-resolution tags:
# Downloading and preparing the workload and platform
## Workload
We use the reconverted log `METACENTRUM-2013-3.swf` available on [Parallel Workload Archive](https://www.cs.huji.ac.il/labs/parallel/workload/l_metacentrum2/index.html).
It is a 2-year-long trace from MetaCentrum, the national grid of the Czech Republic. As mentioned in the [original paper releasing the log](https://www.cs.huji.ac.il/~feit/parsched/jsspp15/p5-klusacek.pdf), the platform is **very heterogeneous** and underwent major changes during the logging period. For the purpose of our study, we perform the following selection.
First:
- we remove from the workload all the clusters whose nodes have **more than 16 cores**
- we truncate the workload to keep only 6 months (June to November 2014), during which no major change was made to the infrastructure (no cluster with nodes of 16 cores or fewer added or removed, no reconfiguration of the scheduling system)
Second:
- we remove from the workload the jobs with an **execution time greater than one day**
- we remove from the workload the jobs with a **number of requested cores greater than 16**
To do so, we use the home-made SWF parser `swf_moulinette.py`:
%% Cell type:code id:ff40dcdd tags:
``` python
# First selection
# Create a swf with only the selected clusters and the 6 selected months
from time import *

begin_trace = 1356994806  # according to original SWF header
# Unix timestamps bounding the 6 selected months (June to November 2014)
jun1_unix_time, nov30_unix_time = mktime(strptime('Sun Jun 1 00:00:00 2014')), mktime(strptime('Sun Nov 30 23:59:59 2014'))

# Filter passed to swf_moulinette.py via --keep_only (the full invocation is not shown here)
--keep_only="nb_res <= 16 and run_time <= 24*3600"
```
%% Output
Processing swf line 100000
Processing swf line 200000
Processing swf line 300000
Processing swf line 400000
Processing swf line 500000
Processing swf line 600000
Processing swf line 700000
Processing swf line 800000
Processing swf line 900000
Processing swf line 1000000
Processing swf line 1100000
Processing swf line 1200000
Processing swf line 1300000
Processing swf line 1400000
Processing swf line 1500000
Processing swf line 1600000
-------------------
End parsing
Total 1604201 jobs and 546 users have been created.
Total number of core-hours: 4785357
44828 valid jobs were not selected (keep_only) for 13437365 core-hour
Jobs not selected: 2.7% in number, 73.7% in core-hour
0 out of 1649030 lines in the file did not match the swf format
1 jobs were not valid
%% Cell type:markdown id:afde35e8 tags:
## Platform
According to the system specifications given on the [corresponding page in the Parallel Workload Archive](https://www.cs.huji.ac.il/labs/parallel/workload/l_metacentrum2/index.html), there is no change in the platform from June 1st 2014 to Nov 30th 2014 for the clusters considered in our study (nodes with at most 16 cores). This makes a total of **6304 cores**.(1)
We build a platform file adapted to the remaining workload. We see above that the second selection cuts 73.7\% of core-hours from the original workload. We choose to make a homogeneous cluster with 16-core nodes. To have a coherent number of nodes, we count as sketched below.
The corresponding SimGrid platform file can be found in `platform/average_metacentrum.xml`.
(1) clusters decommissioned before or commissioned after the 6-month period have been removed: $8+480+160+1792+256+576+88+416+108+168+752+112+588+48+152+160+192+24+224 = 6304$
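As a rough sketch of this counting (the exact scaling used to build `platform/average_metacentrum.xml` may differ), one can scale the 6304 cores by the fraction of core-hours kept by the selection and round to whole 16-core nodes:
``` python
# Sketch only: the core-hour figures come from the swf_moulinette.py output above,
# but the scaling rule itself is an assumption, not necessarily the authors' exact method.
kept_core_hours = 4785357        # core-hours of the selected jobs
removed_core_hours = 13437365    # core-hours cut by the selection (~73.7%)
total_cores = 6304               # cores in the clusters considered (see footnote (1))
cores_per_node = 16

kept_fraction = kept_core_hours / (kept_core_hours + removed_core_hours)
nb_nodes = round(total_cores * kept_fraction / cores_per_node)
print(f"kept fraction: {kept_fraction:.1%}, platform: {nb_nodes} nodes x {cores_per_node} cores")
```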
# Exploration of different user levers for demand response in data centers
## List of experiments
- TODO
# Characterization of different user behaviors for demand response in data centers
This repository contains the scripts and files needed to reproduce the experiments presented in the paper "Characterization of different user behaviors for demand response in data centers" submitted to the [Euro-Par 2022 conference](https://2022.euro-par.org/).
## Install
Clone the repository
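For example (the repository address is not reproduced here; replace the placeholder with the actual URL):
```bash
git clone <repository-url>
cd <repository-folder>
```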
## Description of the main files
- `0_prepare_workload.ipynb`: Jupyter notebook downloading and preparing the workload trace used in the experiments
- `campaign2.py`: Python script preparing and launching the 105 experiments in parallel. Each experiment corresponds to one instance of `instance2.py`.
- `analyse_campaign2.ipynb`: Jupyter notebook analysing the results and plotting the graphs shown in the article
(campaign1 is another set of experiments not discussed in the article. We kept it here, but beware that some paths might be broken...)
For reproducibility, all the dependencies for these experiments and their versions are managed with the `nix` package manager (see the install steps below).
## Steps to reproduce
You will need ~10 GB of disk space for the inputs, outputs and dependencies.
### 1. Install
For the sake of reproducibility, all the dependencies for these experiments and their versions (release tag or commit number) are managed with the Nix package manager. If you don't have it on your machine, the following command should install it; otherwise, please refer to [their documentation](https://nixos.org/download.html).
```bash
curl -L https://nixos.org/nix/install | sh
```
The main software used (and configured in the file `default.nix`) are:
- [Batsim](https://batsim.org/) and [SimGrid](https://simgrid.org/) for the infrastructure simulation
- [Batmen](https://gitlab.irit.fr/sepia-pub/mael/batmen): our set of schedulers for Batsim and a plugin to simulate users
- python3, pandas, jupyter, matplotlib etc. for the data analysis
Enter a shell with all the dependencies managed. This will take some time to download and compile everything the first time you launch it, but the whole environment is then cached for future use.
```bash
nix-shell -A exp_env
```
### 2. Prepare input workload
Inside the nix shell, start a notebook and follow the steps presented in `0_prepare_workload.ipynb`:
```bash
jupyter notebook 0_prepare_workload.ipynb
```
### 3. Launch the campaign
Still inside the nix shell, launch the python script `campaign2.py`. It took about 2 hours for us to execute, running in parallel on a 16-core Intel Xeon E5-2630 v3 machine.
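A minimal way to start it, assuming `campaign2.py` takes no extra arguments (check the script itself if it does):
```bash
python3 campaign2.py
```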