MarketPlace
Marketplace-based distributed scheduling for cloud data centers.
This repository shows how to run the experiments and analysis of the article "Cooperative vs. non-cooperative marketplace for computing jobs to be run on geo-distributed data centers only supplied by renewable energies".
The experimental campaign can be re-executed as follows:
- Run the experiments;
- Put the results of the experiments inside the XPs folder;
- Re-run the Jupyter notebooks for each analysis (see the command below).
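For the last step, the notebooks can be re-executed non-interactively with nbconvert (assuming jupyter is available in the environment; replace <notebook> with the notebook you want to run):
jupyter nbconvert --to notebook --execute --inplace Analysis/<notebook>.ipynb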
Directory organization
This directory has the following folders:
- Analysis: Notebooks reproducing the analysis of each section of the article;
- ExecutionScripts: The scripts to run the experiments;
- GenerateExperiments: Python classes for managing the experiments;
- GenerateInstance: Python classes for creating the instances (jobs, services, and power);
- GenerateMetrics: Python classes for organizing the metrics;
- GenerateProblem: Python classes for solving the problem using LP;
- GenerateVisualisation: Python classes for creating graphs;
- Heuristic: Python classes for solving the problem using a heuristic;
- RenewableData: Data and scripts to generate renewable production from real traces;
- XPs: The results of the experiments.
Installing the environment
All dependencies can be installed using Nix. The file default.nix declares them (see the buildInputs variable inside this file).
To enter an environment with these dependencies, execute the following command:
nix-shell
Besides these dependencies, we executed our experiments using Gurobi, so a Gurobi license is needed to reproduce exactly the same results. It is possible to use another solver by modifying the call to prob.solve() in GenerateExperiments/experiments.py. However, in that case, the results may differ (mainly the execution times).
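As an illustration, here is a minimal PuLP sketch of how the solver call can be swapped. The problem below is a toy stand-in, not the actual model built in GenerateExperiments/experiments.py:

```python
import pulp

# Toy problem standing in for the LP built by the experiment code.
prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
prob += x        # objective
prob += x >= 1   # constraint

# With a Gurobi license:
# prob.solve(pulp.GUROBI_CMD())
# Without one, fall back to the bundled CBC solver:
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(x))
```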
The dependencies include at least:
- pandas
- numpy
- matplotlib
- jupyter
- ipython
- pulp
- colorama
- termcolor
- tabulate
- seaborn
- cbc
- gurobi
Time window
We defined 72 time steps. Assuming that each time step lasts 1 hour, this corresponds to a three-day experiment.
Workload
We defined three workload sizes:
- Large: This workload has 238 jobs and 238 services. This demand corresponds to ~30% (30.010502047110858%) of the total server capacity over the whole time window;
- Medium: This workload has 159 jobs and 159 services. This demand corresponds to ~20% (19.994626128183818%) of the total server capacity over the whole time window;
- Small: This workload has 80 jobs and 80 services. This demand corresponds to ~10% (10.073523014606495%) of the total server capacity over the whole time window.
We created 100 different workloads for each size, i.e., 100 large, 100 medium, and 100 small workloads.
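As an illustration, the demand percentage can be read as occupied server-time divided by total available server-time. A minimal sketch, where the number of servers is hypothetical (the actual value comes from the generated instances):

```python
# Hypothetical illustration of the demand percentage:
# occupied server-hours / total available server-hours.
TIME_STEPS = 72     # as defined in the "Time window" section
NUM_SERVERS = 100   # hypothetical value; in practice it comes from the instances

def demand_fraction(server_hours_requested: float) -> float:
    """Fraction of total capacity (servers x time steps) used by a workload."""
    return server_hours_requested / (NUM_SERVERS * TIME_STEPS)

# e.g. a workload occupying 2161 server-hours uses ~30% of the capacity
print(f"{demand_fraction(2161):.2%}")
```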
Renewable energy production
We propose several renewable energy production profiles. The file renewable.ipynb presents all of them.
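As a rough illustration of the bell-shaped production used in Section 7.1, here is a hedged numpy sketch. It is not the exact generator; the real traces and scripts live in the RenewableData folder:

```python
import numpy as np

# Hypothetical bell-shaped (Gaussian) renewable production profile
# over the 72 time steps.
TIME_STEPS = 72
PEAK_POWER = 1.0  # hypothetical peak, in normalized units

t = np.arange(TIME_STEPS)
center, width = TIME_STEPS / 2, TIME_STEPS / 8
production = PEAK_POWER * np.exp(-((t - center) ** 2) / (2 * width ** 2))
```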
Experimental campaigns
We have the following experimental campaigns:
- MILP and Heuristic comparison: Comparison between the MILP and the cooperative heuristic. The power production comes from the bell-shaped generation. This is Section 7.1 of the article;
- Centralized and distributed comparison: We compare the execution on a centralized data center with the distributed approach. This is Section 7.2 of the article;
- Non-cooperative and cooperative comparison: We compare the non-cooperative and cooperative approaches under different power productions. This is Section 7.3 of the article (including subsections 7.3.1, 7.3.2, and 7.3.3).
Running the experiments
From the root folder (with the Nix shell activated), execute the following commands:
python ExecutionScripts/1_milp.py
python ExecutionScripts/2_exps_bell.py
python ExecutionScripts/3_exps_amplitude.py
python ExecutionScripts/4_exps_real.py
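Alternatively, a small wrapper can run the four scripts in order (a sketch; the scripts themselves remain the reference):

```python
import subprocess

# Run the four campaign scripts in order, stopping on the first failure.
SCRIPTS = [
    "ExecutionScripts/1_milp.py",
    "ExecutionScripts/2_exps_bell.py",
    "ExecutionScripts/3_exps_amplitude.py",
    "ExecutionScripts/4_exps_real.py",
]

for script in SCRIPTS:
    print(f"Running {script} ...")
    subprocess.run(["python", script], check=True)
```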
Executing everything can take a long time. For this reason, we make the experiment outputs available on Zenodo.
Analyzing the results
The Analysis folder contains the analysis for each section of the article.