diff --git "a/notebooks/TP10_m2LiTL_transformers_explicabilit\303\251__2425_SUJET.ipynb" "b/notebooks/TP10_m2LiTL_transformers_explicabilit\303\251__2425_SUJET.ipynb" new file mode 100644 index 0000000000000000000000000000000000000000..e2f2a34ea799d6cc7d038206e6d381ccad05f630 --- /dev/null +++ "b/notebooks/TP10_m2LiTL_transformers_explicabilit\303\251__2425_SUJET.ipynb" @@ -0,0 +1,933 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "-bb49S7B50eh" + }, + "source": [ + "# TP 10: Transformers, explicabilité et biais\n", + "\n", + "Dans cette séance, nous verrons comment analyser les prédictions du modèle pour comprendre les résultats/analyser les erreurs et chercher les biais éventuels du modèle lié aux données d'entrainement (de la tâche ou du modèle préentrainé)\n", + "\n", + "Nous nous intéresserons encore à la tâche d'analyse de sentiments, sur les données françaises AlloCine et anglaises IMDB.\n", + "Il s'agit d'une tâche de classification de séquences de mots.\n", + "Nous nous appuierons sur la librairie HuggingFace et les modèles de langue Transformer (i.e. BERT). \n", + "- https://huggingface.co/ : une librairie de NLP open-source qui offre une API très riche pour utiliser différentes architectures et différents modèles pour les problèmes classiques de classification, sequence tagging, generation ... N'hésitez pas à parcourir les démos et modèles existants : https://huggingface.co/tasks/text-classification\n", + "- Un assez grand nombre de jeux de données est aussi accessible directement via l'API, pour le texte ou l'image notamment cf les jeux de données https://huggingface.co/datasets et la doc pour gérer ces données : https://huggingface.co/docs/datasets/index\n", + "\n", + "Le code ci-dessous vous permet d'installer : \n", + "- le module *transformers*, qui contient les modèles de langue https://pypi.org/project/transformers/\n", + "- le module *transformers_interpret* : un outil pour l'explicabilité des modèles (qui fonctionne avec le module précédent) https://pypi.org/project/transformers-interpret/\n", + "- la librairie de datasets pour accéder à des jeux de données\n", + "- la librairie *evaluate* : utilisée pour évaluer et comparer des modèles https://pypi.org/project/evaluate/" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9UoSnFV250el" + }, + "outputs": [], + "source": [ + "!pip install -U transformers\n", + "!pip install accelerate -U\n", + "!pip install transformers_interpret\n", + "!pip install datasets\n", + "!pip install evaluate\n", + "#%pip install -U sklearn" + ] + }, + { + "cell_type": "markdown", + "source": [ + "Finally, if the installation is successful, we can import the transformers library:" + ], + "metadata": { + "id": "StClx_Hh9PDm" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ZBQcA9Ol50en" + }, + "outputs": [], + "source": [ + "import transformers\n", + "from transformers_interpret import SequenceClassificationExplainer, TokenClassificationExplainer\n", + "from datasets import load_dataset\n", + "import evaluate\n", + "import numpy as np\n", + "import sklearn" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "3TIXCS5P50en" + }, + "outputs": [], + "source": [ + "from transformers import AutoModelForSequenceClassification, AutoTokenizer\n", + "from transformers import AutoModelForTokenClassification" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "vCLf1g8z50ep" + }, + 
"outputs": [], + "source": [ + "import pandas as pds\n", + "from tqdm import tqdm" + ] + }, + { + "cell_type": "markdown", + "source": [ + "# Part 1: Transformers pipeline\n", + "\n" + ], + "metadata": { + "id": "uGZBOXpTXA72" + } + }, + { + "cell_type": "code", + "source": [ + "from transformers import pipeline" + ], + "metadata": { + "id": "Od8TVRnQJ8TH" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### 1.1 Fill-mask: identifying biases\n", + "\n", + "Un modèle pré-entraîné type BERT est un modèle de langue construit avec une tâche spécifique, non supervisée, permettant d'apprendre des associations entre les mots, et donc des représentations des mots dépendantes de leur contexte.\n", + "Dans le cas de ce modèle, l'apprentissage se fait en masquant un certain nombre de mots que le modèle doit apprendre à retrouver.\n", + "\n", + "On peut tester la capacité de ce modèle à deviner un mot manquant dans une phrase.\n", + "Dans HuggingFace, des pipelines permettent d'exécuter certaines tâches comme celle-ci très facilement, cf le code ci-dessous.\n", + "\n", + "https://huggingface.co/docs/transformers/main_classes/pipelines" + ], + "metadata": { + "id": "mgZLir27AJhe" + } + }, + { + "cell_type": "markdown", + "source": [ + "#### ▶▶ **Exercice : fill-mask** \n", + "- Faire tourner le code ci-dessous et vérifier que vous comprenez la sortie affichée.\n", + "- Est-ce que les sorties proposées font sens à vos yeux ?" + ], + "metadata": { + "id": "HwRF_nyRiH2I" + } + }, + { + "cell_type": "code", + "source": [ + "# Chosing the pre-trained model\n", + "# - distilBERT: specific, faster and lighter version of BERT\n", + "# - base vs large\n", + "# - uncased: ignore upper case\n", + "base_model = \"distilbert-base-uncased\"" + ], + "metadata": { + "id": "DztvpOSXNIrx" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "unmasker = pipeline('fill-mask', model=base_model)\n", + "unmasker(\"Hello I'm a [MASK] model.\")" + ], + "metadata": { + "id": "Rz3VKNRWxZVK" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### 1.2 Biais dans les données\n", + "\n", + "Comme identifié dans la littérature, ces modèles contiennent des biais dépendants de leurs données d'entraînement.\n", + "\n", + "- Article e.g. 
*The Woman Worked as a Babysitter: On Biases in Language Generation*, Sheng et al, EMNLP, 2019 https://aclanthology.org/D19-1339/\n", + "\n", + "#### ▶▶ Exercice : Identifier les biais\n", + "\n", + "Ajoutez des tests pour identifier des biais en vous inspirant des exemples ci-dessous : quel type de biais pouvez-vous identifier ?\n", + "\n" + ], + "metadata": { + "id": "txdDbcvAiYGv" + } + }, + { + "cell_type": "code", + "source": [ + "unmasker(\"The woman worked as a [MASK].\")" + ], + "metadata": { + "id": "McGZfdLFVfd7" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "unmasker(\"The man with a college degree worked as a [MASK].\")" + ], + "metadata": { + "id": "djn2WiRdi-vL" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "#### --- CORRECTION" + ], + "metadata": { + "id": "95TRIipye0aF" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "hpeu_rHdY1Bo" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "# Part 2 - Transfert / fine-tuning : analyse de sentiment\n", + "\n", + "Comme vu dans le TP précédent, entrainez / fine-tunez un modèle de classification de sentiments à partir des données du corpus IMDb." + ], + "metadata": { + "id": "HUx1kHH8eUjE" + } + }, + { + "cell_type": "markdown", + "source": [ + "### 2.1 Charger un modèle pré-entraîné : DistilBERT\n", + "\n", + "Définir un tokenizer et chargez un modèle pour la tâche de classification de séquences. Vous utiliserez le modèle de base pré-entraîné DistilBERT.\n", + "\n", + "- distilBERT: https://huggingface.co/distilbert-base-uncased\n", + "- Les *Auto Classes*: https://huggingface.co/docs/transformers/model_doc/auto\n", + "- Les Tokenizer dans HuggingFace: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/tokenizer\n", + "- *Bert tokenizer*: https://huggingface.co/docs/transformers/v4.25.1/en/model_doc/bert#transformers.BertTokenizer\n", + "- Classe *PreTrainedTokenizerFast*: https://huggingface.co/docs/transformers/v4.25.1/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast\n" + ], + "metadata": { + "id": "c40x3RDbB3Qo" + } + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "v80gjCzARYqh" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "IBY-P4iiRddM" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### 2.2 Load new data for transfer\n", + "\n", + "On charge ici l'ensemble de données IMDB." + ], + "metadata": { + "id": "8lt8MjqYIZCl" + } + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "sdac6kcTSNFi" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "7B8LTmuJSNFk" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### 2.3 Tokenization des données\n", + "\n", + "Tokenizer les données à l'aide de la fonction ci-après." 
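+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Sketch, not the official correction:* one possible way to fill in the empty SOLUTION cells of 2.1, 2.2 and 2.3. It reuses `base_model` from Part 1, the imports made at the top of the notebook, and the variable names (`tokenizer`, `model`, `tokenized_datasets`) that the training cells further down expect. `tokenize_function` is defined in the next cell, so run that cell before the last line of this sketch."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "metadata": {},
+   "execution_count": null,
+   "outputs": [],
+   "source": [
+    "# 2.1 - tokenizer and model for binary sentiment classification\n",
+    "tokenizer = AutoTokenizer.from_pretrained(base_model)\n",
+    "model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)\n",
+    "\n",
+    "# 2.2 - IMDB movie reviews (25k train / 25k test, labels 0=negative, 1=positive)\n",
+    "dataset = load_dataset(\"imdb\")\n",
+    "\n",
+    "# 2.3 - tokenize every split with tokenize_function (defined in the cell just below)\n",
+    "tokenized_datasets = dataset.map(tokenize_function, batched=True)\n"
+   ]
+  },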
+ ], + "metadata": { + "id": "SbjUad2-tecl" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "-Kj0bW3_50et" + }, + "outputs": [], + "source": [ + "def tokenize_function(examples):\n", + " return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)" + ] + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "UmG9HWXZSeaK" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "eGpk8DnfShQ7" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "## 2.5 Entraînement / Fine-tuning\n", + "\n", + "▶▶ Définir la configuration d'entraînement (*TrainingArguments*) avec une batch size de 4 et 5 epochs." + ], + "metadata": { + "id": "HYws35k8xCq0" + } + }, + { + "cell_type": "code", + "source": [ + "from transformers import TrainingArguments, Trainer\n", + "from transformers.utils import logging\n", + "\n", + "logging.set_verbosity_error()\n", + "\n", + "metric = evaluate.load(\"accuracy\")\n", + "\n", + "def compute_metrics(eval_pred):\n", + " logits, labels = eval_pred\n", + " predictions = np.argmax(logits, axis=-1)\n", + " return metric.compute(predictions=predictions, references=labels)\n", + "\n", + "# training_args = ..." + ], + "metadata": { + "id": "6F38e50_Su6G" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "x0FNMImhSu6D" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "_MHYOi2mZSNk" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### Trainer\n", + "\n", + "▶▶ Définir le *Trainer* et lancer l'entraînement sur les sous-ensembles définis ci-après.\n", + "\n", + "https://huggingface.co/docs/transformers/main_classes/trainer" + ], + "metadata": { + "id": "8FEJYEhDxoCp" + } + }, + { + "cell_type": "markdown", + "source": [ + "On va sélectionner un sous-ensemble des données ici, pour que l'entraînement soit un peu moins long." + ], + "metadata": { + "id": "4QUvGEbOvRTH" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Dgfoqbx950eu" + }, + "outputs": [], + "source": [ + "small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n", + "small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(100))" + ] + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "f2ba3SdeTS_V" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "s_60B32WTS_Y" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### Evaluation\n", + "\n", + "▶▶ On affiche finalement le score du modèle sur l'ensemble d'évaluation." + ], + "metadata": { + "id": "VaBD1-jaoR3w" + } + }, + { + "cell_type": "code", + "source": [ + "if training_args.do_eval:\n", + " metrics = trainer.evaluate(eval_dataset=small_eval_dataset)\n", + " print(metrics)" + ], + "metadata": { + "id": "3IdSk-1XHiVK" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "La fonction ci-après affiche les erreurs du modèle." 
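+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Sketch, not the official correction:* a minimal `TrainingArguments` / `Trainer` configuration for exercise 2.5 and the Trainer section above (it belongs in their empty SOLUTION cells). `output_dir` is an arbitrary folder name chosen here, and `do_eval=True` is set because the evaluation cells test `training_args.do_eval`. The error-listing cell announced just above comes right after this sketch."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "metadata": {},
+   "execution_count": null,
+   "outputs": [],
+   "source": [
+    "# training configuration: batch size 4, 5 epochs, as requested in 2.5\n",
+    "training_args = TrainingArguments(\n",
+    "    output_dir=\"tp10_imdb\",            # arbitrary output folder (assumption)\n",
+    "    num_train_epochs=5,\n",
+    "    per_device_train_batch_size=4,\n",
+    "    per_device_eval_batch_size=4,\n",
+    "    do_eval=True,                      # the evaluation cells check training_args.do_eval\n",
+    ")\n",
+    "\n",
+    "# Trainer on the small train/eval subsets defined above\n",
+    "trainer = Trainer(\n",
+    "    model=model,\n",
+    "    args=training_args,\n",
+    "    train_dataset=small_train_dataset,\n",
+    "    eval_dataset=small_eval_dataset,\n",
+    "    compute_metrics=compute_metrics,\n",
+    ")\n",
+    "trainer.train()\n"
+   ]
+  },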
+ ], + "metadata": { + "id": "JBsYrp1_ZvXt" + } + }, + { + "cell_type": "code", + "source": [ + "if training_args.do_eval:\n", + " prob_labels,_,_ = trainer.predict( test_dataset=small_eval_dataset)\n", + " pred_labels = [ np.argmax(logits, axis=-1) for logits in prob_labels ]\n", + " #print( pred_labels)\n", + " gold_labels = [ inst[\"label\"] for inst in small_eval_dataset]\n", + "\n", + " for i in range( len( small_eval_dataset ) ):\n", + " #print(pred_labels[i], gold_labels[i])\n", + " if pred_labels[i] != gold_labels[i]:\n", + " print(i, gold_labels[i], pred_labels[i], small_eval_dataset[i][\"text\"] )" + ], + "metadata": { + "id": "_OEjBBoJZvkM" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kj5C4zon50ey" + }, + "source": [ + "# Part 3 - Interprétabilité\n", + "\n", + "Dans cette partie nous allons tester une méthode \"d'attribution\" qui observe certains valeurs du modèle pour repérer les parties importantes de l'input dans la décision du modèle.\n", + "\n", + "Nous utiliserons le package *transformers_interpret*, qui est une surcouche de la librairie plus générale *captum*.\n", + "\n", + "- Captum library: https://captum.ai/" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "rKUWY_xh50ey" + }, + "source": [ + "## 3.1 Classification de phrases: sentiment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "L_90kDt150ey" + }, + "outputs": [], + "source": [ + "# pour utiliser un modèle existant répertorié sur huggingface.co\n", + "#model_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\n", + "#model = AutoModelForSequenceClassification.from_pretrained(model_name)\n", + "#tokenizer = AutoTokenizer.from_pretrained(model_name)" + ] + }, + { + "cell_type": "markdown", + "source": [ + "### ▶▶ Exercice : Afficher les attributions pour un exemple correctement prédit\n", + "\n", + "Utiliser le *cls_explainer* défini ci-dessous pour afficher les attributions pour chaque mot pour :\n", + "- un exemple correctement prédit (récupérer un exemple à partir de son indice à partir de l'exercice précédent)\n", + "- un exemple correspondant à une erreur du modèle\n", + "\n", + "Utilisez eégalement la fonction de visualisation des attributions.\n", + "\n", + "Aidez-vous de l'exemple sur cette page : https://pypi.org/project/transformers-interpret/" + ], + "metadata": { + "id": "xUh2_lqxho0n" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "6EbVZpow50ez" + }, + "outputs": [], + "source": [ + "cls_explainer = SequenceClassificationExplainer(\n", + " model,\n", + " tokenizer)" + ] + }, + { + "cell_type": "markdown", + "source": [ + "---------\n", + "SOLUTION" + ], + "metadata": { + "id": "2UHYc10giO8p" + } + }, + { + "cell_type": "code", + "source": [ + "# récupérer un exemple / le texte correctement predit\n" + ], + "metadata": { + "id": "hRnH27AOiFp1" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "# Recuperer les attributions\n", + "# word_attributions = ...\n" + ], + "metadata": { + "id": "E-lWGvF45gcJ" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "GxCWlucU50ez" + }, + "outputs": [], + "source": [ + "# Afficher les attributions\n" + ] + }, + { + "cell_type": "markdown", + "source": [ + "### Visualisation\n", + "\n", + "Le code ci-après vous permet de visualiser les attributions pour un exemple." 
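+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Sketch, not the official correction:* one way to obtain the `word_attributions` used by the visualization cells just below. The index `good_idx` is a placeholder (assumption): replace it with the index of a review that your model classified correctly, taken from the error listing of the evaluation section."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "metadata": {},
+   "execution_count": null,
+   "outputs": [],
+   "source": [
+    "# placeholder index, to be replaced with an index observed in your own run\n",
+    "good_idx = 0   # assumed to be a correctly predicted review\n",
+    "\n",
+    "# keep the review short: attributions on a full 512-token review are slow to compute\n",
+    "good_text = \" \".join(small_eval_dataset[good_idx][\"text\"].split()[:50])\n",
+    "\n",
+    "word_attributions = cls_explainer(good_text)\n",
+    "print(cls_explainer.predicted_class_name)\n",
+    "word_attributions[:10]\n",
+    "# for the next exercise, redo the same steps with one of the error indices\n",
+    "# printed by the evaluation section above\n"
+   ]
+  },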
+ ], + "metadata": { + "id": "LLYt2uH7pUuX" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0mmp7RCi50e0" + }, + "outputs": [], + "source": [ + "table = pds.DataFrame(word_attributions,columns=[\"tokens\",\"score\"])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "GP_QnEAf50e0" + }, + "outputs": [], + "source": [ + "table.iloc[::-1].plot(x=\"tokens\",y=\"score\",kind=\"barh\",figsize=(15,15))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "5I9SdaWY50e0" + }, + "outputs": [], + "source": [ + "html = cls_explainer.visualize()" + ] + }, + { + "cell_type": "markdown", + "source": [ + "### ▶▶ Exercice : Afficher les attributions pour un exemple mal prédit\n", + "\n", + "Recommencer les étapes précédentes pour un exemple correspondant à une erreur du système." + ], + "metadata": { + "id": "IMOLP2uCpf2V" + } + }, + { + "cell_type": "markdown", + "source": [ + "-----------\n", + "SOLUTION" + ], + "metadata": { + "id": "uNKwytwuaGVa" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "jcuY25soaGhI" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "source": [ + "### ▶▶ Exercice : chercher les termes corrélés à chaque classe\n", + "\n", + "Le code suivant permet de chercher les termes corrélés à chaque classe, via les étpaes suivantes :\n", + "- Appliquer le modèle appris sur l'éval de imdb\n", + "- Appliquer l'interprétation sur un ensemble d'instances (100 puis 1000) et relever les termes avec les attributions les plus fortes, dans un sens ou dans l'autre. Réduisez la taille des phrases des reviews à 30 tokens.\n", + "- Trouvez les éventuels biais du jeu de données\n", + "\n" + ], + "metadata": { + "id": "23--_RYHjq-e" + } + }, + { + "cell_type": "code", + "source": [ + "def get_topk(attributions,k=5,threshold=None):\n", + " \"\"\"recup des k tokens les plus positifs + k tokens les plus négatifs\"\"\"\n", + " table = pds.DataFrame(word_attributions,columns=[\"tokens\",\"score\"])\n", + " high = table.nlargest(k,\"score\")\n", + " low = table.nsmallest(k,\"score\")\n", + " return high,low" + ], + "metadata": { + "id": "G4cN9FVNumeH" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "get_topk(word_attributions)" + ], + "metadata": { + "id": "waGGZz-3wSVg" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "def cut_sentence(sent,threshold):\n", + " toks = sent.split()[:threshold]\n", + " return \" \".join(toks)\n", + "\n", + "one = small_eval_dataset[0][\"text\"]\n", + "cut_sentence(one,50)" + ], + "metadata": { + "id": "EjKCi-pvxN_a" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "maxseqlength = 30\n", + "small_eval_dataset_text = [cut_sentence(one[\"text\"],maxseqlength) for one in small_eval_dataset]" + ], + "metadata": { + "id": "CzVhne2S5typ" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "all_pos = []\n", + "all_neg = []\n", + "\n", + "for sentence in tqdm(small_eval_dataset_text[:100]):\n", + " word_attributions = cls_explainer(sentence)\n", + " label = cls_explainer.predicted_class_name\n", + " high,low = get_topk(word_attributions)\n", + " if label == \"LABEL_1\":\n", + " all_pos.append(high)\n", + " else:\n", + " all_neg.append(high)\n" + ], + "metadata": { + "id": "hP28_7GwuC23" + }, + "execution_count": null, + "outputs": 
[] + }, + { + "cell_type": "code", + "source": [ + "df_high = pds.concat(all_pos)\n", + "df_low = pds.concat(all_neg)\n", + "df_high" + ], + "metadata": { + "id": "Kp0V1zKl6TNo" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "df_high_avg = df_high.groupby(\"tokens\").mean()\n", + "df_low_avg = df_low.groupby(\"tokens\").mean()" + ], + "metadata": { + "id": "becpniSG6jDr" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "df_high_avg.nlargest(20,\"score\")" + ], + "metadata": { + "id": "BRh3UR5Y61nX" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "df_low_avg.nlargest(20,\"score\")" + ], + "metadata": { + "id": "szxynupe7LBV" + }, + "execution_count": null, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-XRoTAe-50e1" + }, + "source": [ + "## 3.2 Classification de tokens : entités nommées" + ] + }, + { + "cell_type": "markdown", + "source": [ + "### ▶▶ Exercice : Explication de modèle de reconnaissance d'entités nommées\n", + "\n", + "On définit ci-dessous un modèle de reconnaissance d'entités nommées.\n", + "Utilisez l'outil d'explicabilité pour une tâche de classification de token, et affichez les attributions pour un exemple." + ], + "metadata": { + "id": "jwvarY88mHD4" + } + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "s881ijMF50e1" + }, + "outputs": [], + "source": [ + "model_name = 'dslim/bert-base-NER'\n", + "model = AutoModelForTokenClassification.from_pretrained(model_name)\n", + "tokenizer = AutoTokenizer.from_pretrained(model_name)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "gIBA1tnP50e1" + }, + "outputs": [], + "source": [ + "ner_explainer = TokenClassificationExplainer(model=model, tokenizer=tokenizer)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "D7f05OD550e1" + }, + "outputs": [], + "source": [ + "instance = \"New-York City is a place full of celebrities, like Donald Trump.\"" + ] + }, + { + "cell_type": "markdown", + "source": [ + "------\n", + "SOLUTION" + ], + "metadata": { + "id": "v2SnhLchauK4" + } + }, + { + "cell_type": "code", + "source": [], + "metadata": { + "id": "UxDnS0T-auVe" + }, + "execution_count": null, + "outputs": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "visual", + "language": "python", + "name": "visual" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.5" + }, + "colab": { + "provenance": [], + "collapsed_sections": [ + "-XRoTAe-50e1" + ], + "toc_visible": true + }, + "accelerator": "GPU", + "gpuClass": "standard" + }, + "nbformat": 4, + "nbformat_minor": 0 +} \ No newline at end of file diff --git a/slides/MasterLiTL_2425_Course6_280125.pdf b/slides/MasterLiTL_2425_Course6_280125.pdf new file mode 100644 index 0000000000000000000000000000000000000000..8359ea18fb79a1fc99a763b555c3464cda2faf72 Binary files /dev/null and b/slides/MasterLiTL_2425_Course6_280125.pdf differ
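+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "*Sketch, not the official correction (belongs in the empty SOLUTION cell of section 3.2 above):* the token-classification explainer is used like the sequence-classification one, following the transformers-interpret documentation."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "metadata": {},
+   "execution_count": null,
+   "outputs": [],
+   "source": [
+    "# attributions of the input tokens for the label predicted on each token\n",
+    "word_attributions = ner_explainer(instance)\n",
+    "ner_explainer.visualize()\n"
+   ]
+  },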