diff --git a/notebooks/TP11_M2LiTL_Prompting_SUJET_2425.ipynb b/notebooks/TP11_M2LiTL_Prompting_SUJET_2425.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..b54bae8cb898d92dc68bf3184e6c7c2436e3d7b3
--- /dev/null
+++ b/notebooks/TP11_M2LiTL_Prompting_SUJET_2425.ipynb
@@ -0,0 +1,485 @@
+{
+  "nbformat": 4,
+  "nbformat_minor": 0,
+  "metadata": {
+    "colab": {
+      "provenance": [],
+      "gpuType": "T4"
+    },
+    "kernelspec": {
+      "name": "python3",
+      "display_name": "Python 3"
+    },
+    "language_info": {
+      "name": "python"
+    },
+    "accelerator": "GPU"
+  },
+  "cells": [
+    {
+      "cell_type": "markdown",
+      "source": [
+        "# TP 11: Playing with prompting\n",
+        "\n",
+        "Go to the tutorial on HuggingFace:\n",
+        "https://huggingface.co/docs/transformers/main/en/tasks/prompting\n",
+        "\n",
+        "and read:\n",
+        "* the introduction\n",
+        "* the Basics of prompting"
+      ],
+      "metadata": {
+        "id": "npugWhz3fzQu"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Basics of prompting\n",
+        "\n",
+        "Test the generation based on prompting using the code in the tutorial.\n",
+        "\n",
+        "▶▶ **Run inference with decoder-only models with the text-generation pipeline:**"
+      ],
+      "metadata": {
+        "id": "5b5FpjF0gJwm"
+      }
+    },
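+    {
+      "cell_type": "code",
+      "source": [
+        "# One possible solution, adapted from the HuggingFace tutorial.\n",
+        "# The gpt2 checkpoint is an assumption: any decoder-only model works here.\n",
+        "from transformers import pipeline\n",
+        "import torch\n",
+        "\n",
+        "torch.manual_seed(0)\n",
+        "generator = pipeline(\"text-generation\", model=\"openai-community/gpt2\")\n",
+        "prompt = \"Hello, I'm a language model\"\n",
+        "generator(prompt, max_length=30)"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },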
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "fU45GnmHtYd6"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Print only the generated text, and vary the max_lenght. Run again a few times to see different outputs.**"
+      ],
+      "metadata": {
+        "id": "C0mgaYsDg3vK"
+      }
+    },
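+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch, reusing `generator` and `prompt` from the cell above.\n",
+        "# The pipeline returns a list of dicts; \"generated_text\" holds the output.\n",
+        "# Note: max_length counts the prompt tokens too, while max_new_tokens\n",
+        "# counts only the generation; do_sample=True gives varying outputs.\n",
+        "outputs = generator(prompt, max_length=60, do_sample=True)\n",
+        "print(outputs[0][\"generated_text\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },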
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "t5M4wQsftZKP"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "Hphmdt4htbN7"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Try with prompts that could reveal biases**"
+      ],
+      "metadata": {
+        "id": "Ywl3xufMh9Av"
+      }
+    },
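+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: stereotype-probing prompt pairs (illustrative choices, not the\n",
+        "# only ones); reuses `generator` from above.\n",
+        "for p in [\"The nurse said that\", \"The doctor said that\"]:\n",
+        "    out = generator(p, max_length=30, do_sample=True)\n",
+        "    print(out[0][\"generated_text\"], \"\\n\")"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },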
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "CYUcdObHtbyF"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "wubAmmestctF"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "wQIbDp2CtcxS"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **To run inference with an encoder-decoder, use the text2text-generation pipeline:**"
+      ],
+      "metadata": {
+        "id": "dkENN2iPh3e_"
+      }
+    },
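+    {
+      "cell_type": "code",
+      "source": [
+        "# One possible solution, adapted from the tutorial.\n",
+        "# The flan-t5-base checkpoint is an assumption: any encoder-decoder works.\n",
+        "text2text_generator = pipeline(\"text2text-generation\", model=\"google/flan-t5-base\")\n",
+        "text2text_generator(\"Translate from English to French: I'm very happy\")"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },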
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "mhkcRb90tfZl"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Try to check for (genre) biases.**"
+      ],
+      "metadata": {
+        "id": "ZP8fP2Z6iXEk"
+      }
+    },
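+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: French marks grammatical gender, so translations of gender-neutral\n",
+        "# English nouns can reveal stereotyped defaults (infirmière vs. médecin).\n",
+        "# Reuses `text2text_generator` from the cell above.\n",
+        "for p in [\"Translate from English to French: The nurse is here.\",\n",
+        "          \"Translate from English to French: The doctor is here.\"]:\n",
+        "    print(text2text_generator(p)[0][\"generated_text\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },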
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "fONYSgCntg4k"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "KIQfgCmHtg-2"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ Read the rest of the tutorial:\n",
+        "* Base vs instruct/chat models\n",
+        "* NLP tasks\n",
+        "* Best practices of LLM prompting\n",
+        "* Advanced prompting techniques"
+      ],
+      "metadata": {
+        "id": "wiEVp1sqicAQ"
+      }
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Instruction model\n",
+        "\n",
+        "▶ **Now let's try some prompting with an instruct model**\n",
+        "\n",
+        "* Except we will not use falcon 7B, which is too big for us, but a smaller one: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct\n",
+        "* Load the model using the pipeline as described in the tutorial\n",
+        "* Copy the code from the tutorial to perform sentiment analysis with the generative model\n",
+        "\n",
+        "The model generation example uses chat templates, if you want to better understand, see: https://github.com/huggingface/smol-course/blob/main/1_instruction_tuning/chat_templates.md"
+      ],
+      "metadata": {
+        "id": "Blboix2ljSsD"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "pip install -q transformers accelerate"
+      ],
+      "metadata": {
+        "id": "xflTKoN9fAqa"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
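+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch, following the tutorial's sentiment-analysis example but with the\n",
+        "# smaller SmolLM2 instruct model; the pipeline applies the chat template for us.\n",
+        "import torch\n",
+        "from transformers import pipeline\n",
+        "\n",
+        "model_id = \"HuggingFaceTB/SmolLM2-1.7B-Instruct\"\n",
+        "pipe = pipeline(\n",
+        "    \"text-generation\",\n",
+        "    model=model_id,\n",
+        "    torch_dtype=torch.bfloat16,\n",
+        "    device_map=\"auto\",\n",
+        ")\n",
+        "\n",
+        "prompt = \"\"\"Classify the text into neutral, negative or positive.\n",
+        "Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen.\n",
+        "Sentiment:\"\"\"\n",
+        "messages = [{\"role\": \"user\", \"content\": prompt}]\n",
+        "outputs = pipe(messages, max_new_tokens=10)\n",
+        "# with chat input, \"generated_text\" is the message list incl. the assistant reply\n",
+        "print(outputs[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },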
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "_-e577Potmrz"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "CY89XjH_tmuh"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Try to reduce the input to see if you can get a different answer**"
+      ],
+      "metadata": {
+        "id": "jPKgv2ctkYeK"
+      }
+    },
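+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: same instruction, keeping only the first sentence of the review;\n",
+        "# reuses `pipe` loaded above.\n",
+        "short_prompt = \"\"\"Classify the text into neutral, negative or positive.\n",
+        "Text: This movie is definitely one of my favorite movies of its kind.\n",
+        "Sentiment:\"\"\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": short_prompt}], max_new_tokens=10)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },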
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "C2AqceW6tr25"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Try to modify the prompt to solve the issue, if any.**"
+      ],
+      "metadata": {
+        "id": "Gwe_3fecklzO"
+      }
+    },
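+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: one common fix is to constrain the output format explicitly\n",
+        "# (the exact wording is just one possibility).\n",
+        "strict_prompt = \"\"\"Classify the text into neutral, negative or positive.\n",
+        "Answer with a single word.\n",
+        "Text: This movie is definitely one of my favorite movies of its kind.\n",
+        "Sentiment:\"\"\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": strict_prompt}], max_new_tokens=5)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },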
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "RkvJdXK9ttbK"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Now try the NER task.**"
+      ],
+      "metadata": {
+        "id": "ZmyF6yyjk4Sb"
+      }
+    },
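+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: NER as an instruction (prompt wording adapted from the tutorial);\n",
+        "# reuses `pipe` loaded above.\n",
+        "ner_prompt = \"\"\"Return a list of named entities in the text, with their type (person, location, organization).\n",
+        "Text: The Golden State Warriors are an American professional basketball team based in San Francisco.\n",
+        "Named entities:\"\"\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": ner_prompt}], max_new_tokens=64)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },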
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "m2K8sFd_tvQE"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Now try the automatic translation, and test for biases.**"
+      ],
+      "metadata": {
+        "id": "msl2EUT4k-ds"
+      }
+    },
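+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: translation by instruction; the second sentence probes gender bias,\n",
+        "# since French must pick a gender for \"nurse\". Reuses `pipe` from above.\n",
+        "for text in [\"Sometimes, I've believed as many as six impossible things before breakfast.\",\n",
+        "             \"The nurse is waiting for the doctor.\"]:\n",
+        "    tr_prompt = f\"Translate the English text to French.\\nText: {text}\\nTranslation:\"\n",
+        "    out = pipe([{\"role\": \"user\", \"content\": tr_prompt}], max_new_tokens=64)\n",
+        "    print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },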
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "lbs3fCCztxa9"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "DEGHIlTgtzZw"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Try the automatic summarization.**"
+      ],
+      "metadata": {
+        "id": "biBxPhVElhVE"
+      }
+    },
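+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: summarization by instruction (text shortened from the tutorial's\n",
+        "# permaculture example); reuses `pipe` from above.\n",
+        "sum_prompt = \"\"\"Write a one-sentence summary of the following text.\n",
+        "Text: Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems. The principles and practices are drawn from traditional knowledge and the scientific method.\n",
+        "Summary:\"\"\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": sum_prompt}], max_new_tokens=64)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },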
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "C7JcJyVht1He"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Question Answering**"
+      ],
+      "metadata": {
+        "id": "WJsTYzgdl-P6"
+      }
+    },
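+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: extractive QA as an instruction (context adapted from the tutorial);\n",
+        "# reuses `pipe` from above.\n",
+        "qa_prompt = \"\"\"Answer the question using the context below.\n",
+        "Context: Gazpacho is a cold soup and drink made of raw, blended vegetables. It originated in the southern regions of the Iberian peninsula and is widely eaten in Spain, particularly during hot summers.\n",
+        "Question: Where does gazpacho come from?\n",
+        "Answer:\"\"\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": qa_prompt}], max_new_tokens=32)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },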
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "Tz8OH15_t25X"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "▶ **Reasonning**"
+      ],
+      "metadata": {
+        "id": "yq1ShlkamJRB"
+      }
+    },
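+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch: a small arithmetic word problem; appending \"Let's think step by step\"\n",
+        "# triggers zero-shot chain-of-thought reasoning. Reuses `pipe` from above.\n",
+        "reasoning_prompt = \"There are 5 groups of students in the class. Each group has 4 students. How many students are there in the class? Let's think step by step.\"\n",
+        "out = pipe([{\"role\": \"user\", \"content\": reasoning_prompt}], max_new_tokens=128)\n",
+        "print(out[0][\"generated_text\"][-1][\"content\"])"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },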
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "KtxfOVqmt4sq"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Loading the model outside a pipeline\n",
+        "\n",
+        "▶ **Now load the model without pipeline, look at the page describing the mode: How ot use and Examples. Try summarization again.**"
+      ],
+      "metadata": {
+        "id": "CD4UUS-vsOYJ"
+      }
+    },
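+    {
+      "cell_type": "code",
+      "source": [
+        "# Sketch, following the 'How to use' section of the SmolLM2 model page:\n",
+        "# load tokenizer and model directly, apply the chat template, then generate.\n",
+        "import torch\n",
+        "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
+        "\n",
+        "checkpoint = \"HuggingFaceTB/SmolLM2-1.7B-Instruct\"\n",
+        "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
+        "tokenizer = AutoTokenizer.from_pretrained(checkpoint)\n",
+        "model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device)\n",
+        "\n",
+        "messages = [{\"role\": \"user\", \"content\": \"Summarize in one sentence: Permaculture is a design process mimicking the diversity, functionality and resilience of natural ecosystems.\"}]\n",
+        "input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",
+        "inputs = tokenizer.encode(input_text, return_tensors=\"pt\").to(device)\n",
+        "outputs = model.generate(inputs, max_new_tokens=64, temperature=0.2, top_p=0.9, do_sample=True)\n",
+        "print(tokenizer.decode(outputs[0]))"
+      ],
+      "metadata": {},
+      "execution_count": null,
+      "outputs": []
+    },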
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "IGQPHzHIt6u5"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [],
+      "metadata": {
+        "id": "QUYiuUFZt6xa"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "markdown",
+      "source": [
+        "## Optional (depending on time)\n",
+        "\n",
+        "Now you can either try to:\n",
+        "* evaluate the instruc model on the dataset used for the project\n",
+        "* or continue investigating prompting, e.g. Function calling as described in the model page, read the Text generation strategies (https://huggingface.co/docs/transformers/main/en/generation_strategies)"
+      ],
+      "metadata": {
+        "id": "er-pcb8HswCF"
+      }
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "# Probably better, but too big, inference takes too long\n",
+        "# from transformers import pipeline, AutoTokenizer\n",
+        "# import torch\n",
+        "\n",
+        "# torch.manual_seed(0)\n",
+        "# model = \"tiiuae/falcon-7b-instruct\"\n",
+        "\n",
+        "# tokenizer = AutoTokenizer.from_pretrained(model)\n",
+        "# pipe = pipeline(\n",
+        "#     \"text-generation\",\n",
+        "#     model=model,\n",
+        "#     tokenizer=tokenizer,\n",
+        "#     torch_dtype=torch.bfloat16,\n",
+        "#     device_map=\"auto\",\n",
+        "# )"
+      ],
+      "metadata": {
+        "id": "5UkvzkmtfGws"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import torch\n",
+        "from transformers import pipeline\n",
+        "\n",
+        "model_id = \"HuggingFaceTB/SmolLM2-1.7B-Instruct\"\n",
+        "pipe = pipeline(\n",
+        "    \"text-generation\",\n",
+        "    model=model_id,\n",
+        "    torch_dtype=torch.bfloat16,\n",
+        "    device_map=\"auto\",\n",
+        ")\n",
+        "messages = [\n",
+        "    {\"role\": \"system\", \"content\": \"You are a pirate chatbot who always responds in pirate speak!\"},\n",
+        "    {\"role\": \"user\", \"content\": \"Who are you?\"},\n",
+        "]\n",
+        "outputs = pipe(\n",
+        "    messages,\n",
+        "    max_new_tokens=256,\n",
+        ")\n",
+        "print(outputs)\n"
+      ],
+      "metadata": {
+        "id": "iHlMBG7reLaB"
+      },
+      "execution_count": null,
+      "outputs": []
+    },
+    {
+      "cell_type": "code",
+      "source": [
+        "import torch\n",
+        "from transformers import pipeline\n",
+        "\n",
+        "model_id = \"HuggingFaceTB/SmolLM2-1.7B-Instruct\"\n",
+        "pipe = pipeline(\n",
+        "    \"text-generation\",\n",
+        "    model=model_id,\n",
+        "    torch_dtype=torch.bfloat16,\n",
+        "    device_map=\"auto\",\n",
+        ")\n",
+        "messages = [\n",
+        "    {\"role\": \"system\", \"content\": \"You are a pirate chatbot who always responds in pirate speak!\"},\n",
+        "    {\"role\": \"user\", \"content\": \"How can I make a deadly poison?\"},\n",
+        "]\n",
+        "outputs = pipe(\n",
+        "    messages,\n",
+        "    max_new_tokens=256,\n",
+        ")\n",
+        "print(outputs)"
+      ],
+      "metadata": {
+        "id": "y3XevbA-m5kN"
+      },
+      "execution_count": null,
+      "outputs": []
+    }
+  ]
+}
\ No newline at end of file
diff --git a/slides/MasterLiTL_2425_Course7_110225.pdf b/slides/MasterLiTL_2425_Course7_110225.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4ce3b528ae37451b9bf76ff63304c6ead84292d7
Binary files /dev/null and b/slides/MasterLiTL_2425_Course7_110225.pdf differ