---
license: cc-by-4.0
language:
- fra
task_categories:
- summarization
---

# Description

Dataframe containing 949 French books in `txt` format. More precisely:
- the `titre` column contains the title of the book
- the `auteur` column contains the author's name and dates of birth and death (useful if you want to filter the texts to keep only those from a given century to the present day; for example, Cligès and Erec et Enide are in very early French)
- the `résumé_wiki` column contains a summary of the book from French Wikipedia
- the `résumé_wiki_en` column contains a summary of the book from English Wikipedia
- the `résumé_autre` column contains a summary of the book from a site other than Wikipedia (the license for this column is very uncertain, which is why we have put this dataset behind a gate)
- the `texte` column contains the full text of the book
- the `nb_words` column contains an estimate of the number of words per text (a simple `.split(" ")`; again, useful if you want to filter)

All available summaries (`résumé_wiki` + `résumé_wiki_en` + `résumé_autre`) represent 1513 lines.

Assuming an average word length of 1.3 to 1.5 tokens, we have:
- Estimated number of texts with at least 2048 tokens: between 940 and 945
- Estimated number of texts with at least 4096 tokens: between 871 and 879
- Estimated number of texts with at least 8192 tokens: between 833 and 844
- Estimated number of texts with at least 16384 tokens: between 729 and 750
- Estimated number of texts with at least 32768 tokens: between 591 and 634
- Estimated number of texts with at least 65536 tokens: between 409 and 444
- Estimated number of texts with at least 131072 tokens: between 202 and 230
- Estimated number of texts with at least 262144 tokens: between 53 and 81
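As a minimal sketch of the filtering described above, the snippet below rebuilds the word count the same way the `nb_words` column was computed (a simple split on spaces), converts it to an estimated token count using the card's 1.3 tokens-per-word lower bound, and keeps texts above a length threshold. The sample dataframe and its row values are placeholders for illustration; only the column names and the multiplier come from this card.

```python
import pandas as pd

# Placeholder rows standing in for the real dataset (same column names as the card).
df = pd.DataFrame({
    "titre": ["Livre A", "Livre B"],
    "texte": ["mot " * 4000, "mot " * 100],
})

# Same estimate as the `nb_words` column: a simple split on spaces.
df["nb_words"] = df["texte"].apply(lambda t: len(t.split(" ")))

# Token estimate at 1.3 tokens per word (the card's lower bound),
# then keep only texts with at least 2048 estimated tokens.
df["est_tokens"] = (df["nb_words"] * 1.3).astype(int)
long_enough = df[df["est_tokens"] >= 2048]
```

The same pattern works for any of the thresholds listed below (4096, 8192, ...); using the 1.5 upper bound instead gives the more optimistic end of each range.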