Before using the service, please read the preliminary information containing a description of steps that enable access to the CLARIN-PL developer interface.
Hatespeech allows you to detect hate speech in selected texts.
The service requires you to choose one of two available models (see the model_type parameter below).
Hatespeech can be run, among others, on the CLARIN-PL website, in the LPMN Client, or in Colab (see the link at the end of this page).
The service can be run in the Windows system with default values using the following LPMN query:

['any2txt',{'hatespeech':{'model_type':'conformity','user_annotations':[0,0,0,0,0,0]}}]

An example query for the personalized model with the medium-sensitivity profile:

['any2txt',{'hatespeech':{'model_type':'conformity','user_annotations':[0,1,0,0,1,0]}}]
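For illustration, a minimal Python sketch that assembles such a query string programmatically; the helper name build_hatespeech_query is hypothetical, and only the task names and parameters come from this page.

```python
import json

# Hypothetical helper: builds the LPMN query shown above as a JSON string.
# 'any2txt', 'hatespeech', 'model_type' and 'user_annotations' come from
# this page; the function itself is illustrative.
def build_hatespeech_query(model_type="conformity",
                           user_annotations=(0, 0, 0, 0, 0, 0)):
    task = {"hatespeech": {"model_type": model_type,
                           "user_annotations": list(user_annotations)}}
    return json.dumps(["any2txt", task])

# json.dumps emits double quotes; the examples above use single quotes,
# but the structure is the same.
print(build_hatespeech_query())
```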
Input format:
- input data in the form of a compressed directory (.zip)

Parameters:
model_type - defines the model selection (required):
- baseline - general, not personalized
- conformity - personalized; allows customization of the profile by indicating the values for user_annotations. You can also choose from predefined profiles:
  - [0,0,0,0,0,0] - low sensitivity
  - [0,1,0,0,1,0] - medium sensitivity
  - [1,1,1,1,1,1] - easily offended

Note: For the personalized model, the CLARIN-PL website offers a choice between the predefined profiles only. Configuring the user_annotations values freely is possible, among others, in the LPMN Client service (see the sketch after this note).
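Reusing the hypothetical build_hatespeech_query helper from the sketch above, the predefined profiles can be kept in a small lookup table; the profile names used as keys are informal labels from this page, not official identifiers.

```python
# Predefined user_annotations profiles listed above; keys are informal labels.
PROFILES = {
    "low sensitivity":    [0, 0, 0, 0, 0, 0],
    "medium sensitivity": [0, 1, 0, 0, 1, 0],
    "easily offended":    [1, 1, 1, 1, 1, 1],
}

query = build_hatespeech_query("conformity", PROFILES["medium sensitivity"])
```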
Output format:
A text file with the classification results: the values field reports the inoffensive and offensive classes, and the type field is set to pie (a pie-chart visualization).
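A minimal sketch of reading such a result, assuming the output file is JSON with the values and type fields named above; the exact layout and the file name are assumptions.

```python
import json

# Assumption: the output text file is JSON shaped like
# {"values": {"inoffensive": 12, "offensive": 3}, "type": "pie"};
# the file name is hypothetical.
with open("hatespeech_output.txt", encoding="utf-8") as f:
    result = json.load(f)

for label, count in result["values"].items():
    print(f"{label}: {count}")
```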
In Colab: Hatespeech - Hate speech detection
Kamil Kanclerz, Alicja Figas, Marcin Gruza, Tomasz Kajdanowicz, Jan Kocon, Daria Puchalska, Przemyslaw Kazienko (2021) "Controversy and Conformity: from Generalized to Personalized Aggressiveness Detection", Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Volume 1: Long Papers.
(C) CLARIN-PL