With regard to the duties and responsibilities of digital platforms and online service providers, particularly concerning algorithmic moderation, the viral spread of divisive content, and the lack of transparency in the promotion of certain messages, terms of use grounded in international human rights standards could be envisaged. These would involve graduated responses favouring alternatives to content removal in less serious cases. To this end, cooperation with independent authorities, transparency mechanisms for content moderation, and effective channels for reporting potentially discriminatory content could prove useful, even in the absence of generalised obligations, while preserving technological neutrality and freedom of debate.