Well, Google to the rescue. They're testing a system called Perspective that they say can spot problem posts and either flag them, disallow them, or let the poster rethink them before posting. It doesn't seem quite ready for wide distribution yet - their own demo doesn't flag "Your a socialist snowflake!" - but sites working with them on it include Wikipedia, the New York Times, the Guardian and the Economist.
I'm thinking it might be useful for forums - especially if Google is the one developing it, since Google's opinion of your site matters for being found and for the various definitions of "value" that potential visitors and advertisers might use. Or it could simply improve forum conversations by letting posters know whether their post is likely to be taken well or not. Google's algorithm hates forums, so I'm hoping this might help by raising the quality of conversations, if that's possible. Hard to tell on that one, since users don't usually have a stake in the site and very often couldn't care less about such things.
Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
https://blog.google/topics/machine-lear ... ersations/
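For anyone curious what "sort comments by toxicity" would look like in practice, here's a rough sketch. The endpoint URL and the request/response field names are my reading of Perspective's public docs and should be treated as assumptions (the real API also needs an API key and a network call, which I've left out); the sorting helper just works on whatever scores come back.

```python
# Endpoint assumed from Perspective's public documentation; a real call
# would POST build_request(...) here with an API key attached.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the JSON body Perspective expects for a TOXICITY score (assumed shape)."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_from_response(response):
    """Pull the 0-1 summary score out of a Perspective response dict (assumed shape)."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def sort_by_toxicity(scored_comments, worst_first=False):
    """Sort (comment, score) pairs so a reader can surface or bury toxic ones."""
    return sorted(scored_comments, key=lambda pair: pair[1], reverse=worst_first)

# Made-up scores standing in for real API responses:
scored = [
    ("You idiot.", 0.91),
    ("Great point!", 0.04),
    ("Your a socialist snowflake!", 0.32),
]
print(sort_by_toxicity(scored))           # least toxic first
print(sort_by_toxicity(scored, True))     # most toxic first - the "other end of the slider"
```

The `worst_first` flag is the slider: a publisher flips it depending on whether they want great discussions on top or the train wrecks.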
Sounds like it could make comments sections more interesting. Or more entertaining, depending on which end you set the slider on.