Your account is under investigation or was found in violation of the Medium Rules

A May story published in October

Eric Platon
5 min read · Oct 11, 2020

Perhaps you have never seen this message. It appears at the top of your screen on the Medium website one day, just like that.

And that is it. No explanation beyond the arcane rules. And days pass. Then weeks, months, more? At this point I am just days in, while many others have been waiting far longer, some deciding to leave the platform altogether. People seem to be just fine with the rules, and even praise the Medium team. I am one of those people. Yet many of the same people simply ask what the root cause is. I am one of those people too.

Behind the oddly chosen terms “investigation” and “violation”, usually reserved for offenders, there must be some “moderation system”. Moderation gently enforces fair play. We all know what moderation means in sports. Well, it is the same here, applied to the content flowing through the platform. However “open minded” a platform is, content moderation is so soft a discipline that it can easily get out of hand.

Moderation

Moderation seems genuinely necessary on the Internet. In 2015, I joined a tiny company that wanted to automate content moderation with natural language processing (NLP is famous now). We built the first version of the system in three months. It could filter hate speech automatically 80–90% of the time, depending on the subject matter and the language. Beyond that first result, progress was hard. Very hard. We ran out of money before we ever saw a number above 93% (to my knowledge). In the meantime the Mozilla Foundation, and later Google, entered the segment (like us, they wanted to deliver a service, which differs from Facebook, Twitter, and perhaps Medium, who understandably work for themselves).

Very hard. I believe anyone working on hate speech filtering will tell you how hard it can be. Haters can be straight and blunt. They can also be rather sophisticated and subtle. Creativity has no colour: you can create beauty and ugliness at will. Automating the detection of “creative hate” is challenging. Beyond “local” subtleties, hate speech can build up along discussion threads. We saw difficult escalations, where aggressors passively corner their victims in rhetorical traps. A stumbling block for modern NLP and machine learning, even today.

Very hard, but wrapping a cold algorithm nicely can do marvels, or at least be plain respectful. In our hate speech filter, automated scoring coupled with simple monitoring rules ensured that third-party natural intelligence could override the system, which incidentally allowed it to learn from its mistakes. If a post scored very high on the hate ladder, say for the use of derogatory terms, it was blocked and marked as such. The author was free to edit. A lower score, closer to the gray zone, led to a temporary block, giving human reviewers a chance to override the system. This protocol was far from satisfactory, as reviewers could game it, yet it was a start with contained loose ends (and the next generation was coming).
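That description boils down to a thresholded decision rule with a human in the loop. Here is a minimal sketch in Python of what such a triage could look like; the `score_hate` model and the threshold values are hypothetical stand-ins for illustration, not the actual system we built:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Action(Enum):
    PUBLISH = "publish"          # low score: let the post through
    HOLD_FOR_REVIEW = "hold"     # gray zone: a human reviewer decides
    BLOCK_AND_NOTIFY = "block"   # high score: block, and tell the author why

@dataclass
class Decision:
    action: Action
    score: float
    reason: str

# Hypothetical thresholds; a real system tunes these per topic and language.
BLOCK_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.6

def triage(post_text: str, score_hate: Callable[[str], float]) -> Decision:
    """Score a post and route it: publish, hold for human review, or block.

    `score_hate` stands in for any model returning a probability in [0, 1].
    Reviewer overrides on gray-zone posts can later be fed back as training
    labels, which is how the system learns from its mistakes.
    """
    score = score_hate(post_text)
    if score >= BLOCK_THRESHOLD:
        return Decision(Action.BLOCK_AND_NOTIFY, score,
                        "high-confidence violation; the author may edit and resubmit")
    if score >= REVIEW_THRESHOLD:
        return Decision(Action.HOLD_FOR_REVIEW, score,
                        "gray zone; temporarily held for a human reviewer")
    return Decision(Action.PUBLISH, score, "no action needed")
```

Note the property the rest of this story hinges on: at every branch, the author is told what happened and why.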

Medium Moderation

Very hard for Medium, too. Yet “flagged” (paying) users are seemingly left alone in their ignorance. I cannot claim my situation is frequent (this is the first time I have seen, or even heard of, this dreaded banner), yet several people have asked why they are under sudden “investigation”. The question has been asked for at least two years (per the Medium Rules’ comment thread). No comment from Medium. No pointer. No email on my side with an explanation or guidance. No transparency in the process.

If you want your platform to be perceived as trustworthy when practicing moderation, you must be clear, even transparent. You must treat your users fairly, even the blatantly hateful ones. Fail to do so, and you either fall into oblivion for lack of traction, or you become “too big to fail” and spend the rest of your life surfing a wave of criticism, compensated by young newcomers starving for splinters of exposure. That last version of yourself is less shiny than it sounds.

Perhaps my situation is an algorithmic “glitch”. You know, like the one that let Milton collect his salary for five years after being fired. People have been reporting the same situation for several years already… Do we have to wait five whole years?

Mea Culpa

If I am treated like the others in the Medium Rules’ comment thread, I will probably leave too and concentrate on the other media I use. I write a lot but publish little, so it feels like I matter little, my yearly payments included.

Scanning the few posts I wrote in the past, I really do not see the problem. Sources are cited; photos carry visible credits; my name is visible and verified, and I even pay for the platform’s future. How does one prove “innocence” in this “investigation”? How does one fix a “violation”?

If this post does appear on the Medium site (and people can find it), that should be a good sign. It should be, without being the end of the story. This post attempts constructive criticism, but who knows how it will be perceived by Medium people, their algorithms, or anyone else? Experience teaches us how to communicate in constructive ways, but we all make mistakes. That is why writing and reviewing are hard and time-consuming. That is why cash-strapped (or round-trapped?) companies always want to automate more, at the cost of fairness. Moderation is fine and welcome, until its opacity makes it otherwise.

I will be watching how this story progresses. Who knows? Not unlike in poor Milton’s case, 2 and 5 look alike. A fix may be passing QA at the company right now. As Bob would say: “We always like to avoid confrontation, whenever possible. Problem is solved from your end.”

Epilogue

It took some time, but I got an answer from Medium. Zero explanation. They simply restored my account’s “normal” status. One can only hope none of this gets logged and used in insurance or credit scores.
