What’s with all the content warnings on Mastodon?

On the surface, Mastodon isn’t all that different from Twitter.

The decentralized platform has its own name for posts (“toots” instead of “tweets”), its own like and repost functions, even its own chronological timeline. This similarity is one of the reasons Mastodon is attracting thousands of people looking for an alternative to Twitter after Elon Musk bought the site. Alarmed by the chaotic way the Tesla and SpaceX CEO is running it, at least 70,000 people have joined Mastodon in the last few weeks.

But many of the thousands of Twitter users now migrating to Mastodon in search of a new, hopefully less toxic home on social media are clashing with one of the platform’s most distinctive features: content warnings.

When composing a toot, Mastodon asks users whether they want to add a content warning. Adding one is entirely optional, but if you do, anyone who follows you or belongs to the same instance as you will initially see only the brief warning you write.

It’s then up to them to decide whether to click on the warning to see the rest of the toot or to continue scrolling.
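Under the hood, a content warning is just an extra field on the post: in Mastodon’s REST API, the warning text travels in the `spoiler_text` parameter of `POST /api/v1/statuses`, and clients show that text first, hiding the body until the reader clicks through. A minimal sketch of what a client sends (the instance URL and access token here are placeholders, not real credentials):

```python
import json
import urllib.request
from typing import Optional


def build_toot(status: str, content_warning: Optional[str] = None) -> dict:
    """Build the payload for Mastodon's POST /api/v1/statuses endpoint.

    When a content warning is supplied, it goes in `spoiler_text`;
    clients then display only that text until the reader opts in.
    """
    payload = {"status": status}
    if content_warning:
        payload["spoiler_text"] = content_warning
        payload["sensitive"] = True  # also marks attached media as sensitive
    return payload


def post_toot(instance: str, token: str, payload: dict) -> None:
    """Send the toot to an instance. `instance` and `token` are placeholders."""
    req = urllib.request.Request(
        f"https://{instance}/api/v1/statuses",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # the response body is the created status


# A politics toot that readers can choose to expand:
toot = build_toot(
    "Here's my midterms hot take...",
    content_warning="US politics, midterms",
)
```

Readers who follow the author see only “US politics, midterms” in their timeline; the body appears only after they click the warning.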

The platform offers highly customizable settings, made possible by Mastodon’s open-source nature, which clearly distinguishes it from most of the centralized social media apps that have become mainstream over the last decade.

It’s organized across a number of independent servers, which the project calls instances, each offering its own degree of customizability to users.

However, if you keep the default settings in place when you join, you’ll quickly find that people on Mastodon seem to write content warnings for just about anything, from images containing eye contact to discussions about cars, jobs, travel, and other everyday subjects. Some communities even ask their members to hide any mention of politics or breaking news behind a content warning.

For new users coming from platforms like Twitter, where no such feature exists and people who manually add content warnings can even be ridiculed for it, the experience can be jarring.

In recent days, mastodon.social – the flagship instance, with 159,000 members – has been inundated with posts arguing for and against the perceived overuse of content warnings, while other users have taken to Twitter to voice their grievances.

The most patient Mastodon veterans have tried to reason with skeptical newcomers: some explain that it’s just a matter of getting used to the platform’s etiquette so as not to come across as rude. Others call for respect for the platform’s distinct character, emphasizing that Mastodon has long been populated and shaped by marginalized groups who were harassed and bullied on other platforms and who want a space that respects their boundaries and mental health.

The main complaint of new users seems to be that content warnings are applied too liberally, even to harmless things, while the common understanding outside of Mastodon is that posts only need warnings when they deal with particularly sensitive topics – like suicide, addiction, or violence – or when they are not safe for work.

“I see a lot of Mastodon instances that force users to put all sorts of things behind CWs, even harmless things like food, and I think that’s wrong,” wrote one user on Twitter. “I don’t understand why Mastodon relies on content warnings for almost anything that isn’t a cat picture,” commented another.

“The last time I joined Mastodon I was tone-policed because people wanted content warnings for things like ‘dinner’ and ‘cars’ and ‘work.’ If there’s a mass exodus from Twitter, I won’t move, I’ll just log off,” a third concluded.

Personal preferences and ideas of what a content warning should be for aside, the feature does make the user experience a bit clunkier, at least for people who aren’t neurodivergent or don’t have other accessibility needs. If you don’t set content warnings to expand automatically, reading the timeline becomes more of a hassle, since you have to click on each and every toot hidden behind a warning after deciding whether you’re interested.

On the other hand, the feature represents a form of opt-in consent that is usually missing on other platforms.

People accustomed to platforms like Twitter, where topics like politics and breaking news are discussed openly—constantly and often very vocally—may be surprised to find that on many Mastodon servers, tooting about those same topics without a content warning will first earn them a warning. On Tuesday, as the midterms played out, a widely shared toot on an instance dedicated to artists and creators urged members to “Please, /please/, put all political posts behind a content warning. It’s actually a strong suggestion in our code of conduct, but if we see someone frequently posting heavy political content without a CW, we’ll (at least) silence the account so their posts don’t show up on the public timeline.”

This approach is unsettling for some Twitter users who are dipping their toes into Mastodon for the first time.

“Twitter can be unhealthy, but I struggle with Mastodon feeling a little too sterile for me. I think content warnings are valid, but it feels weird to hide ANY political thoughts or even reposts from Twitter behind a warning for fear of ‘spoiling the mood,’” noted one Twitter user.

There are settings for expanding warnings automatically across an entire account, but many impatient Twitter migrants haven’t taken the time to learn them.

Ultimately, however, much of the criticism stems from a larger misconception: in most cases, content warnings on the platform aren’t added because the poster believes someone will find the content disturbing in the first place. Interestingly, Mastodon founder Eugen Rochko reportedly approved the feature’s inclusion because it was useful for hiding spoilers for TV shows. And over the years, the community on the platform has started using the tool for a number of different purposes.

In a now-viral explainer blog post on the subject, artist and developer v Buckenham writes that content warnings can serve a number of different purposes: “they let you talk about shit that feels a little too heavy” without the reader having to opt in to it, like mental or physical health issues, but also to flag things that feel a little too boring, to mark “things that are fine with you but may cause more emotional distress than expected to other people,” and even to make little jokes with your community.

“Different parts of Mastodon, even more than Twitter, have their own culture,” Buckenham told the Daily Dot. “The instance I’m in uses CWs heavily: most of the time just for jokes, but there was always CW discourse, which seems necessary. It’s the community (or communities) trying to find consensus and pushing back against people who make the experience less than ideal for them.”

That doesn’t mean every long-time user agrees with how content warnings are used on the platform: blogger Karl Voit recently argued that the heavy use of content warnings on posts that contain no spoilers, nudity, violence, or other sensitive material dilutes the actual usefulness of the feature.

And as more people seeking an alternative to an increasingly chaotic Twitter give Mastodon a spin, this meta-debate is bound to evolve.


Source: https://www.dailydot.com/debug/mastadon-content-warnings-twitter/

Jaclyn Diaz

