“Open social media and you are immediately struck by a flood of drama, even without looking for it.” For today’s Internet users, this scenario is all too familiar. Whether it’s Facebook, YouTube, TikTok, or X (formerly Twitter), controversial content dominates the feed. Behind the scenes, a powerful but invisible force is at work: social media algorithms.
How do algorithms spread “drama”?
These platforms do not just passively show posts; they actively decide what appears on the screen based on the amount of engagement each post receives. The more likes, shares, and comments a post gets, the more likely it is to be pushed to the top of the feed.
Drama thrives under this system for two main reasons. First, algorithms are designed to optimize for engagement, and nothing engages people like conflict or sensational news. Second, through reinforcement learning, once an algorithm detects a “hot” topic, it pushes that content even harder to keep users scrolling longer, which in turn increases profits for the platform. A simplified sketch of this feedback loop appears below.
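To make the mechanism concrete, here is a minimal, purely illustrative sketch of engagement-weighted ranking in Python. The weights, the “hot” threshold, and the scoring formula are all invented for illustration; real platform ranking systems are proprietary and vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Toy scoring: shares and comments weigh more than likes,
    # since they signal stronger involvement (assumed weights).
    score = post.likes * 1.0 + post.comments * 2.0 + post.shares * 3.0
    # Feedback loop: once a post crosses a hypothetical "hot" threshold,
    # it is amplified further, so engagement begets more engagement.
    if score > 1_000:
        score *= 1.5
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    # The highest-scoring posts rise to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Thoughtful explainer", likes=700, shares=20, comments=40),
    Post("Celebrity feud, day 3", likes=400, shares=350, comments=500),
])
for post in feed:
    print(f"{post.title}: {engagement_score(post):.0f}")
```

In this toy example, the feud post scores lower on raw likes but wins on shares and comments, crosses the “hot” threshold, gets the extra boost, and lands at the top of the feed, exactly the dynamic described above.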
The downside?
Positive, thoughtful, or educational content is often drowned out in the flood of negativity and outrage.
This algorithm-driven drama machine has real-world consequences. Mentally, users feel overwhelmed by a relentless flow of drama. Socially, repeated exposure to polarized arguments distorts public perception. And for the people caught up in these dramas, whether celebrities or ordinary users, the consequences can be devastating, including public shaming or career backlash.
So how can we solve this problem?
Ideally, platforms should provide greater transparency about how their algorithms work and adjust them so they do not disproportionately favor controversial content. But achieving this is not simple; it is a complex technical and ethical challenge. More importantly, dramatic stories are inherently more “fun” for many users, which perpetuates the cycle. As long as people keep clicking, algorithms will keep feeding the fire.
Real change will only happen when users themselves begin to make different choices. This means:
- Recognizing when the drama becomes overwhelming and tuning out.
- Learning from past experiences with disinformation.
- Building digital literacy early, to understand which content deserves attention and which does not.
- And ultimately, building communities that value thoughtful, verified content over drama.
In the end, algorithms follow our actions. If most users stop feeding the drama, the systems will have no choice but to adapt. The change will not happen overnight, and it cannot be unilateral. It requires a joint effort by platforms, educators, media professionals, and every individual who logs on and scrolls every day.