
The last sound most people hear before slumber is the ping of social media notifications, and the first light they see in the morning is their phone screen. This daily ritual is shared by over five billion people, who as of 2024 spend at least two hours of their day laughing at memes, watching animal videos, and reading controversial takes.
While these habits seem harmless, preliminary studies suggest that social media algorithms can tap into one’s psychological gratifications, frustrations, and behaviors. And as platforms continually refine their estimates of what we want to consume, they may only be feeding the problem.
Dissecting the feed
At its core, an algorithm is loosely defined as a set of rules that determine a sequence of operations. In terms of social media, it collects data on user engagements and dynamically adjusts feed content accordingly. Briane Paul Samson, an associate professor of the Department of Software Technology, illustrates it as a system that “tries to understand who you are as a person, and recommends posts that match how it understands you.”
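The loop Samson describes—observe engagement, update a model of the user, reorder the feed—can be sketched in a few lines of Python. This is a minimal illustration only; the class and field names are hypothetical and do not reflect any platform’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    topic: str

@dataclass
class UserProfile:
    # engagement counts per topic, updated as the user interacts
    topic_counts: dict = field(default_factory=dict)

    def record_engagement(self, post: Post) -> None:
        self.topic_counts[post.topic] = self.topic_counts.get(post.topic, 0) + 1

    def score(self, post: Post) -> float:
        # posts matching the user's inferred interests rank higher
        return float(self.topic_counts.get(post.topic, 0))

def rank_feed(user: UserProfile, posts: list) -> list:
    # reorder the feed so the most "you-shaped" content comes first
    return sorted(posts, key=user.score, reverse=True)
```

Every interaction updates the profile, so each subsequent ranking reflects the system’s latest guess about who the user is—the gradual tailoring Hogan observes on YouTube.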
Bernie Hogan, an Oxford Internet Institute associate professor and senior research fellow, points to YouTube as an example, wherein the video platform initially provides a diverse selection of content before gradually tailoring its recommendations to reflect what users watch.
Building on this, Samson elaborates that modern social media algorithms typically integrate user modeling and recommender engines. Such engines employ two common approaches: content-based filtering, which draws on data about the individual user, and collaborative filtering, which draws information from their surrounding communities. “If you’re friends with someone…there is a high chance that you have the same interests; [so they look into that, too],” he explains.
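Collaborative filtering in its simplest form compares users’ engagement histories and recommends what similar users enjoyed. The sketch below, with hypothetical post IDs and a basic cosine-similarity measure, shows the idea under stated assumptions rather than any platform’s real implementation:

```python
from math import sqrt

def cosine_similarity(a: dict, b: dict) -> float:
    """Similarity between two users' engagement vectors,
    keyed by post ID (1 = interacted)."""
    shared = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(target: dict, community: list) -> list:
    """Rank posts the target hasn't seen by how strongly
    similar users engaged with them (collaborative filtering)."""
    scores = {}
    for neighbor in community:
        sim = cosine_similarity(target, neighbor)
        for post, engaged in neighbor.items():
            if post not in target:
                scores[post] = scores.get(post, 0.0) + sim * engaged
    return sorted(scores, key=scores.get, reverse=True)
```

A user who shares two liked posts with a friend will be shown what that friend engaged with next—exactly the “high chance that you have the same interests” heuristic Samson describes.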
As the algorithm groups users into clusters based on the content they interact with, the order and frequency of the posts, images, and videos shown vary across different social media platforms.
Yet, as Hogan warns, “we do not know what sort of rules are used for sorting.” These algorithms undergo continuous rounds of iterative testing to maximize user engagement, making each platform’s inner processes difficult to understand. While the precise mechanics remain a mystery, it is clear that these are designed to keep users engaged.
“[But] engagement isn’t necessarily satisfaction or pleasure,” he laments. “Sometimes it can be things that people find upsetting but gain traction,” he adds, highlighting the troubling possibilities of what captivates audiences online.
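The “continuous rounds of iterative testing” that worry Hogan resemble, in miniature, a bandit-style experiment: the platform keeps serving whichever feed variant gets the most engagement, occasionally trying alternatives. The toy below is a standard epsilon-greedy sketch with invented variant names, not a description of any real platform’s tooling:

```python
import random

class EpsilonGreedyTester:
    """Toy A/B-style tester: repeatedly picks the feed variant with the
    best observed engagement rate, exploring alternatives 10% of the time."""

    def __init__(self, variants, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.stats = {v: [0, 0] for v in variants}  # variant -> [engagements, trials]

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.stats))  # explore
        # exploit: best engagement rate observed so far
        return max(self.stats, key=lambda v: self.stats[v][0] / max(self.stats[v][1], 1))

    def record(self, variant, engaged):
        self.stats[variant][0] += int(engaged)
        self.stats[variant][1] += 1
```

Note what the objective is: observed engagement, nothing else. If upsetting content happens to engage best, this loop will serve more of it—which is precisely Hogan’s point.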
Hooked by design
This pursuit of engagement reveals the murky side of social media algorithms, mainly the exploitation of the brain’s dopamine-driven reward system. Department of Sociology and Behavioral Sciences Chair and Full Professor Dr. Jerome Cleofas explains that “the algorithm learns what we want, and it keeps on feeding us what gratifies us, [releasing] dopamine and other related neurotransmitters and hormones that make us feel happy.”
Over time, algorithms create feedback loops that reinforce existing opinions and shape self-perceptions. Dr. Cleofas explains that social media’s comparative nature can be particularly damaging, positing that algorithmic content can make users feel like their lives are “not as fabulous or as pleasurable” as the curated standards they see online.
User clustering further exacerbates these psychological impacts. Research has linked the unceasing stream of idealized lifestyles with feelings of inadequacy, depression, and even suicidal ideation.
Moreover, these algorithmic patterns reshape formative human experiences as well. Dr. Cleofas observes how social media has transformed the dating scene by replacing the natural anticipation of meeting a partner with instant gratification. “The relationship becomes more about gratification… That constant need to be present…to perform is always there, and that could be stressful for people.”
Deteriorating in digital prison
Beyond their psychological influence, algorithms also foster echo chambers: digital bubbles that amplify existing beliefs while limiting exposure to opposing ideologies. Dr. Cleofas explains that users tend to “filter information based on core beliefs,” leading to cognitive biases that make it harder to scrutinize information and exercise critical thinking.
This filtering becomes particularly dangerous when combined with the proliferation of fake news online, which deliberately exploits cognitive biases. Such content can even appear convincing; Dr. Cleofas notes that fake news may “appeal a lot to emotion [and] values.”
He cites the rampant misinformation during the height of the COVID-19 outbreak. Vaccines, for instance, became the center of fear-mongering over their side effects. By preying on people’s anxieties, such misinformation also undermined public health efforts.
Compounding these issues is the misconception that social media is a reliable source of information. “I would like to remind people that social media platforms are not search engines,” Samson clarifies. Unlike search engines that rank content based on credibility and reputation, social media platforms prioritize engagement and relevance over factual information.
This contrast becomes increasingly important as many turn to social media for information gathering. “Just because you get things fast doesn’t mean they’re right,” Samson cautions. Bolstering this statement, Dr. Cleofas underscores the need for media literacy: “If we’re more mindful…more critical, then we get more benefits than harm.”
Moreover, the rise of short-form content degrades cognitive abilities, shortens attention spans, and reduces the ability to process an abundance of information. Educators have observed students struggling to maintain focus during class and engage with long-form content. In response, some have begun simplifying lessons and incorporating “trendy” content to hold their attention—an approach that potentially sacrifices deeper cognitive development.
How to log out
Addressing these challenges and risks requires a multifaceted approach that combines technological innovation with user responsibility. Developers, for their part, must consider an algorithm’s impact on users; it is crucial that impact assessments be performed and that mitigating potential harm remain central to how these platforms are built. After all, a single algorithm can benefit one community and harm another; the difference lies in the vulnerability of its consumers.
As social media continues to evolve and integrate deeper into daily lives, understanding its influence on its users becomes increasingly important. While algorithms are engineered primarily for engagement and profit, informed and intentional users can reclaim control over how these powerful tools shape their thoughts, beliefs, and behaviors.
This article was published in The LaSallian’s Vanguard Special 2025. To read more, visit bit.ly/TLSVanguardSpecial2025.
