At this point the meme that social media might be just a tad harmful to your mental health is pretty well known. Reddit used to give out its standard advice: get a lawyer, hit the gym, and delete Facebook. Zuckerberg has testified in front of Congress that passive consumption of Facebook may be associated with negative mental health outcomes. This admission, and the fact that Congress did not press him on it, struck me as strange. What percentage of users are passive users? What has Facebook done to improve the well-being of people using its product in this way? Does Facebook track how changes to its site shift users between passive and active use over the years? In other words, does it optimize for the average user's mental health? Given that the answer is likely no, what does it optimize for? What side effects would that optimization have? It's worth noting that other major websites, such as search engines, DO NOT have this problem. In fact, some studies have suggested that using search engines may actually improve cognition.
When we consider real-world optimization, what I mean by “optimization” is the driving of the world toward some goal, frequently measured by a metric. Within this definition there is already a sliver of concern. Metrics capture a part of reality but by their nature cannot capture the entirety of it; therefore, driving a metric to its highest possible value using any available resource nearly guarantees destroying some good existing parts of the world that the metric does not capture. Moreover, in attempting to change the world and applying a lot of technological optimization pressure to it, one must actually look at the results one has created. Have social media companies succeeded at optimizing their metrics? Yes. Has this optimization, on the margin, created a better world in the last 5 years? I don't think so.
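To make the concern concrete, here is a toy sketch with entirely made-up numbers and names: if changes are shipped purely because they move a tracked metric, an untracked part of the world can quietly degrade the whole time.

```python
# Toy illustration (made-up numbers): pushing a measurable proxy ("engagement")
# as high as possible while an unmeasured quantity ("user well-being") erodes.

candidate_changes = [
    # (change name, effect on the tracked metric, effect on an untracked quantity)
    ("better in-site search",  +1.0, +0.5),
    ("infinite scroll",        +3.0, -1.0),
    ("outrage-weighted feed",  +4.0, -2.5),
]

engagement, well_being = 10.0, 10.0
for _ in range(5):  # each cycle, ship whatever moves the tracked metric most
    _name, d_metric, d_hidden = max(candidate_changes, key=lambda c: c[1])
    engagement += d_metric
    well_being += d_hidden

print(engagement, well_being)  # engagement climbs to 30.0 while well-being falls to -2.5
```

The optimizer is doing exactly what it was told; the damage lives entirely in the column nobody told it to look at.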
I used to work at Facebook around 2012; at the time I believed in (or, as some might say, fell for) the mission. Making the world more open and connected, who doesn't want that? At the time it seemed the leadership believed it as well, but you never know. Now, note the expansiveness of the language. “Making the world” move toward a certain goal is a pretty expansive claim, one which fits the optimizational nature of Facebook's algorithms and self-image. In 2012 Facebook liked to brag about how certain revolutions of the Arab Spring were brought about using its platform. This is now a distant memory, dozens of Current Things ago. Facebook doesn’t usually brag about civil unrest on its platform anymore.
The notion of altering the world I am pointing towards is both obvious and under-discussed. Many people, including myself, wish to "alter the world"; however, once you actually start doing it at scale, it's worth measuring whether you are actually altering the world *for the better*. It’s worth theorizing about whether problems can crop up that elude one's measurement.
The challenge here isn't that an optimization algorithm does something you didn't specify. An optimization algorithm does whatever you told it to do; it's just that specifying the actual social good is *hard*. It's hard mathematically, philosophically, and socially. So, it has become easier to work on metrics which benefit the company and/or are easy to measure and report, such as “engagement.” Other aspects of the world take a back seat. So, when some study shows that passive consumption of content harms mental health, should this really be a surprise, or should we expect it to happen by default from an outlook that is highly optimizational but not philosophically thought through? What really shocks and frustrates me about Zuckerberg's answer about the mental health of passive users is the implied shifting of blame or level of control *onto* the user. You see, the implication is that this is the *user's* fault for using the site improperly, rather than the result of a series of conscious design decisions. Now, users can use tools improperly before they learn; however, in this situation the site itself is applying pressure onto the user. We cannot use the metaphor of tools for something that has a world-directional nature. A platform may start out using its changes to improve the UI and seem more inviting, but it can end up making changes that forgo actual benefit to the user and focus only on benefit to the platform. In a sense we need to look at the highly A/B-tested nature of certain platforms as something that, past a certain point, is changing the user.
Now, I don't mean to single out Facebook as the only social platform responsible for mental health problems; in fact, parasocial platforms are likely worse. There have been instances of TikTok users developing actual tics. What are the specific ways that social media harms us, and how can we modify it to be better? There are many perspectives, each of which seems to capture only a sliver of the problem.
Perhaps social media gives us the highlight reel instead of the daily grind; perhaps it inspires envy instead of positive emotions. It’s possible it triggers too much of the wrong kind of FOMO, or undermines body positivity. Some of the rationalists have also rallied against potentially deliberate disorganization.
I have a few theories about the tricks social media sites end up using to boost engagement at the cost of sanity. I suspect one such trick is the *interleaving of good and bad content*. What happens is that the site trains you to "keep going" even if you encounter content that isn't actually that exciting or interesting to you. After all, the last time you scrolled for 5 minutes, you did eventually find that one post which brightened your day. One should not underestimate the negative mental health effects of the tense neck, strained eyes, and bad posture that come from overuse of any phone app.
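As a crude sketch of how such interleaving might work (hypothetical, not any platform's actual code), the feed only needs to surface a genuinely good post often enough, and unpredictably enough, to keep you scrolling through the filler:

```python
# Hypothetical sketch of the "interleaving" trick: mostly filler, with a genuinely good
# post mixed in unpredictably, so the user keeps scrolling in hope of the next hit.
import random

def next_post(good_posts: list[str], filler_posts: list[str], good_ratio: float = 0.15) -> str:
    # A variable-ratio schedule: the user never knows how far away the next good post is,
    # which is exactly the reward structure that makes stopping hard.
    pool = good_posts if random.random() < good_ratio else filler_posts
    return random.choice(pool)

good = ["a post that actually brightens your day"]
filler = [f"forgettable post #{i}" for i in range(50)]
feed = [next_post(good, filler) for _ in range(30)]  # roughly 4-5 good posts scattered among 30
```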
Again, imagine that instead of using these kinds of tricks, social media worked more like search engines. Instead of encouraging "doomscrolling," it would encourage closing the site once the "best" interactions have been finished. This isn’t the best possible solution, but it is likely an improvement.
Note that a hypothetical search-engine-type organization is in some way *flipping* one of the core utility functions, considering "time spent" to be a *negative*, not a positive. This simple change could limit the damage the networks are doing; however, it would come at a massive short-term cost in advertising revenue and is thus unlikely to be implemented. This is also not a full fix, as it merely addresses "doomscrolling" and not parasociality or reliability. Still, it is an example of a way of approaching the problem of content organization that was well known in the early 2000s. It is truly a shame that social media has borrowed advertising as a way to make money without also borrowing the importance of "separating" your metrics or of valuing the user's time.
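Here is a minimal sketch of what that flip could look like in a ranking objective. All the names and weights are made up for illustration; this is not any platform's actual scoring code:

```python
# Toy sketch: a ranking objective that treats "time spent" as a cost, not a reward.
# Field names, weights, and predictions are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_value_to_user: float   # e.g. modeled chance the user is glad they saw it
    predicted_minutes_spent: float   # expected time the post keeps the user on the site

def engagement_score(post: Post) -> float:
    """The usual objective: more time on site is strictly better."""
    return post.predicted_value_to_user + 0.5 * post.predicted_minutes_spent

def time_respecting_score(post: Post, minute_cost: float = 0.5) -> float:
    """The 'flipped' objective: time spent is a cost the post must justify."""
    return post.predicted_value_to_user - minute_cost * post.predicted_minutes_spent

posts = [
    Post("quick-useful-answer",    predicted_value_to_user=0.9, predicted_minutes_spent=1.0),
    Post("engrossing-rage-thread", predicted_value_to_user=0.3, predicted_minutes_spent=20.0),
]

# Under the engagement objective the rage thread wins; under the flipped one it loses.
print(sorted(posts, key=engagement_score, reverse=True)[0].id)       # engrossing-rage-thread
print(sorted(posts, key=time_respecting_score, reverse=True)[0].id)  # quick-useful-answer
```

The point is not the particular weights but the sign: once a minute of the user's attention is something the content has to pay for, the "best" post is the one that delivers value and then lets you leave.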
Another trick, probably most used by Twitter, is the promotion of controversial ideas over obvious ones. Controversial ideas get likes from people who agree with them as well as quote tweets from people who don't. Thus, a controversial idea has built-in engagement that an obvious idea does not. Poasters thus tend to try and drum up controversy, which creates more stress in readers. Once again, this is a good example where it's not *just* that metric maximization has some problematic side effects; rather, the metric itself is quite anti-correlated with societal well-being.
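A toy model of why this happens (made-up numbers, not Twitter's actual ranking): if the engagement signal counts reactions from supporters and detractors alike, a divisive post beats a post everyone mildly agrees with.

```python
# Toy model: when "engagement" counts reactions from supporters AND detractors,
# a divisive post outranks a post everyone quietly agrees with.

def engagement(likes: int, angry_quote_tweets: int) -> int:
    # Both kinds of interaction count toward the ranking signal.
    return likes + angry_quote_tweets

obvious_take  = engagement(likes=100, angry_quote_tweets=0)    # 100
divisive_take = engagement(likes=80,  angry_quote_tweets=120)  # 200

# The divisive take "wins" even though it left much of its audience angrier.
assert divisive_take > obvious_take
```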
One of the challenges of criticizing an existing system is that it can make people believe that building something much better is easy. It’s not easy to simply fix existing platforms to rid them of “envy” or “rage” without rethinking core parts of them.
I suspect that YouTiki’s structure can address some of these issues.
#1 incentives to create long-term viable content
Instead of always trying to issue opinions on the latest news, there is an incentive to create content that can last, and therefore less incentive to flip one’s positions to match the Narrative.
#2 incentives to appeal to experts across the board
This one is perhaps somewhat controversial, but I believe it strikes a balance between the dunking culture of Twitter and the blandness of Reddit. Downvotes imply the need to appeal to many people, but the unequal weighting of upvotes and downvotes means the best posts on YouTiki are the ones that appeal to experts on all sides, which likely means a tendency towards truth and reconciliation.
#3 ranking based on quality rather than quantity
A single quality post liked by many people would improve the author’s scores far more than many low-quality posts, which run the risk of downvotes (a toy sketch of this scoring idea follows the list).
#4 social instead of parasocial relationships
As I have mentioned before, having relationships of responsibility or equal kinship is more natural than parasociality and does not create confusion as to their nature.
#5 easier time gaining local clout compared to other media
A lot of social media, especially Twitter and other parasocial media, can suffer from “lock-in”, where it’s hard for newcomers to gain clout relative to existing users. On YouTiki it’s not that hard to gain clout *within* your group, which enables new users to feel seen. Global clout will be harder to come by, unless the new user is also bringing information about a brand-new topic. So, while the lock-in is still there, it doesn’t prevent people from feeling at least locally accepted.
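Since this post doesn’t spell out YouTiki’s actual scoring rules, the following is only a hypothetical sketch of how points #2 and #3 could fit together: downvotes weigh more heavily than upvotes, so one post that many people across camps find valuable outranks a pile of mediocre, divisive ones.

```python
# Hypothetical sketch only; YouTiki's real scoring is not described in this post.
# It illustrates points #2 and #3: downvotes cost more than upvotes earn, so the
# winning strategy is one post many people value, not a flood of mediocre posts.

def post_score(upvotes: int, downvotes: int,
               upvote_weight: float = 1.0, downvote_weight: float = 3.0) -> float:
    # Asymmetric weights: a single downvote outweighs a single upvote.
    return upvote_weight * upvotes - downvote_weight * downvotes

def author_score(posts: list[tuple[int, int]]) -> float:
    # In this toy model an author's score is just the sum over their posts.
    return sum(post_score(up, down) for up, down in posts)

careful_author  = author_score([(120, 2)])        # one post most readers value
prolific_author = author_score([(15, 6)] * 10)    # ten mediocre, divisive posts

assert careful_author > prolific_author   # quality beats quantity under these weights
```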
Mental health is closely related to having a tribe and not having confusion about truly being part of one. So, let’s build our tribes!