Social network Bluesky is undertaking a significant overhaul of its Community Guidelines and other foundational policies, two years after its initial launch. The company, which competes with established platforms like X and Threads as well as decentralized networks such as Mastodon, says these revisions are designed to improve clarity and provide greater detail about user safety protocols and the appeals process.
A primary driver for many of these policy adjustments stems from evolving global regulations. Key legislative acts influencing Bluesky’s changes include the U.K.’s Online Safety Act (OSA), the EU’s Digital Services Act (DSA), and the U.S.’s TAKE IT DOWN Act. These regulations are prompting platforms to implement more stringent measures concerning online safety, content moderation, and user protection.
Beyond regulatory compliance, Bluesky is also making a concerted effort to shape the behavior and overall tone of its community. The company aims to foster a more respectful environment, a move that follows a series of user complaints and media reports suggesting a tendency towards self-seriousness, the prevalence of negative news sharing, and a perceived lack of humor and intellectual diversity within its user base.
As part of its regulatory compliance efforts, Bluesky’s Terms of Service have been updated. These updates specifically address online safety laws and regulations and mandate age assurance where required. For example, the U.K.’s Online Safety Act, which began requiring age verification for platforms with adult content in July, means that Bluesky users in the U.K. must now verify their age by scanning their face, uploading an ID, or entering payment card details to access the site.
The process for handling user complaints and appeals has also been significantly detailed. A notable addition is the introduction of an “informal dispute resolution process.” Under this new provision, Bluesky commits to engaging in phone conversations with users to discuss their disputes before any formal dispute resolution procedures are initiated. Bluesky noted, “We think most disputes can be resolved informally.” This approach contrasts sharply with the practices of larger social networks, where users often report being banned without clear explanations or avenues for direct communication with the company.
Furthermore, Bluesky has announced that it will permit users to resolve certain claims of harm through court proceedings, rather than exclusively through arbitration. This is an unusual stance among tech companies, which typically prefer to settle disputes outside the formal court system.
Of particular interest to its user base are the proposed changes to the Community Guidelines. Bluesky has opened a feedback period for these revisions, which are slated to take effect on October 15, 2025, after the feedback process concludes.
The updated Community Guidelines are structured around four core principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. These overarching principles are intended to guide Bluesky’s moderation decisions, determining when content should be labeled or removed, if an account warrants suspension or a ban, or in certain instances, if content should be reported to law enforcement.
The revised rules incorporate numerous common-sense policies. These include prohibitions against promoting violence or harm (encompassing self-harm and animal abuse), posting illegal content or content that sexualizes minors (including in role-play scenarios), engaging in harmful actions such as doxxing and other forms of nonconsensual personal data-sharing, and distributing spam or malicious content.
Importantly, the guidelines include specific provisions for journalism, parody, and satire. For instance, journalists engaged in “factual reporting” are explicitly permitted to post about criminal acts, violence, mental health issues, online safety, and other topics, such as issuing warnings about potentially harmful online viral challenges.
However, potential challenges for Bluesky may arise in the nuanced interpretation of terms like “threat,” “harm,” or “abuse.” The policy emphasizes the principle of “respect others,” prohibiting the posting, promotion, or encouragement of “hate, harassment, or bullying.” As an example, the policy explicitly bans exploitative deepfakes and content that “incites discrimination or hatred,” defining this as posts that attack individuals or groups based on “race, ethnicity, religion, gender identity, sexual orientation, disability, or other protected traits.”
This particular area has historically been a point of contention for Bluesky. In its earlier stages, the platform drew criticism for moderation decisions that strained its relationships with the Black and trans communities, including instances where perceived failures to moderate sparked anger. More recently, the company has encountered backlash from users who perceive the platform as having become too left-leaning, characterized by quick criticism, hateful replies, and a general lack of humor within the community.
The original vision for Bluesky was to empower users with tools to cultivate their desired communities, offering features such as blocking and reporting mechanisms, as well as subscribable block lists or opt-in moderation services aligned with individual values. Despite this, Bluesky users have often demonstrated a preference for the application itself to manage much of the moderation, frequently criticizing its trust and safety department when decisions diverged from their expectations.
In addition to the Community Guidelines, Bluesky’s Privacy Policy and Copyright Policy have also undergone significant rewrites to ensure compliance with global laws concerning user rights, data transfer, retention and deletion, takedown procedures, and transparency reporting. Unlike the Community Guidelines, there will be no feedback period for these two policies, both of which are set to take effect on September 15, 2025.