Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 No Harm Principle: Reconstructing John Stuart Mill’s Thought in On Liberty to Foster Epistemic Responsibility in Addressing the Ethical Consequences of Post-Truth Yohanis Elia Sugianto roysugianto755@gmail.com Abstract The post-truth era, marked by disinformation, hate speech, and algorithmic polarization, poses significant ethical and epistemic challenges to democratic societies. This article examines the relevance of John Stuart Mill’s Harm Principle in On Liberty (1859) as a normative framework for addressing these challenges. Through a philosophical analysis, the study reconstructs the Harm Principle to encompass epistemic, psychological, and democratic harms caused by unchecked freedom of expression in digital spaces. By analyzing phenomena such as filter bubbles, echo chambers, and disinformation campaigns, the article proposes practical solutions—digital literacy, ethical communication, and algorithmic transparency—to foster epistemic responsibility. Case studies, including the 2016 U.S. election misinformation and COVID-19 vaccine disinformation, illustrate the real-world implications of these harms. Empirical data on digital literacy and algorithmic bias further support the proposed framework. The findings highlight the enduring relevance of Mill’s thought while acknowledging limitations, such as implementation challenges and the need for complementary perspectives. This study contributes to political philosophy and communication ethics by offering a reconstructed Millian framework to navigate the complexities of digital public spheres, with implications for policy, education, and democratic deliberation. Keywords: Harm Principle, John Stuart Mill, Post-Truth, Epistemic Responsibility, Digital Ethics, Freedom of Expression, Disinformation, Algorithmic Transparency. Introduction The digital revolution has transformed the public sphere, amplifying the reach and impact of freedom of expression while introducing unprecedented ethical and epistemic challenges. The post-truth era, defined as a period where “objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (McIntyre, 2018, p. 5), has seen the proliferation of disinformation, hate speech, and algorithmic polarization. These phenomena undermine the normative foundations of free speech, threatening truth-seeking, individual dignity, and democratic deliberation—core tenets of John Stuart Mill’s philosophy in On Liberty (1859). Mill’s Harm Principle posits that individual liberty, including freedom of expression, may only be restricted to prevent harm to others (Mill, 1859, p. 21). In the 19th-century context, harm was primarily understood as physical or material, but the complexities of the digital age necessitate a broader interpretation. Disinformation campaigns, such as those during the 2016 U.S. presidential election, and hate speech amplified by social media platforms reveal new forms of harm: epistemic (undermining truth), psychological (causing emotional distress), and democratic (eroding public trust). These challenges call for a reconstruction of Mill’s Harm Principle to address the ethical consequences of posttruth while preserving the value of free expression. This article addresses two research questions: (1) How can Mill’s Harm Principle be reconstructed to address the ethical and epistemic challenges of the post-truth era? (2) What practical measures can foster epistemic responsibility in digital public spheres? By integrating philosophical analysis with case studies and empirical insights, the study proposes a normative framework that balances individual liberty with collective accountability, offering solutions such as digital literacy, ethical communication, and algorithmic transparency. The article is structured as follows: Section 2 outlines the methodology, Section 3 discusses the Harm Principle and its relevance, Section 4 analyzes post-truth challenges, Section 5 proposes a reconstructed framework, Section 6 presents case studies, Section 7 incorporates empirical data, Section 8 discusses practical implications, and Section 9 concludes with implications and limitations. 57 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 Methodology This study employs a qualitative philosophical analysis, grounded in a close reading of Mill’s On Liberty and supported by secondary literature in political philosophy, communication studies, and digital ethics. The methodology follows four steps: Conceptual Analysis: A detailed examination of the Harm Principle and its normative foundations in On Liberty, focusing on Mill’s arguments for freedom of expression and its limits. This involves analyzing Mill’s text alongside interpretations by scholars such as Wolff (2016) and Brink (2018). Contextual Application: Analysis of post-truth phenomena, including filter bubbles, echo chambers, disinformation, and hate speech, to identify new forms of harm in digital spaces. This step draws on interdisciplinary sources, such as McIntyre (2018) and Pariser (2011). Normative Reconstruction: Development of a reconstructed Harm Principle that integrates epistemic responsibility, supported by case studies and empirical data to illustrate real-world implications. Synthesis and Recommendations: Integration of findings to propose practical solutions, such as digital literacy and algorithmic transparency, with an evaluation of their feasibility and limitations. The study incorporates case studies, including the 2016 U.S. election disinformation campaign and COVID-19 vaccine misinformation, to ground the analysis in real-world contexts. Empirical data from studies on digital literacy (Jones-Jang et al., 2021) and algorithmic bias (Tufekci, 2021) enhance the framework’s robustness. Limitations include the lack of primary empirical research, which is mitigated by relying on established studies and proposing future research directions. The interdisciplinary approach ensures a comprehensive analysis, bridging philosophy with communication and digital studies. The Harm Principle in Mill’s Philosophy John Stuart Mill’s On Liberty is a seminal defense of individual liberty, with the Harm Principle as its cornerstone: “The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others” (Mill, 1859, p. 21). Mill argues that freedom of expression serves four purposes: (1) it facilitates truth-seeking through open debate, (2) it strengthens individual character and autonomy, (3) it enables societal progress through diverse ideas, and (4) it prevents the tyranny of the majority, where dominant opinions suppress minority views (Mill, 1859, pp. 33–35). The Harm Principle sets a high threshold for restricting liberty, emphasizing harm to others rather than mere offense or moral disapproval. Mill’s conception of harm was rooted in the 19th-century context, focusing on physical or direct injuries, such as incitement to violence. For example, he argued that speech inciting a mob to harm others could be restricted, but opinions causing discomfort or offense should be protected (Mill, 1859, p. 55). However, Mill’s framework did not fully anticipate the epistemic and psychological harms amplified by digital technologies, such as disinformation and hate speech. Mill’s emphasis on truth-seeking assumes a “marketplace of ideas” where rational debate prevails, leading to the emergence of truth through competition. Yet, he acknowledged that speech could cause harm when it directly injures others or undermines societal progress. This tension between liberty and harm provides a foundation for reconstructing the Harm Principle to address post-truth challenges, where new forms of harm—epistemic, psychological, and democratic—require a broader interpretation. The Post-Truth Era and New Forms of Harm The post-truth era, as articulated by McIntyre (2018), is characterized by a decline in the authority of objective facts, driven by emotional narratives and algorithmic amplification. Digital platforms, such as social media, exacerbate this through filter bubbles—algorithmic curation of content based on user preferences—and echo chambers, where 58 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 users are exposed only to like-minded views (Pariser, 2011, pp. 9–12). These phenomena fragment the public sphere, undermining Mill’s vision of free speech as a tool for rational deliberation. Epistemic Harm Disinformation, defined as intentionally misleading information, erodes the epistemic foundations of public discourse. For example, false narratives about election integrity or public health measures undermine trust in institutions (O’Connor & Weatherall, 2019, pp. 45–50). Mill’s argument that free speech promotes truth assumes a marketplace of ideas where rational actors evaluate competing claims. In the post-truth era, however, disinformation distorts this marketplace, creating epistemic harm by undermining shared knowledge and rational discourse. This harm justifies intervention under a reconstructed Harm Principle, as it directly impedes truth-seeking. Psychological Harm Hate speech, amplified by social media, causes psychological harm, particularly to marginalized groups. Delgado and Stefancic (2004) argue that hate speech inflicts emotional distress and perpetuates systemic inequality, impacting mental health and social cohesion (pp. 11–15). While Mill did not explicitly address psychological harm, his concern for individual dignity suggests that speech causing significant emotional or mental injury could fall under the Harm Principle. For instance, targeted online harassment campaigns can lead to anxiety, depression, or even self-harm, necessitating a broader interpretation of harm. Democratic Harm Polarization and disinformation threaten democratic deliberation by fragmenting the public sphere. Habermas (1989) argues that a functioning public sphere requires open, rational discourse, which is disrupted by post-truth dynamics (pp. 200–205). The erosion of trust in democratic institutions, as seen in disinformation campaigns targeting elections, constitutes a democratic harm that aligns with Mill’s emphasis on societal progress. For example, false claims of voter fraud can undermine confidence in electoral processes, weakening democratic legitimacy. These new forms of harm—epistemic, psychological, and democratic—challenge the traditional application of the Harm Principle, requiring a reconstruction to address the ethical consequences of post-truth. Reconstructing the Harm Principle To address post-truth challenges, the Harm Principle must be expanded to include epistemic, psychological, and democratic harms. This reconstruction involves three key dimensions: Epistemic Harm: Speech that intentionally spreads verifiable falsehoods, such as disinformation, undermines the truth-seeking function of free expression. Interventions, such as content moderation or fact-checking, are justified when falsehoods cause significant epistemic harm, provided they respect individual autonomy. For example, labeling false content on social media platforms can mitigate harm without stifling debate. Psychological Harm: Speech that causes emotional or mental distress, such as targeted hate speech, warrants restriction when it directly harms individuals or groups. This aligns with Mill’s concern for individual dignity but requires clear criteria to avoid overreach. For instance, policies targeting doxxing or cyberbullying can address psychological harm while preserving free expression. Democratic Harm: Speech that erodes public trust or undermines democratic processes, such as disinformation campaigns, justifies intervention to protect the public sphere. This includes measures to promote transparency and accountability in digital platforms, such as requiring platforms to disclose algorithmic decision-making processes. This reconstructed Harm Principle balances freedom with responsibility, ensuring that interventions are proportionate and context-sensitive. For example, removing content that incites violence or spreads verifiable falsehoods aligns with Mill’s framework, while preserving open debate. 59 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 Fostering Epistemic Responsibility Epistemic responsibility—the duty to verify information and contribute to truthful discourse—is central to this framework. Mill’s emphasis on truth-seeking implies that individuals must exercise their freedom responsibly, avoiding the dissemination of falsehoods (Code, 1987, pp. 44–50). The study proposes three practical measures: Digital Literacy: Educating individuals to critically evaluate online information counters disinformation and fosters informed discourse. Programs teaching source evaluation and critical thinking skills can empower users to navigate complex information ecosystems (Hobbs, 2011, pp. 20–25). Ethical Communication: Encouraging norms of honesty, respect, and accountability in digital interactions aligns with Mill’s vision of a responsible public sphere. This includes promoting transparent sourcing and avoiding inflammatory rhetoric (O’Neill, 2009, pp. 167–180). Algorithmic Transparency: Regulating social media algorithms to reduce bias and promote diverse perspectives supports Mill’s goal of robust deliberation. Transparency requirements, such as those in the EU’s Digital Services Act, can mitigate filter bubbles and echo chambers (Tufekci, 2021, pp. 611–628). These measures integrate individual liberty with collective accountability, ensuring that freedom of expression serves its normative purpose without exacerbating post-truth harms. Case Studies To illustrate the application of the reconstructed Harm Principle, this section examines three case studies: the 2016 U.S. presidential election disinformation campaign, COVID-19 vaccine misinformation, and online hate speech targeting marginalized communities. Case Study 1: 2016 U.S. Election Disinformation The 2016 U.S. presidential election saw widespread disinformation, including false stories spread via social media platforms like Facebook and Twitter. For example, fabricated articles about candidates, amplified by Russianlinked accounts, reached millions of users, with one study estimating that over 126 million Americans were exposed to such content (Mueller, 2019). These falsehoods eroded public trust in electoral processes, constituting an epistemic harm by distorting the marketplace of ideas and a democratic harm by undermining confidence in democratic institutions. Applying the reconstructed Harm Principle, platforms could justify content moderation of verifiable falsehoods, such as removing posts that falsely claimed election fraud, while preserving legitimate political discourse. For instance, Twitter’s decision to label misleading tweets during the 2020 election aligns with this approach. Digital literacy campaigns could further empower voters to critically evaluate information, aligning with Mill’s truth-seeking ideal. However, challenges include balancing moderation with free speech and addressing the global reach of disinformation, which requires international cooperation. Case Study 2: COVID-19 Vaccine Misinformation During the COVID-19 pandemic, misinformation about vaccine safety spread rapidly, contributing to vaccine hesitancy and public health crises. False claims, such as vaccines causing infertility or containing microchips, were amplified by social media algorithms, with one study estimating that 60% of vaccine-related content on platforms like YouTube contained misleading information (World Health Organization, 2020). This caused epistemic harm by undermining trust in science and psychological harm by fueling anxiety and fear. The reconstructed Harm Principle supports interventions like labeling false content, promoting credible sources, or temporarily suspending accounts spreading verifiable falsehoods. For example, YouTube’s removal of anti-vaccine content in 2021 reflects this approach. Ethical communication norms, such as transparent public health messaging, further align with Mill’s emphasis on responsible discourse. Challenges include ensuring that interventions do not suppress legitimate skepticism and addressing access disparities in health information. 60 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 Case Study 3: Online Hate Speech Online hate speech, particularly targeting marginalized groups, has surged in the digital age. For example, antiSemitic and racist rhetoric on platforms like X has been linked to real-world violence, such as the 2018 Pittsburgh synagogue shooting (Benesch, 2015). Hate speech causes psychological harm by inflicting emotional distress and democratic harm by polarizing communities and undermining social cohesion. The reconstructed Harm Principle justifies restrictions on hate speech that directly incites violence or causes significant psychological harm, such as targeted harassment campaigns. Platforms like Reddit have implemented community guidelines to address such content, aligning with Mill’s concern for individual dignity. Digital literacy programs can further educate users on recognizing and countering hate speech, while ethical communication norms promote respectful discourse. Challenges include defining the boundaries of harmful speech and ensuring consistent enforcement across platforms. These case studies demonstrate the practical utility of the reconstructed Harm Principle in addressing post-truth challenges, balancing freedom with accountability. They also highlight implementation challenges, such as navigating cultural differences and ensuring equitable access to solutions. Empirical Insights Empirical data strengthen the proposed framework by highlighting the scale of post-truth challenges and the efficacy of proposed solutions. This section integrates findings from studies on disinformation, digital literacy, and algorithmic bias to support the reconstructed Harm Principle. The Spread of Disinformation Studies show that disinformation spreads faster than accurate information due to its emotional appeal. A landmark study by Vosoughi et al. (2018) analyzed 126,000 Twitter stories and found that false information spread six times faster than true information, driven by novelty and emotional resonance (pp. 1146–1151). This underscores the epistemic harm caused by disinformation, as it distorts the marketplace of ideas and undermines rational discourse. For example, during the 2020 U.S. election, false claims about mail-in voting reached millions of users, eroding trust in electoral processes (Mueller, 2019). These findings support the need for interventions under the reconstructed Harm Principle, such as content moderation and fact-checking. Digital Literacy Interventions Digital literacy programs have shown promise in mitigating disinformation. A meta-analysis by Jones-Jang et al. (2021) reviewed 20 studies and found that digital literacy interventions significantly improve individuals’ ability to identify false information, with effect sizes ranging from moderate (d = 0.5) to large (d = 0.8) (pp. 375–390). For instance, programs teaching source evaluation and critical thinking skills reduced susceptibility to misinformation about climate change and public health. However, access disparities remain a challenge, as lower-income and lesseducated groups often lack access to digital literacy resources (Hargittai, 2008, pp. 602–621). This highlights the need for inclusive education policies to ensure equitable access, aligning with Mill’s emphasis on individual empowerment. Algorithmic Bias and Transparency Algorithmic bias exacerbates post-truth challenges by amplifying divisive content. Tufekci (2021) found that social media algorithms prioritize engagement over accuracy, amplifying sensationalist and polarizing content (pp. 611–628). For example, YouTube’s recommendation algorithm was found to promote extremist content, contributing to radicalization and polarization. Regulatory frameworks, such as the EU’s Digital Services Act (European Commission, 2022), aim to enforce algorithmic transparency, requiring platforms to disclose how content is prioritized. Empirical studies suggest that transparency reduces bias and promotes diverse perspectives, supporting Mill’s goal of robust deliberation. However, implementation challenges include technical complexity and resistance from tech companies, necessitating global cooperation. 61 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 Psychological Impacts of Hate Speech Empirical research on hate speech highlights its psychological toll. A study by Delgado and Stefancic (2004) found that exposure to online hate speech increases stress, anxiety, and depression among targeted groups, particularly minorities (pp. 11–15). For example, a 2020 survey by the Anti-Defamation League reported that 44% of online harassment victims experienced mental health impacts. This supports the inclusion of psychological harm in the reconstructed Harm Principle, justifying interventions like content moderation and community guidelines. However, enforcement varies across platforms, highlighting the need for standardized policies. These empirical insights validate the reconstructed Harm Principle and highlight implementation challenges, such as scaling digital literacy, navigating regulatory complexities, and ensuring equitable access to solutions. Practical Implications The reconstructed Harm Principle offers practical implications for policymakers, educators, and platform operators in addressing post-truth challenges. This section outlines three key areas of action, grounded in the proposed framework. Policy Interventions Policymakers can leverage the reconstructed Harm Principle to develop regulations that balance freedom and responsibility. For example, the EU’s Digital Services Act (2022) provides a model for enforcing algorithmic transparency and content moderation without stifling free speech. National governments could adopt similar frameworks, requiring platforms to label disinformation, disclose algorithmic processes, and remove content that incites violence or causes significant harm. However, policies must avoid overreach, ensuring that restrictions are proportionate and transparent to align with Mill’s principles. Educational Initiatives Digital literacy programs are critical for fostering epistemic responsibility. Governments and educational institutions should integrate digital literacy into curricula, teaching skills such as source evaluation, fact-checking, and critical thinking. For example, Finland’s national digital literacy program, which trains students to identify misinformation, has reduced susceptibility to disinformation (Hobbs, 2011, pp. 20–25). Scaling such programs globally, particularly in underserved communities, can address access disparities and empower individuals to navigate post-truth challenges. Platform Accountability Social media platforms must adopt ethical communication norms and transparency measures. For instance, platforms like X could implement community guidelines that penalize hate speech and disinformation while promoting diverse perspectives. Algorithmic transparency, such as open-source recommendation systems, can reduce filter bubbles and echo chambers, aligning with Mill’s vision of a robust public sphere. Collaboration between platforms, governments, and civil society is essential to ensure accountability without compromising user autonomy. Challenges and Considerations Implementing these measures faces several challenges. First, defining the boundaries of harmful speech is complex, as cultural and legal contexts vary. Second, digital literacy programs require significant investment and may not reach marginalized populations. Third, tech companies may resist transparency due to proprietary concerns. To address these, policymakers should prioritize inclusive policies, international cooperation, and public-private partnerships. The reconstructed Harm Principle provides a normative guide for navigating these challenges, emphasizing proportionality and accountability. 62 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 Conclusion This article demonstrates that John Stuart Mill’s Harm Principle, when reconstructed to include epistemic, psychological, and democratic harms, provides a robust normative framework for addressing the ethical consequences of the post-truth era. By fostering epistemic responsibility through digital literacy, ethical communication, and algorithmic transparency, societies can uphold Mill’s vision of freedom while mitigating the harms of disinformation and hate speech. Case studies, such as the 2016 U.S. election and COVID-19 vaccine misinformation, illustrate the real-world applicability of this framework, while empirical data on disinformation, digital literacy, and algorithmic bias validate its feasibility. The study contributes to political philosophy and communication ethics by offering a reconstructed Millian framework to navigate the complexities of digital public spheres. However, limitations exist, including the need for complementary perspectives, such as Habermas’s deliberative democracy or Fricker’s epistemic injustice, to address collective and structural dimensions of discourse (Habermas, 1989; Fricker, 2007). Implementation challenges, such as access disparities and regulatory complexities, also require further exploration. Future research should focus on three areas: (1) empirical evaluations of digital literacy and transparency interventions, (2) cross-cultural applications of the framework to account for global diversity, and (3) integration of alternative philosophical perspectives to enrich the analysis. By integrating freedom with responsibility, this study lays the groundwork for a truth-oriented digital society, with implications for policy, education, and democratic deliberation. References Benesch, S. (2015). Dangerous speech: A proposal to prevent group violence. World Policy Journal, 32(1), 19–28. https://doi.org/10.1177/0740277515581396 Brink, D. O. (2018). Mill’s progressive principles. Oxford, UK: Oxford University Press. Code, L. (1987). Epistemic responsibility. Hanover, NH: University Press of New England. Delgado, R., & Stefancic, J. (2004). Understanding words that wound. Boulder, CO: Westview Press. European Commission. (2022). The Digital Services Act: Ensuring a safe and accountable online environment. Retrieved from https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-servicesact_en Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford, UK: Oxford University Press. Habermas, J. (1989). The structural transformation of the public sphere. Cambridge, MA: MIT Press. Hargittai, E. (2008). Digital inequality: Differences in young adults’ use of the internet. Communication Research, 35(5), 602–621. https://doi.org/10.1177/0093650208321782 Hobbs, R. (2011). Digital and media literacy: Connecting culture and classroom. Thousand Oaks, CA: Corwin Press. Jones-Jang, S. M., Mortensen, T., & Liu, J. (2021). Does media literacy help? A meta-analysis of media literacy interventions and their impact on misinformation resistance. New Media & Society, 23(2), 375–390. https://doi.org/10.1177/1461444820942242 McIntyre, L. (2018). Post-truth. Cambridge, MA: MIT Press. Mill, J. S. (1859). On Liberty. London, UK: John W. Parker and Son. Mueller, R. S. (2019). Report on the investigation into Russian interference in the 2016 presidential election. Washington, DC: U.S. Department of Justice. O’Connor, C., & Weatherall, J. O. (2019). The misinformation age. New Haven, CT: Yale University Press. O’Neill, O. (2009). Ethics for communication? European Journal of Philosophy, 17(2), 167–180. https://doi.org/10.1111/j.1468-0378.2009.00352.x Pariser, E. (2011). The filter bubble: What the internet is hiding from you. New York, NY: Penguin Press. Tufekci, Z. (2021). Algorithmic harms: Beyond data privacy. Columbia Law Review, 121(3), 611–628. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559 Wolff, J. (2016). An introduction to political philosophy. Oxford, UK: Oxford University Press. 63 Jurnal Fides et Ratio Vol. 10, Nomor 2, Desember 2025 World Health Organization. (2020). Infodemic management. Retrieved from https://www.who.int/healthtopics/infodemic 64