Over the last decade, digital filter bubbles have become a widely discussed phenomenon across the various fields of media and communication studies. This paper asks why they are problematic for the functioning of the public sphere. It argues that algorithmic personalisation can lead to the fragmentation, polarisation, and radicalisation of the public sphere because of the complex relationship between human agency and technology, which reinforce one another through habitual adaptation. Grounded in the concept of habit, this theoretical framework enables a critique of existing empirical research on the filter bubble effect: the main problem is not information isolation or the reduced accessibility and visibility of selected content, but the habitual adaptation of content to individual users, which can explain why users stick to certain content. The article concludes that algorithmic personalisation should be studied as a broader historical phenomenon indicative of the decline of the public sphere, which is itself caused by the conflict between public and commercial interests.
Even if the fear of getting caught in an isolated bubble is an exaggerated, simplified, and even misleading metaphor, concerns regarding algorithmic personalisation (and its influence on the public sphere) remain legitimate. The main problem of the filter bubble phenomenon is that it blurs the difference between the pleasure principle and the reality principle, to use the terminology of Freudian psychology. It also blurs the difference between the religious community (grounded in common belief) and the secular public sphere (grounded in the encounter of different individual opinions), as described by Tönnies ([1922] 1998): “to believe is a matter of the heart, while to opine is [a matter of] the head” (see also Splichal 1999, 121–122). Modern, rational, and secular public spheres developed by distancing themselves from the religious community (in which individual beliefs cannot deviate from the norm), while algorithmic personalisation, with its ability to adapt to beliefs, encourages the formation of sectarianism and the rigidity of opinions. This is why we believe that the principle of “public-worthiness” (Splichal 2018, 8) must somehow be written into the functioning of recommender algorithms, even if this is not in line with the commercial logic of the attention economy.