Substack's democratic promise: theorizing our 2025 information economy
Scrolling through Substack, you’ll encounter an eye-watering number of culturally relevant, politically ripe, and savvy-seeming think-pieces. Whether it’s soft-secession or the sexual politics of Sabrina Carpenter’s persona (both Substack articles I’ve read recently), you’ll find a piece--or twenty--of writing to indulge your curiosity. This is sort of breathtaking--both for the volume and range of content and, too, for the opportunity for lay voices, including your/my own, to have cultural purchase. It’s also the apex of a profound shift in who controls the production and dissemination of knowledge and the means by which those processes occur. Today, on Substack, anyone can broadcast their opinion. If we choose to, we can all be social scientists, cultural critics, curators, connoisseurs, gurus, and commentators. This hasn’t always been the case.
The shift towards a more democratic dissemination of knowledge is part and parcel of the internet era. Substack is the 2025 manifestation of a long-brewing reconfiguration; from the rise of early blogs, to Wikipedia, to TikTok, a more horizontal information economy has emerged. There’s a lot to celebrate in this shift. Making space for more voices, at its best, can be a powerful opening, leading to rich explosions of creativity, thoughtful contributions to discourse, and connection, collaboration, and debate. At its worst, though, it can lead to the amplification of damaging--and, often, untruthful--rhetoric that reverberates with profound import in politics and culture.
Indeed, both the joy and the peril of Substack is that anyone (even me, here, right now) can write a think-piece. And, too, any think-piece can have real sway.
Good, bad, and ugly--it’s worth exploring how this seeming shift in knowledge production and dissemination connects to our larger political economy. Indeed, in my view, this evolution is not inevitable but contingent, with deep ties to how power is structured more broadly.
We will give these ideas holistic consideration--that’s the whole point of this piece--but, first, let me explain how I got here. In a heavily saturated digisphere, Substack has emerged as the final boss for exploring these dynamics, perhaps because it’s (or its users are) most committed to the artifice of legitimacy. Unlike other social media (and yes, Substack now considers itself to be a social media), Substack’s self-professed raison d’être is serious business; its emphasis on the written word almost feels anachronistic in a landscape littered with slop and short-form content. In other words, Substack is explicitly invested not just in writing but, by extension, knowledge (the medium is the message, people!). In contrast with other platforms--take TikTok or Instagram--where plenty of very serious, high-stakes knowledge sharing happens in a sort of reverse-engineered way, with creators repurposing widely-trafficked platforms for information dissemination, Substack is designed with the circulation of knowledge in mind.1
It’s not just the medium but also some of the authorship and content. Well-established journalists and thinkers such as Anne Applebaum and Heather Cox Richardson (my mom’s Substack icons) have made the platform their home alongside highly regarded up-and-coming writers and publications like Ken Klippenstein and The Contrarian. These writers add to Substack’s credibility and reputation as a place to find and consume meaningful information and serious thought.
At any rate, many months ago, earnest, knowledge-seeking Coco read a piece, served to her by the Substack algorithm, about the connection between Tylenol consumption during pregnancy and autism incidence. This was long before RFK Jr. mainstreamed these dubious claims--the ideas were still clearly fringe--but it piqued my curiosity. I read the article skeptically, as I generally would any piece making broad scientific conclusions, particularly about neurodiverse populations. But the piece seemed kind of legit; it was written by an MD/scientist with links to studies throughout. It took effort and discernment on my part to realize its science was shoddy.
That experience scared me, and it should raise alarm bells for you too. After all, that piece is one of thousands like it on here: writers purporting to make factual, evidence-based claims under the veneer of expertise; or--much more innocuously--sharing half-baked takes or publishing pieces with little regard for craft; and readers bolstering these pieces through the Substack feed, adding another perceived layer of legitimacy through their endorsements (nearly every “note” on my feed is some version of “this is one of the best things I’ve read on this app” or “must read for anyone concerned with xyz”). I worry that Substack’s democratic promise has opened the door to something more nefarious: a platform without guidelines, standards, or ethics.
When it comes to considerations of quality, the consequences are often relatively benign and worth incurring for the sake of expanded access. There’s good reason to challenge the historical arbiters of taste, who were and are often white, monied, and seeking to reinforce a cultural (and therefore political and economic) status quo. Too, for those same reasons, the doors to recognition were/are tightly controlled and highly inaccessible, especially for marginalized writers and creators. If Substack doesn’t require an MFA from Iowa, I say “good!” or at least “mostly good!” Do I care about craft? Yes! And I do feel some concern about how the quality and complexity of writing has shifted in the internet era (and, too, about changes in the capacity of readers), but that’s a separate essay. For our purposes here, the tradeoff is simple: if more people writing and sharing their work means there’s more shitty content, I’ll take the shitty content any day. The act of thinking, creating, and communing is, dare I say, sacred and increasingly rare, so anything that empowers more people to engage in these practices is of tremendous value. This is the democratic promise I’ve referred to, and it’s truly worth celebrating!
Here’s where things get more complicated: when the content of a piece has real-world implications with the potential to cause harm. This goes for content that’s explicitly antagonistic or hateful--which, thankfully, I haven’t seen on my side of Substack, though I’m sure it exists. It also matters deeply for pieces making arguments that do--or should--rest on evidence, like the autism-Tylenol piece. These articles not only purport to make factual arguments but also, often implicitly, make ideologically motivated normative claims that carry real political and moral weight. Again, it would have been totally possible to read that autism-Tylenol piece and come away accepting its argument as factual and its call to action as justified. It’s not hard to see how our newfound horizontality’s informational free-fall--this lack of enforced standards--is high stakes.
No doubt, the unfactual Substack information economy is an appendage of our global disinformation crisis, an issue that extends far beyond the bounds of this beautiful orange app. Commentators have described our era as “post-truth,” and it doesn’t take much time on the internet to see just how accurate a descriptor that is. As we know, this abandonment of truth has taken our politics by storm. It is frankly breathtaking that the head of HHS, RFK Jr., in his capacity as our nation’s top health official, went on to parrot the same dangerous claims about autism incidence that I had read on this very platform. That untruth is one of many he--and the right-wing political and media apparatus--have circulated and promulgated.2 We’ve now reached a point where, despite their lack of factuality, these untruths have become representative, and then constitutive, of reality for many, both on the level of experience and through this legitimation (in law, media, etc.). It’s a hair-on-fire, house-burning-down situation.
But I digress. Both more (shitty) content and widespread disinformation are products of Substack’s horizontality and worth exploring (we’ll spend more time on what’s driving the latter later in this piece), but their emergence as features is not my main point here. Rather, I’d argue that they’re symptomatic of larger shifts in both the distribution of power and how we relate to authority. Indeed, I think Substack is fertile ground for exploring one of the key dynamics at play in our shifting information economy: an angry and full-throated rejection of elites and/or elitism. Stick with me here.
Depending on where you lie on the political spectrum, the elites in question will vary; they might be pro-vaccine, Democratic-coded scientists, political leaders on both sides of the aisle, or corporate tycoons like Bezos, but I think it’s fair to say that there’s broad discontentment and a deep-seated feeling that those with power are abusing it. Of course, I find myself sharing in and sympathizing with much of that anger. Wealth inequality in the U.S. is more extreme than in over 100 nations--so profound that, by some measures, the wealthiest three Americans hold more than the bottom half of us combined. It’s no wonder that many feel our system is dysfunctional--it is--and power only continues to concentrate at the top without any meaningful remediation.
We see this sentiment reflected in widely documented waning trust across society, but it has also manifested in more diffuse ways throughout our politics and culture. Of particular note here is a broad rejection of elites and the systems they power and reinforce: systems that are supposed to serve us but instead have left us feeling like we’ve been thrown to the dogs. This includes the media, with its traditional keepers of expertise and its world-shaping power. Indeed, expertise--particularly expertise originating from and circulated by elite institutions--is no longer qualifying, and, for some, lack of expertise is actually advantageous. In the wake of these dynamics, we’ve arrived at an information economy (and a politics) where lay voices are almost instinctively trusted above legacy institutions and figures.
This has manifestations for people across the ideological spectrum. After all, why should I, the lefty that I am, read the NYT when it publishes idiotic (not to mention insulting) headlines such as “Did liberal feminism ruin the workplace?” (which, BTW, was previously simply titled “Did Women Ruin the Workplace?”...LOL)? I mean, I’m all for viewpoint diversity, but that is… beyond the pale… and a horrible misuse of the responsibility that comes with a platform like the NYT’s. Our information economy is broken: shaped by and shaping our dysfunctional politics and chasing our dystopic (attention) economy’s bend towards virality.
And, indeed, these are the same reasons why I, and many of my friends, now use Reddit as one of the first places we go to gather information. Amidst so much uncertainty, amidst a barrage of content, really, amidst a collapse of what feels like some semblance of a shared and sane reality, another person’s voice is instinctively more trustworthy.3 It’s an interesting paradox.
Following the current, Substack offers a disruption to the status quo of elites and their expertise. It’s speaking to--and simultaneously fueling--a real hunger for change.
I don’t think Substack’s disruptive presence is inherently bad--in fact, I think there’s something refreshing, powerful, and necessary about it. At the same time, given both the larger politics at play and the absence of ethical standards, its emergence is not so much a democratic reconfiguration as a lawless land whose cowboys may--and are allowed to--brandish their pens recklessly despite the high-stakes consequences. In other words, we now have a platform where a random doctor can write an article about Tylenol use and it might have real uptake.
Yet for users, many of these dynamics lie simmering below the surface with no real impact on their experience of the platform. Instead, we’re fed--and we feel--a version of Substack’s horizontality as a democratic opening and a reclamation, a recalibration of deeply unbalanced power. The other side of our rejection of institutional expertise and our corresponding trust in new and/or disruptive and/or lay voices is the feeling--and to some extent the experience--of agency! We now have the power not only to generate information/content but to engage with knowledge beyond the scope of the stale, snotty, status-quo-loving elite. WE are the arbiters.
To me, these are the threads that underpin--from a citizen-consumer perspective--the reconfiguration of our information economy into something more horizontal, and some of its interplay with our politics. But here’s the critical thing: this is all happening on a platform (or, as it were, across many platforms and channels), and that platform is just as much a part of the story when it comes to making sense of this shift.
Let’s start with the big and maybe obvious point: it can’t be overstated what a crucial role algorithms and profit incentives play in shaping the terms and conditions of our changing information economy. In 2024, Meta’s profits climbed to $62 billion, a 59% increase from 2023. And, on the day of this writing, Zuck is the world’s sixth-richest man (X owner Elon Musk is still in first place). The only company edging out Meta? ByteDance, TikTok’s owner, whose revenue in just the first quarter of 2025 was $43 billion. Social media is a business, and platforms benefit from the successful distribution of and engagement with content. We, the writers--the content creators--generate the material, but it is the platform that actually dictates the spread of information.
Yes, there’s some degree of agency in that chain: you can ‘hack’ the algorithm by formatting your content in trend-savvy and gripping ways, you can post on specific schedules, you can gain traction by commenting on or engaging with others’ content, or you can pay to promote your content. And, certainly, there’s real money to be made for creators. It’s no coincidence that there’s a sizable subset of the content-generation world focused solely on teaching people how to succeed online, which, in many instances, is itself a monetized business (shoutout to the grifters… uh… I mean, content strategists! lol). (Dear reader, I DO understand the emergence of this industry and why there’s a market for those wanting to find success online. For better or worse, this is where so much of business, politics, and culture happens, so forgive my snark.)
Ultimately, though, we are at the behest of the powers that be, the mighty algorithm. And, to put it simply, most algorithms are not democratic in either their design or governance.
(Information about Substack’s particular algorithm was challenging to find; this write-up of interviews with Substack’s co-founders was the most concrete source I came across).
At any rate, by now there is ample evidence that most social media platforms boost content that is emotionally charged (read: sensationalist). And, crucially, when it comes to political content--or our earlier umbrella of content with “real world implications with the potential to cause harm”--that prioritization remains intact. In other words, it’s not some passive, aw-shucks kind of situation that leads inflammatory and unfactual content like the Tylenol-autism essay to be boosted; it’s by design.
It’s worth noting that Meta ended its fact-checking program in January (following X’s model), and other platforms aren’t doing much better (1 in 5 videos on TikTok includes misinformation per this 2022 analysis; here’s one example with electoral implications from the UK). So, when you DO get sensationalist mis/disinformation on your feed, there aren’t any meaningful buffers for intervention. Might I mention that this sensationalist bias often skews right, leading to lopsided radicalization. To repeat: the dissemination of this politically charged, often exaggerated--or patently false--content is hugely consequential for our politics itself. We’ve seen, for example, in the case of the Tylenol-autism situation, how fringe ideas circulated on social media are amplified and then taken up at the highest levels of our government, with real impacts on real people.
Not surprisingly, Substack has had public brushes with these issues in the past. First, in 2022, the platform faced scrutiny for platforming fringe anti-vax conspiracy theorists like Joseph Mercola, who warned readers that the “unvaccinated might soon be imprisoned in government-run camps” and shared bogus studies “purporting to use government data to prove that more children had died of covid shots than from the coronavirus itself.” Substack’s founders responded to the blowback in a post defending their “hands-off approach to content moderation.” Then, in late 2023, Substack found itself in the limelight again after an article in The Atlantic identified 16+ newsletters using Nazi symbols, prompting an open letter, signed by 200+ prominent Substackers, pushing for the removal of this content. Substack’s founder responded in a note where he “[re]-committed to upholding and protecting freedom of expression, even when it hurts.” It was only after writers left (or threatened to leave) the platform that Substack actually removed some of the newsletters.
Here, I think we need to address the elephant in the room and talk explicitly about the free-speech dimension of all this. As many know, in the U.S., hate speech (which doesn’t even have a gov’t definition) is protected by the First Amendment unless it takes the form of a “clear and present danger,” aka an active threat of violence. Similarly, most disinformation is also protected under the law, with notable exceptions for defamation, fraud, perjury, and false advertising. Are my concerns about disinformation and standards for speech on Substack really just legally ignorant lefty hand-wringing? Well, not quite! Because Substack and other platforms are private entities, they themselves are entitled to First Amendment rights. In other words, per this explainer, platforms “can moderate the content people post on their websites without violating those users’ First Amendment rights.” This means that it’s incredibly challenging for the government to legally regulate how platforms handle speech and, simultaneously, that platforms have a great deal of immunity and autonomy in their content moderation policies. So far this has looked like a relatively--and increasingly--hands-off approach to most speech, including hateful speech and disinformation, but, as we know, because algorithms aren’t neutral, it’s exactly that kind of speech that ends up getting boosted.
I found it incredibly challenging to track down Substack’s content guidelines (it was only after quite some time down this rabbit hole that I managed to), but, unsurprisingly, they are bare bones. There are the expected restrictions on copyright infringement, porn, and content focused on illegal activities. When it comes to hate speech, writers cannot “incite violence based on protected classes,” a statement which is clarified as “credible threats of physical harm.” I remain skeptical that this is adequate in the face of today’s challenges.
This raises a question that we’ve been flirting with throughout: if the current state of affairs on these platforms has serious negative consequences, what might an alternative look like? This, of course, goes beyond our legal framework and raises thorny philosophical and moral questions. While I don’t have a firm answer to this Big Question, I will share some thoughts in line with my argument so far. Generally, I follow the orthodoxy in thinking that free speech is a core tenet of a democratic society. Perhaps less conventionally, I’d argue that people on the left have an especially strong vested interest in protecting free speech, given that radical ideas that seek to subvert systems of domination are often among the first to be punished in restrictive eras and societies.4 Call me a lib if you want, but I think the lefties should come around on this one.
At the same time, if it isn’t clear by now, I think our 21st-century information economy faces novel challenges when it comes to speech, and thus some sort of standards are needed. To beat a dead horse: we are in a disinformation crisis, with a truly mind-boggling barrage of false and deceptive information inundating our media ecosystem. And, as generative AI becomes more sophisticated by the day, there’s a whole other stream of false, manufactured content that’s increasingly challenging to identify as such. If we return to the legal threshold of a “clear and present danger,” we might argue that although the threat posed by much of the speech allowed on these platforms is not directly violent, the way these platforms approach content moderation poses an existential threat to our democracy and society. I’ll also add that plenty of vile right-wing white supremacist and antisemitic content--the kind of sensationalist stuff boosted by algorithms--has a strong historical basis in profound acts of violence, and that when it comes to this kind of speech, the idea of a threat is not abstract but one that comes with a devastatingly large death toll. This, to me, is another area where more serious consideration of content moderation is needed.
While there may be some tension between my pro-free-speech values and the concerns I’ve raised here and throughout, I don’t think they are irreconcilable. I’m not going to hash out the details or provide a roadmap here (essay part 2?!), but, in my view, for our platforms to live up to their democratic promise, we urgently need more robust guidelines or standards. It goes without saying that any such standards need to be crafted with the protection of free speech at the center, because any meaningful infringement would be just as much of an impingement on democracy.
Discussions about free speech--or the circulation of discourse more broadly--often invoke a marketplace of ideas: a forum for free intellectual exchange where winning views rise to the top. It’s a powerful metaphor, one that “drives many of the [Supreme] Court’s First Amendment decisions.” It is perhaps our most potent cultural imaginary for the exchange of ideas, and as such, Substack and the like pose in its image.
Unfortunately, as we’re seeing, the analogy simply doesn’t hold up. For one, as we’ve welllllll established by now, algorithms are engineered by tech executives with profit motives in mind and therefore promote content regardless of its factuality, undermining the idea of a true marketplace without regulation. (This is actually sort of an interesting inversion, insofar as the moneymaking incentive would suggest the primacy of capitalist principles, among them deregulation [if we suppose no intervention = neutrality]. But, as we know from every facet of our economy, what’s even better than no regulation is creating rules and systems that specifically enrich YOU [aka lopsided algorithms and tech monopolies] while making a stink about the tyranny of law, etc.)
The marketplace analogy also falls flat when we consider the experience of the consumer (us! you! me!), who a) is not rational and b) lacks full information. I’ll start with point b) since we’ve already touched on it: we lack full information insofar as platforms show us false information without disclaimers and, also, insofar as we lack robust media literacy education with which to engage content critically. But point a), which I guess we’ve also sort of touched on, I think is more interesting. The fact that highly emotional content is what’s salient for viewers and drives the algorithm is a critical piece of the puzzle. Consuming content is a visceral, affective experience by design. It’s not just the content itself that plays on our emotional responses but the medium as a whole, particularly with regard to short-form video. (Jon Stewart speaks very well about this at some point in this interview, as I recall.) Too, the way platforms structure our interactions and engagement--via scrolling, etc.--also ties into the profoundly emotional nature of our information economy. At all of these levels, scrolling elicits physiological responses, especially in our neuroendocrine systems--or, to put it more plainly, embodied experiences of adrenaline, stress, pleasure, et cetera. If you don’t believe me, take it from Oxford, whose 2025 word of the year is “rage bait.” And this doesn’t even touch on the attention-economy, addictiveness side of things. Rational? Surely not. Need I remind you that this is all to the tune of massive profits for these companies?
One other key consideration: the (social) media ecosystem is not self-contained, and unevenness comes not only from platform design and tech execs but also from the vast and disproportionate power of corporations and the ultrawealthy over discourse. It’s lobbying. It’s advertising. In a society that’s so deeply unequal, most any marketplace will reflect and perpetuate those same dynamics. Different ideas will carry different weight, not simply because of their value but for all of the reasons listed above.
Whether or not you find the marketplace of ideas to be a compelling model for democratic knowledge circulation, it’s clear that today’s information economy has a different, less democratic shape despite the posturing. This all seems pretty self-evident when it comes to, say, TikTok. But what about sweet, dear Substack? Here we find a home for the written word that, as I said earlier, “feels anachronistic in a landscape littered with slop and short-form content.” Just take it from Substack’s CEO:
Here, you have a media ecosystem with a different set of rules that serve you. This is a place for real human relationships built around stories and culture. The feed is designed to lead you to deeper experiences. Meaningful connection—not shallow attention—is the fuel for the whole machine.
These conditions can make online media both stimulating and nourishing, prizing quality longform work while also facilitating discussion and discovery. They reward depth and trust. They ensure creators own their relationships with their audience and can make real money doing the work they believe in. They protect creative freedom, including freedom of expression itself.
This set of rules attracts the world’s best independent voices, who in turn bring the world’s smartest audiences. The resulting culture is different from what’s in the other places.
So, yes, Substack is social media, but it’s more than that, too. It’s a media system built with a different outcome in mind: not an endless scroll, but a culture worth participating in.
“Serves” ME!!!!?????!!!!! Me!!!? Wow! I can rest easy knowing that “deeper experiences” and “meaningful connection” are just a few “this is a must read article” re-stacks away.
Dear reader, I fear she’s a wolf in sheep’s clothing. Why? BECAUUUUUSE of stuff like the Tylenol-autism piece. It’s the exact same sensationalist content, just cloaked in layers of legitimation, both through the medium (writing) and through the not-expert-expert paradox of Substack’s democratic promise (the random doctor paying attention to fringe scientific-ish lit who can shed light on a medical phenomenon that ‘those institutional scientists’ don’t want you to know about, and who should be trusted precisely because of his discord with those very institutions). Also why? BECAUUUUUSE Substack has become a social media platform, all gussied up with a feed and an algorithm to boot. It’s a smart move--business-savvy. Indeed, in July, Substack’s valuation crossed the $1B threshold after it raised $100M in a Series C round; predictably, its profitability is growing in step, and, crucially, this growth has been contingent on Substack’s transition from a mere newsletter site to a social platform.
Let’s go back to 2022 when Substack first faced controversy for platforming fringe anti-vaxxers. Amidst the blowback, its founders held firm in keeping up the content. I was struck by their response to the criticism, especially their attention to many of the same issues I’ve discussed here:
“We allow writers to publish what they want and readers to decide for themselves what to read, even when that content is wrong or offensive… We believe this approach is a necessary precondition for building trust in the information ecosystem as a whole. The more that powerful institutions attempt to control what can and cannot be said in public, the more people there will be who are ready to create alternative narratives about what’s ‘true,’ spurred by a belief that there’s a conspiracy to suppress important information. When you look at the data, it is clear that these effects are already in full force in society...
Our information systems didn’t create these problems but they do accelerate them. In particular, social media platforms that amplify contentious content contribute to the intensification and spread of mistrust. At the same time, they ratchet up the pressure on traditional media – legacy print, TV news, radio – to vie for attention at all costs, with similar consequences…
This is the area where we hope to make a contribution with Substack. While the attention economy generates power from exploiting base impulses and moments of attention, a healthy information economy would derive power from the strength and quality of relationships that are built over time…
The key to making this all work is giving power to writers and readers.”
I mean… yeah. There’s a lot to like here. The idea that “when you use censorship to silence certain voices or push them to another place, you don’t make the misinformation problem disappear but you do make the mistrust problem worse” is the best counterargument to my insistence that we need more robust standards. And the idea that Substack was designed with systemic distrust in traditional media and the disinformation crisis in mind should feel like a panacea.
Here’s the thing, though: Substack in 2022 was a very different place from Substack in 2025. At that time, the platform didn’t even have an app, let alone a feed or video capabilities. And, as the platform moves ever deeper into lucrative social media territory--in both its design and its funding model--I think the terms of the conversation need to shift accordingly.
I’d like, here, to return to my main point, which--despite all the ground we’ve now covered--is to draw attention to the rejection of traditional experts and institutions that I believe undergirds the emergence of today’s information economy. I wrote, with regard to legacy media like the NYT: “Our information economy is broken: shaped by and shaping our dysfunctional politics.” While I didn’t use this language then, some of what I was getting at was that those institutions feel like they don’t serve us in part because they seem to lack ethics, to have abandoned the values that once made them trusted sources. You’ll note, though, that it is precisely that language--that emphasis on a lack of ethics--that I’ve been hollering about with regard to social media platforms this whole time. And now, finally, we can get to my real main point: we’re dealing with two sides of the same coin.
Our rejection of elites is a rejection in spirit only. We deride the NYT only to line Zuck’s pockets. Today’s up-and-coming media elite, those who control our beloved social media platforms, are not technocrats in presentation but, often, oligarchic technocrats in practice. We’ve found ourselves in an arguably much more dangerous cultural environment: one that has profoundly disrupted how we disseminate knowledge while reifying the very power structures that made us need disruption in the first place.
If part a) of my argument was that our information economy is shifting to become more horizontal in keeping with our cultural climate, and part b) was to examine the way that horizontality is threatened (or perhaps even illusory) because of where power is located in the design and ownership of platforms, part c), right now, aims to tie these threads together by resituating this discussion in the bigger picture.
Substack cannot solve the deep inequality or dysfunctional politics that are sculpting the contours of 2025’s information economy. But, for now, it still differs from platforms like TikTok and Instagram in meaningful ways. It still has founders who are thinking critically about these issues and it is still very much evolving in its structure. I want to believe that the democratic promise can be real.
To keep the promise alive, I do think we need more robust guardrails for content moderation. And I think we need to take care to design algorithms that not only refrain from disproportionately amplifying sensationalist content but also do not contribute to an information ecosystem that reflects and recreates the dynamics of today’s political economy (at least beyond what is truly unavoidable). I am but one voice, and I’m glad that the democratic promise is alive enough that I can share my thoughts here. That is horizontality in action. But, to get at the bigger picture, what I’d really advocate for is not my particular ideas but for more transparency and much greater collective agency. These are actionable items. It shouldn’t be hard to find Substack’s content moderation policy or concrete information about how its algorithm is designed. And, really, if the platform wants to live up to its democratic promise, there should be opportunity for readers and writers alike to have a say in how the platform is governed. The only real antidotes to an olig-technocracy lie in power to the people.
Substack may seem like an odd frontier for one of the most urgent moral imperatives of our era, but I believe--clearly, having written this--that it’s worth real consideration. There is an opening and an opportunity here for meaningful intervention: a chance to shape our information economy to reflect something closer to democracy. Whether that’s truly possible under our political and economic conditions is not a trivial matter, but, so long as we read, write, and publish these godforsaken think-pieces and consume media, we must take real care to probe and push the mechanisms by which we do so.
Whatever this distinction is worth--there’s certainly reason to assert that these platforms ultimately manifest the same behaviors and mechanisms--my interactions with Substack as a purveyor of serious thought have led me here.
Our serious lack of media literacy education is, of course, a HUGE part of this problem as well.
I think this dynamic is also critical to understanding Trump’s appeal for many--his positioning as an outsider, a disruptor (and certainly he has been destructive), is what there’s a political and cultural appetite for right now. He’s also made choices in line with this anti-expert sentiment (virtually no experts in his cabinet!) and has clearly reaped political rewards as a result of this messaging. Obviously, he only stands to reproduce wealth, create profound destruction, and work in the interest of a powerful few, but the “drain the swamp,” anti-elite attitude of his campaign is no coincidence.
One of the most fascinating threads of Trump’s second term has been watching his administration lambast the left for woke/PC culture that impinges upon free speech while being demonstrably hostile on legitimate free speech issues. The most obvious example is his weaponization of antisemitism as a blanket dismissal of any pro-Palestinian speech. It is CHILLING how much both academic freedom and free speech on and off college campuses have been undermined during Trump’s presidency. This era is one where leftists can see clearly how valuable actual free speech protections are, and the not-so-distant McCarthy Era offers just as compelling evidence.


