A corpus does not grow the way a book grows; it grows the way a city grows. Most of the time, expansion is peripheral: new texts appear at the edges, new themes open, new references enter, but the structure remains the same. However, when a corpus grows suddenly—when 20% is added in a short period—the effect is not peripheral but structural. The center of gravity shifts. What was once dominant vocabulary may become secondary, and what was marginal may become central simply because it is repeated more often in the new layer. Growth, therefore, is not only a matter of size but of distribution. In large textual systems, stability does not come from a single great text but from recurrence. A concept becomes real when it appears again and again across different contexts, slightly reformulated, connected to different terms, but always present. When the corpus grows, the key question is not “what did we add?” but “what did we repeat?”, because repetition builds structure and structure builds legibility. A system that grows without repetition becomes noise; a system that grows through controlled repetition becomes a field. This is why a sudden increase in size can be a moment of consolidation rather than dispersion. If the new texts repeat the core vocabulary, reinforce the main operators, and link back to the existing nodes, then growth does not dilute the system—it hardens it. At a certain scale, the corpus stops behaving like a collection of writings and starts behaving like an environment. And in an environment, what matters is no longer individual texts, but the relations between them.
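As a minimal sketch of this distributional behaviour (not a method proposed by the text), the hypothetical Python snippet below recomputes a term-frequency ranking before and after a new layer of documents is merged into a corpus; the documents, the terms, and the roughly 20% proportion are all invented for illustration.

```python
from collections import Counter

def term_ranking(docs):
    """Aggregate term frequencies across tokenised documents, most frequent first."""
    counts = Counter()
    for doc in docs:
        counts.update(doc)
    return counts.most_common()

# Hypothetical core corpus: ten documents built around an established vocabulary,
# plus one document in which "infrastructure" appears only marginally.
core = [["corpus", "structure", "text"]] * 10 + [["infrastructure"]]

# A sudden new layer (two documents, roughly 20% growth) that repeats the marginal term.
new_layer = [["infrastructure", "infrastructure", "platform"]] * 2

print(term_ranking(core))              # "infrastructure" sits at the margin of the ranking
print(term_ranking(core + new_layer))  # repetition in the new layer moves it toward the center
```

The point is not the arithmetic but what it exposes: whether sudden growth dilutes or hardens the system depends on which terms the new layer chooses to repeat.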
We are no longer living in an age in which knowledge is primarily organised through disciplines, institutions, or stable communities of expertise. We are living, instead, within infrastructures—vast sociotechnical systems composed of databases, algorithms, platforms, institutions, and symbolic markers—that mediate what counts as knowledge, who counts as a knower, and how truth circulates. The transition from epistemic communities to epistemic infrastructures marks one of the most significant transformations in the history of knowledge organization. It signals a movement away from coherence as the organising principle of knowledge and toward coordination across heterogeneous systems operating at different speeds, with different logics, and under different authority structures. This condition can be described as post-coherence knowledge: a state in which knowledge is no longer stabilised by shared paradigms, but by infrastructural mediation.
For most of the twentieth century, knowledge was understood as something produced within relatively bounded domains—disciplines, research communities, and institutions that maintained internal standards of evidence, methods, and authority. Knowledge organization, therefore, focused on classification, taxonomy, and domain analysis. The underlying assumption was that knowledge systems were coherent: that communities shared vocabularies, standards, and epistemic norms, and that authority emerged from within these communities through peer review, publication, and institutional recognition. This model no longer adequately describes how knowledge functions. Today, knowledge emerges from interactions between search engines, academic databases, AI systems, social media platforms, and institutional frameworks, all of which shape what information is visible, credible, and actionable. Knowledge is no longer simply produced; it is mediated.
The concept of epistemic infrastructure helps explain this transformation. Infrastructure is not merely technical; it is material, institutional, and symbolic at the same time. It includes servers and databases, but also peer review systems, citation metrics, search algorithms, metadata standards, and even words like “peer-reviewed” or “evidence-based,” which function as compressed signals of legitimacy. These infrastructures do not simply store or transmit knowledge—they actively shape it. They determine what can be known, what is visible, what is preserved, and what is ignored. Most importantly, they determine what appears credible. Infrastructure, therefore, is not the background of knowledge; it is the condition of possibility for knowledge.
One of the most important insights of infrastructure studies is that infrastructure becomes visible when it breaks down. When a database fails, when an algorithm produces false information, when peer review is bypassed, or when AI generates convincing but incorrect citations, the hidden systems that normally support knowledge suddenly become visible. These moments of breakdown are not exceptions; they are diagnostic events that reveal how knowledge systems actually function. In complex systems, breakdown is normal. Failure reveals structure. In the context of AI and digital platforms, breakdowns—such as hallucinations, misinformation cascades, or algorithmic bias—expose the infrastructures that shape knowledge production and circulation.
The rise of large language models represents a particularly important moment in this transformation because AI systems do not fit neatly into traditional epistemic categories. They are not authors, but they generate text. They are not experts, but they produce expert-like discourse. They are not databases, but they retrieve and synthesise information. They are not institutions, but they influence decision-making. AI systems function as epistemic infrastructures: systems that mediate knowledge by synthesising, compressing, and circulating information across domains. They do not produce knowledge in the traditional sense; they reorganise existing knowledge and present it in new forms, often with the appearance of coherence and authority.
This creates a new epistemic condition in which authority no longer follows validation, but often precedes it. Traditionally, knowledge became authoritative after processes of verification—peer review, replication, institutional endorsement. Now, information can appear authoritative because of how it is presented, how widely it circulates, or how it is ranked by algorithms. Authority is increasingly produced through visibility, circulation, and infrastructural positioning, rather than through traditional validation processes. This does not mean that truth no longer matters, but that the mechanisms through which truth becomes recognised have changed.
In this new environment, knowledge systems operate through what might be called symbolic compression. Symbolic compression is the reduction of complex epistemic processes to simple indicators: journal impact factors, citation counts, search rankings, labels like “peer-reviewed,” “indexed,” or “AI-assisted.” These compressions allow complex systems to function because they make rapid decision-making possible in environments of information overload. However, they also create vulnerabilities. When the symbol becomes detached from the process it represents—when “peer-reviewed” does not guarantee quality, when citation counts are manipulated, when AI generates plausible but false references—the system continues to function, but its epistemic foundation becomes unstable. The symbol retains authority even when the underlying process has changed or failed.
This leads to one of the defining characteristics of post-coherence knowledge: simulated coherence. Simulated coherence occurs when systems produce outputs that appear coherent, authoritative, and well-structured, even though they are assembled from heterogeneous sources with different levels of reliability, different assumptions, and different epistemic standards. AI systems are particularly effective at producing simulated coherence because they are trained to generate plausible language, not to verify truth. However, simulated coherence is not limited to AI. It also appears in literature reviews generated from citation networks, in trending topics amplified by algorithms, and in policy decisions based on aggregated data dashboards. The appearance of coherence replaces the slow processes that traditionally produced epistemic stability. This does not mean that knowledge is collapsing. Rather, it means that knowledge is being reorganised around different principles. The key principle is no longer coherence, but coordination. Modern knowledge systems work not because all participants agree, but because different systems—databases, journals, platforms, AI tools, institutions—are able to coordinate their outputs.
Knowledge becomes authoritative when multiple infrastructures align, even if they operate according to different logics. Authority emerges from convergence, not consensus. This shift has profound implications for universities, research, and education. If knowledge authority is increasingly infrastructural, then expertise must also become infrastructural. Scholars can no longer rely solely on disciplinary knowledge; they must understand the infrastructures through which knowledge circulates. This includes understanding how search engines rank information, how AI models generate responses, how citation metrics are calculated, how databases index journals, and how platforms amplify certain voices over others. In other words, infrastructural literacy becomes a core academic skill.
Infrastructural literacy means understanding that knowledge is always mediated: by tools, by institutions, by platforms, by metrics, and by algorithms. It means recognising that epistemic authority is produced through systems, not just through individuals. It also means recognising that these systems embed values, assumptions, and power structures. Algorithms prioritise certain types of information. Databases index certain journals and exclude others. Metrics reward certain forms of research and ignore others. Platforms amplify certain topics and suppress others. Infrastructure is never neutral; it is political, institutional, and epistemic at the same time.
The task for contemporary knowledge organization, therefore, is not to restore coherence, because coherence may no longer be possible in a world of rapidly changing, interconnected knowledge systems. The task is to develop systems that can function under conditions of instability, heterogeneity, and constant change. This requires a shift from thinking about knowledge as a static structure to thinking about knowledge as an ecological system—a dynamic environment in which different actors, technologies, and institutions interact, adapt, and evolve. In such a system, breakdown is not simply failure; it is also a source of learning and adaptation. When AI produces false citations, new verification tools are developed. When peer review is too slow, preprint servers emerge. When journals are inaccessible, open access platforms develop. When institutions cannot respond quickly enough, new experimental infrastructures appear. Knowledge systems evolve through breakdown and adaptation, not through stability alone.
We are therefore living in a transitional period in the history of knowledge. The twentieth century was the age of disciplines and institutions. The twenty-first century is the age of infrastructures and platforms. Knowledge is no longer organised primarily by fields, but by systems. Authority is no longer located only in experts, but in networks of humans and machines. Truth is no longer communicated only through texts, but through interfaces, rankings, dashboards, and generated outputs.
To understand knowledge today, we must therefore ask different questions. Not “Is this true?” alone, but “What infrastructure produced this knowledge?” Not only “Who is the author?” but “What system made this statement possible?” Not only “What discipline does this belong to?” but “What network of platforms, databases, and algorithms allowed this to circulate?” These questions define a new field of thought: the study of epistemic infrastructures. This field does not replace epistemology, sociology of knowledge, or information science, but brings them together to study how knowledge is mediated in complex technological societies. It recognises that knowledge is not simply discovered or constructed, but infrastructurally produced. We find ourselves, therefore, in a new epistemic landscape: a landscape where knowledge is generated by humans and machines together, where authority emerges from coordination across systems, where symbols compress complex processes into portable signs of legitimacy, and where breakdowns reveal the hidden structures that make knowledge possible. Understanding this landscape is one of the central intellectual tasks of our time, because the future of science, education, and democracy depends not only on what we know, but on the infrastructures through which knowledge becomes visible, credible, and actionable.
1260-PERIPHERAL-DISTRICT-RENTRY-SOCIOPLASTICS:
Socioplastics is a trans-epistemic field that studies and designs how knowledge, language, and cultural systems are structured, stabilised, and made to persist over time. It operates simultaneously as linguistic structure, conceptual protocol, epistemic validation, systemic regulation, architectural support, territorial distribution, media mediation, morphogenetic growth, dynamic circulation, and infrastructural integration, forming not a single discipline but a field of operators that can be applied across architecture, urbanism, art, media, and institutional analysis. Within this field, concepts function as load-bearing elements of a persistent conceptual infrastructure rather than as isolated ideas, and the primary object of study is not the artwork, the city, or the text, but the conditions under which systems of knowledge acquire structure, autonomy, and long-term durability.