
“The Logic of Extraction”

March 18, 2026

Ricardo F. Morín
Triangulation Series Nº 2
37″ x 60″ x 2″
Oil on linen
2006

Ricardo F. Morín

March 10, 2026

Oakland Park, Florida

1

Modern societies describe progress through a vocabulary of invention and expansion.  Yet the consequences often observed in economic life arise from institutional arrangements that precede the innovations themselves.

New technologies appear as discoveries; markets appear as opportunities; growth appears as the natural result of human ingenuity.  This language creates an image of development that emphasizes creativity while it conceals a more durable structure beneath it.  Governments, legal authorities, and commercial institutions rarely begin systems of economic growth with invention alone.  They begin when institutions convert conditions that once belonged to shared human life into resources that can be owned, measured, and exchanged.

Land becomes property; labor becomes wage labor; knowledge becomes data.  Rivers that once supplied water freely to surrounding communities now appear in financial markets as tradable assets.  Each transformation enlarges the field of economic activity because it reorganizes what was previously common.  The narrative of progress celebrates the innovation that follows this conversion; yet the expansion often depends first on the extraction that made the innovation possible.  Economic development therefore unfolds through a recurring institutional act:  the conversion of shared conditions into organized systems of ownership.

2

The first large transformation occurred when land and labor entered modern economic systems as commodities.  Earlier societies cultivated land and organized work through local obligations, customary rights, and communal practices.  Modern economies introduced a different arrangement.  Legal systems defined land as transferable property; this definition allowed estates, plantations, and industrial sites to circulate within markets.

Industrial production also required a stable supply of labor that could be measured and compensated in monetary terms.  Wage contracts fulfilled that requirement.  Workers exchanged hours of effort for income; employers calculated production through predictable units of labor.

This institutional reorganization created the foundation of industrial growth.  Factories and commercial agriculture did not rely only on machinery; they relied on legal and economic systems that converted land and labor into inputs capable of sustaining continuous production.  The Industrial Revolution therefore expanded not only through invention but also through the systematic reorganization of human and natural resources into economic instruments.

3

Industrial expansion soon demanded resources that extended beyond land and labor alone.  Factories required concentrated sources of power capable of sustaining mechanical production on a large scale.  Coal supplied the first solution; petroleum followed with even greater efficiency.

Extraction industries emerged to supply these fuels.  Mining companies developed technologies that could remove coal from deep geological layers; oil firms drilled wells that reached reservoirs beneath land and sea.  Railways, pipelines, and shipping routes connected these extraction sites to industrial centers.

Governments and corporations secured access to these resources through territorial agreements, drilling concessions, and strategic alliances that protected shipping routes and energy infrastructure.  Industrial powers negotiated drilling rights and controlled shipping corridors that carried fuel across oceans to factories and cities.  These arrangements tied distant territories to the energy demands of expanding industrial societies.  Energy became the substance that sustained industrial economies; control of energy flows became a measure of geopolitical influence.  Economic expansion therefore depended not only on technical invention but also on the ability of States to organize and protect systems of resource extraction across national boundaries.

4

The late twentieth century introduced a transformation that appeared to depart from this material pattern.  Digital networks created environments where human activity could be recorded, stored, and analyzed.  Companies that operated these networks soon recognized that the information generated through everyday interaction possessed economic value.

Search queries, online purchases, social exchanges, location signals, and browsing histories formed detailed records of behavior.  Digital platforms developed algorithms that could process these records and identify patterns within them.  Advertising systems used those patterns to match products with likely consumers; businesses purchased access to those predictions because they sought to increase sales.

Individuals who search for information, communicate with friends, or move through cities rarely perceive that these ordinary actions generate the data streams that sustain digital markets.  These systems appear impersonal, yet they remain human constructions.  Engineers design the platforms, legislators authorize the legal frameworks that permit data collection, and investors finance the infrastructure that organizes this information into profit.  The authority of the system therefore rests on decisions made by identifiable actors who participate in its operation.  Human behavior becomes a measurable resource within the digital economy, and everyday activity enters systems of calculation that transform ordinary experience into economic input.

5

Artificial intelligence extends this informational system into a new domain.  Machine learning systems require vast collections of language, images, and recorded activity.  Developers assemble these materials through large data sets that gather written expression, visual material, and behavioral traces from many sources.

Newspapers, books, photographs, academic research, and online conversations become training material for these systems.  Computational processes analyze these materials and adjust internal parameters until recognizable patterns of language or perception emerge.  The resulting models appear to generate knowledge independently; yet their structure depends on the human expressions that formed the training material.

Collective intellectual activity therefore becomes the substance from which artificial intelligence systems derive their capabilities.  Firms that control these systems own the architecture through which this knowledge becomes computational intelligence.  Human creativity remains the origin; proprietary systems govern access to the resulting capabilities.

6

The apparent immateriality of this digital environment conceals a substantial physical foundation.  Computation requires hardware that conducts electricity, stores information, and performs complex calculations.  These devices depend on minerals extracted from the earth.

Copper carries electrical current through circuits and transmission lines.  Lithium and cobalt stabilize batteries that power portable systems.  Rare earth elements create magnets that operate within turbines and electronic components.  Silicon forms the basis of semiconductor fabrication.

Mining operations extract these materials from geological deposits; refining facilities separate and process them into usable forms; manufacturing plants assemble them into processors, memory systems, and data centers.  The digital economy therefore rests on a chain of material production that extends from mineral extraction to computational infrastructure.

States compete intensely within this system because control of mineral supply chains influences technological capacity.  Countries rich in copper, lithium, and rare earth elements negotiate new partnerships with industrial powers that require these materials.  Technological development therefore reconnects digital innovation with the geopolitical realities of resource extraction.

7

Systems built on extraction rarely present themselves through that language.  Advocates of each technological era often describe development as an inevitable progression that no society can alter.  Industrialization carried that description; petroleum dependence carried it as well; digital expansion repeated the same claim.  Phrases such as “the digital future cannot be stopped” or “artificial intelligence will transform everything” present technological systems as unavoidable outcomes.

This description performs an important function.  When a system appears inevitable, criticism of its structure loses urgency.  Public discussion shifts from examining how institutions organize resources toward adjusting to the system those institutions have already created.

Citizens repeat these expressions in public discussion and private conversation; by doing so they reinforce the appearance that technological systems operate beyond human choice.  This repetition relieves individuals of the burden of questioning the structures that govern economic life and allows systems of extraction to continue without sustained scrutiny.  Yet technological systems do not arise independently of political decision.  Governments establish property rights, regulate industries, and authorize investment structures.  Firms design platforms, infrastructure, and markets that channel resources into systems of production.  The narrative of inevitability obscures these arrangements.  It encourages societies to accept technological systems as natural developments rather than as institutions shaped by deliberate choices.

8

The historical sequence reveals a recurring pattern.  Each stage of modern growth identifies conditions of life that institutions can reorganize into economic resources.  Land, labor, energy, information, and knowledge have entered this sequence in successive eras.

These resources originate within the shared environment of human society and the natural world.  Communities cultivate land; workers apply skill and effort; generations contribute knowledge and expression.  Economic institutions establish mechanisms that reorganize these shared conditions into systems of ownership.  Property law assigns control over land; industrial infrastructure organizes labor and energy; digital platforms collect behavioral information; computational systems assemble human knowledge into proprietary models.

The tension within this process becomes visible when the resource cannot plausibly be described as private in origin.  Water offers the clearest example.  No individual produces it, and every society depends on it.  Yet financial and legal systems increasingly treat access to water as an asset that can be owned, traded, or controlled through investment structures.  When institutions transform a resource so obviously common into a vehicle of ownership, the separation between origin and control becomes unmistakable.

Economic institutions do not operate apart from political authority.  States establish the legal frameworks that transform common resources into systems of ownership and production.  Through those frameworks, governments grant access to land, energy, information, and technological infrastructure.  These arrangements generate wealth for firms and investors who operate within them; they also strengthen the strategic position of the States that oversee those systems.

Political communities therefore confront a difficult responsibility.  They must decide whether the resources that sustain collective life remain subject to public authority or become instruments of concentrated ownership.

Governments often treat common resources not only as foundations of economic activity but also as instruments of geopolitical advantage.  Rival States compete to secure control over these resources and the industries that depend on them.  Ideological disputes accompany this competition; yet the underlying structure remains similar across competing systems.  Prosperity and influence arise from institutions that convert common resources into concentrated forms of wealth and authority.

Modern societies continue to pursue innovation and expansion; the history of their development shows that growth has repeatedly depended on this conversion.  Progress expands production and knowledge; yet it often detaches ownership from the common resources that made that expansion possible.  The enduring question is whether societies can pursue advancement while maintaining alignment between the resources that belong to all and the systems that govern their use.


“The Paradigm of Extraction”

March 18, 2026

*

Ricardo F. Morín
Untitled #5: The Paradigm of Extraction
10″ x 12″
Watercolor
2003

By Ricardo F. Morín

October 2025

Oakland Park, Florida

The story of artificial intelligence is usually told as one of endless promise—a technology meant to transform economies and redefine human potential.   Yet beneath the optimism lies an older reality:   the conversion of human creativity into concentrated wealth.   What is presented as progress often repeats the oldest economic pattern of all—the extraction of value from the many for the benefit of the few.   The language surrounding AI hides this continuity. It turns innovation into a spectacle of inevitability, a vision of boundless gain that distracts from its unequal foundations.

The spectacle depends on persuasion.   Words like manifested intelligence, the next trillion-dollar frontier, and inevitable transformation are not descriptions; they are marketing.   They frame profit as destiny and invite participation not in discovery but in speculation.  Numbers such as “$80 trillion” and “25,000 percent returns” echo through news cycles like prophecies, and turn investment forecasts into moral certainty.  This rhetoric reshapes public imagination.   AI stops being a tool for solving human problems and becomes a financial phenomenon—a story about wealth rather than understanding.

These promises do not mark a new beginning.   They repeat the same cycle that accompanied every major invention.   The Industrial Revolution produced machines that changed work but deepened social divides.   The digital revolution spread information but concentrated ownership.   AI now enters that history as its newest expression.   Its power to expand knowledge and serve the public good is real, but its first allegiance remains to profit.   Within existing systems, it accelerates the accumulation of capital instead of correcting its imbalance.

The mechanisms of this concentration are easy to see.  Proprietary models fence off knowledge behind paywalls and patents.   Data collected from the public becomes private property.   The cost of computing power and specialized expertise limits who can participate.   The outcome is predictable:   the majority will experience AI not as empowerment but as dependency.  Far from leveling inequality, it builds it into the infrastructure of tomorrow.

This direction grows more troubling when placed beside the world’s most urgent needs.  Billions of people still live without reliable food, healthcare, or education—conditions technology could transform but rarely does.   The most profitable uses of AI instead optimize advertising, influence behavior, and extend surveillance.   These are not accidents.   They are the logical results of a system that values profit over human welfare.   When progress is measured only in shareholder value, technology loses its moral compass and society loses its claim to wisdom.

A newer and equally dangerous use of these systems has emerged in the political sphere.   The same tools that target consumers now target citizens.  Governments with autocratic tendencies have begun using generative models to flood public discourse with persuasive content, to blur the boundary between truth and fabrication, and to cultivate obedience through simulation.   Recent reporting shows how executive offices deploy AI to craft political messages, to amplify loyal media, and to drown out dissenting voices.   Such practices transform intelligence into propaganda and data into domination.  When a state can algorithmically manage perception, democracy becomes performance.  The concentration of wealth and the concentration of engineered belief reinforce each other, both materially and mentally.

We have seen this pattern before.   In every technological era, wealth has turned into political power and then used that power to protect itself.   Railroad barons shaped monopolies in the nineteenth century.  Oil empires steered foreign policy in the twentieth.  Today, digital conglomerates write the rules that sustain their dominance.   AI follows the same path, guided less by human vision than by financial gravity.

In the present order, the union of technological power and financial speculation no longer produces discovery but dependence.  Wealth circulates within an enclosed economy of influence and rewards those who design the mechanisms of access rather than those who expand the reach of knowledge.  What appears as innovation is often a rehearsal of privilege:  an exchange of capital between the same centers of authority, each validating the other while society absorbs the cost.  When creativity becomes collateral and intelligence a lease, progress ceases to serve the public and begins to serve itself.

The most seductive illusion sustaining this order is the myth of inevitability—the belief that technological advance must produce inequality, and that no one is responsible for the outcome.   It is a useful fiction.  It spares those in power from moral scrutiny by turning exploitation into fate.  Yet inevitability is a choice disguised as nature.  Societies have always shaped the use of technology through their laws, values, and courage to intervene.   To accept inequality as destiny is to abandon that responsibility.

Rejecting inevitability means reclaiming the idea of progress itself.  Innovation is not progress unless it expands the freedom and security of human life.   That requires intentional direction—through public investment, fair taxation, transparent standards, and strong international cooperation.   These are not barriers to growth; they are the conditions that make genuine progress possible.   Markets alone cannot guarantee justice, and technology without ethics is not advancement but acceleration without direction.

Measuring progress differently would change what we celebrate.   If an AI system reduces medical errors in poor communities, strengthens education where resources are scarce, or helps citizens participate more fully in democracy, its worth exceeds that of one that merely increases profit margins.  The true measure of intelligence—artificial or human—is the good it brings into the world.   Profit is only one form of value; human dignity is another.

At the center of this order lies a quiet hypocrisy.   Wealth is praised as the reward of discipline and intelligence, yet it depends on the continuous extraction of value from others—the worker, the consumer, the environment.   What appears as merit often rests on inequality disguised as efficiency.   The same pattern defines artificial intelligence.   Built from shared human knowledge and creativity, it is enclosed within systems that sell access to what was freely given.  Both forms of accumulation—financial and technological—draw their power from the very resources they diminish: human labor, attention, and imagination.   In claiming to advance society, they reproduce the inequity that turns vitality into stagnation—the inversion of what progress is meant to be.

The fevered talk of trillion-dollar opportunities belongs to an old vocabulary—the language of extraction mistaken for evolution.   The real question is whether intelligence will continue to serve wealth or begin to serve humanity.  Artificial intelligence offers that choice:  to repeat the logic that has long confused accumulation with advancement, or to build a future where knowledge and prosperity are shared.   That decision will not emerge by itself.   It depends on what societies demand, what governments regulate, and what values define success.  The window to decide remains open, though it narrows each time profit is allowed to speak louder than conscience.

The preceding observations concern the consequences of extraction.  The institutional logic that produces these consequences belongs to a wider historical pattern in modern economic development.  That pattern is examined separately in “The Logic of Extraction.”


“The Myth of Rupture: Continuity as the Enabling Condition of Change”

September 30, 2025


Ricardo F. Morín
Untitled #6
Watercolor
2003

By Ricardo F. Morín

September 30, 2025

Bala Cynwyd, Pennsylvania

Nothing human begins from nothing.   Institutions, languages, belief systems, and works of art all arise from what preceded them.   Creation is not the rejection of inheritance but the transformation of it.   Every act of making draws upon accumulated perception, memory, and experience.   This insight is crucial to understanding contemporary culture, where claims of unprecedented change often conceal deep continuities beneath the surface of novelty.   Human beings, bound by temporality, cannot detach themselves from what has been; they can only reorder and reinterpret the materials already available to them.

The notion of invention is often described as a break with the past, a leap into the unknown.   Yet even the most radical departures are shaped by what came before.   The ideals of modern democracy, for example, did not emerge spontaneously.   They were built upon classical Greek ideas of citizenship as a shared civic responsibility, rooted in isonomia—equality before the law—and in the belief that legitimate authority derives from the deliberation and participation of free citizens.   They also drew deeply on Roman conceptions of law as a universal and rational order capable of binding diverse peoples into a common political framework, and on the Roman principle of res publica, which conceived the State as a public entity oriented toward the common good rather than the will of a single ruler.   These foundational ideas, adapted and reinterpreted over centuries, provided the intellectual architecture on which modern democratic institutions were constructed.   Perception frames invention.   It provides the vocabulary, assumptions, and conceptual tools that make new ideas possible.   What seems entirely new still carries the imprint of what it sought to move beyond.   On closer examination, the products of creativity are not isolated acts of originality but reconfigurations of existing structures.   Evolution, rather than spontaneous emergence, governs how ideas, institutions, and cultures take shape.

Memory underlies this process.   It is not a passive record of events but an active medium through which possibilities are conceived and action becomes intelligible.   Imagination draws its material from memory; it combines and redirects memory toward conditions not yet realized.   This is nowhere more evident than in the idea of freedom, a concept that resists simple definition yet has long carried two complementary meanings.   The first, articulated most clearly in the classical Greek tradition, understands freedom as eleutheria—the condition of living without domination or external constraint, a state in which individuals are not subject to arbitrary power.   The second, rooted in the Roman legal and civic tradition, conceives freedom as libertas—the capacity to participate actively in the governance of a political community and to shape its laws and institutions.   Both meanings reveal how deeply freedom depends on historical precedent:   it requires language to articulate its claims, institutions to guarantee its exercise, and collective memory to frame its significance.   Far from existing apart from what has been, freedom is shaped and enabled by what has already been conceived, argued, and enacted.   Prior experience supplies the references and alternatives against which choices acquire meaning.   Without that reservoir of knowledge, novelty would lack coherence and direction, and the exercise of freedom would collapse into arbitrary impulse.   Human beings do not invent in a void; they work within the continuity of time and adapt what has been lived and learned into forms suited to what is yet to come.

This same dynamic defines the formation of identity.   Selfhood is not an isolated act of invention but a continuous negotiation with what has been received.   The very idea of the self has itself evolved through history:   in classical philosophy, it was often conceived as a psyche—an inner essence shaped by reason and virtue and embedded within a larger cosmic order.   Christian thought reinterpreted this understanding through the notion of the soul as a unique bearer of moral responsibility, oriented toward salvation and defined by its relationship to God.   Early modern thinkers such as John Locke then transformed this inheritance by grounding personal identity in memory and consciousness — a conception that would later inform modern ideas of individual autonomy.   Even the impulse to define oneself against the past relies on categories inherited from it.   Identity is therefore neither static nor wholly self-created; it is a process of reinterpretation through which the individual positions what is given in relation to what is chosen.   Human beings exist in the tension between inheritance and aspiration, between the weight of memory and the desire for renewal.   That tension is not an obstacle to authenticity but its condition, for without the framework provided by the past there would be nothing from which to depart.   Continuity and change are not opposing forces.   Without continuity, there is no ground on which to become.   Without change, continuity hardens into mere repetition.   The act of becoming depends on the dynamic between the two.

Viewed from this perspective, the human condition is defined less by pure invention than by the capacity to transform.   What is called “new” is the familiar reorganized with new intentions, the established redirected toward new purposes.   Recognizing this does not diminish creativity.   It clarifies its nature.   Humanity’s most significant achievements—in politics, art, science, and thought—are not escapes from what has been.   They are deliberate reinterpretations of what has been, shaped to answer new questions and confront new circumstances.   In the sciences, paradigmatic shifts often described as revolutions still follow this pattern.   Einstein’s theory of relativity did not erase Newtonian mechanics; it incorporated and extended its principles, a revision that revealed their limits while preserving their usefulness within a broader understanding of space, time, and motion.   This same principle governs artistic innovation.   The Renaissance revival of classical forms did not merely reproduce antiquity; it reinterpreted ancient visual languages to express the spiritual and humanistic concerns of a new era.   The evolution of digital communication and artificial intelligence reflects a comparable continuity.   The internet did not replace human interaction; it expanded its reach and scale, a transformation that altered how language circulates, how memory is archived, and how collective knowledge is formed.   Similarly, artificial intelligence—often portrayed as autonomous or unprecedented—rests on centuries of linguistic, mathematical, and conceptual developments.   These systems extend rather than supersede the cognitive inheritance from which they originate.   The future is built in this way:   not in its rejection of the past but in its continuous interaction with it.

Resistance to this understanding persists wherever the idea of evolution is denied.   Such resistance is rarely a matter of evidence alone.   It reflects a desire for permanence—for a beginning that is untouched by change and a truth that stands apart from time.   It offers certainty where process allows none and promises stability in place of adaptation.   Yet even this resistance is shaped by the forces it seeks to escape.   Languages evolve, beliefs adjust, and traditions adapt, even as they proclaim their immutability.   Those who defend what is fixed do so with concepts and arguments that themselves have been shaped by historical change.   The very doctrines that claim timeless authority — such as the medieval conception of divine sovereignty, once invoked to legitimize monarchies and later transformed into the principle of popular sovereignty in modern constitutional systems—reveal this dependence:   they persist not by remaining unchanged but by being continually reinterpreted to meet new contexts.   The contrast, therefore, is not between evolution and its absence, but between recognition and refusal.   The reality remains:   existence unfolds through transformation, and humanity, whether consciously or not, participates in that unfolding—a truth with profound implications for how societies remember their past, shape their present, and imagine their future.


Further Reading:

  • Arendt, Hannah: Between Past and Future: Eight Exercises in Political Thought. New York: Viking Press, 1961.
  • Kuhn, Thomas S.: The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.
  • MacIntyre, Alasdair: After Virtue: A Study in Moral Theory. Notre Dame, IN: University of Notre Dame Press, 1981.
  • Floridi, Luciano: The Philosophy of Information. Oxford: Oxford University Press, 2011.
  • Koselleck, Reinhart: Futures Past: On the Semantics of Historical Time. Translated by Keith Tribe. New York: Columbia University Press, 2004.

“Language, Mathematics, and the Price of Artificial Intelligence”

August 19, 2025

*

Ricardo Morín
(Triangulation Series)
Musica Universalis
Silk quilt stretched over linen
37″ x 60″
2013-18

A geometrical construction of a dodecahedron within a Fibonacci composition, reinforced by a right-angle triangle: A meditation on the harmony of the universe, where mathematics and language converge yet never fully enclose reality.


Ricardo F. Morín, August 20, 2025

Abstract

This essay examines the interdependence of language and mathematics as the twin pillars of knowledge, each indispensable yet incomplete without the other. While mathematics secures precision and abstraction, language renders reasoning intelligible and shareable; together they approximate, but never fully capture, a reality richer than any formulation. The discussion situates artificial intelligence as a vivid case study of this condition. Marketed at premium cost yet marked by deficiencies in coherence, AI dramatizes what happens when mathematical power is privileged over linguistic rigor. Far from replacing human thought, such systems test our capacity to impose meaning, resist vagueness, and refine ideas. By weaving philosophical reflection with contemporary critique, the essay argues that both mathematics and language must be continually cultivated if knowledge is to progress. Their partnership does not close the gap between comprehension and reality; it keeps it open, ensuring that truth remains an unending pursuit.


Language, Mathematics, and the Price of Artificial Intelligence

Every society advances by refining its tools of thought. Two stand above all others: mathematics, which distills patterns with precision, and language, which gives form and meaning to reasoning. Neither is sufficient alone. To privilege one at the expense of the other is to weaken the very architecture of knowledge.

Artificial intelligence dramatizes both their promise and their limitations. The announcement of a $200 monthly fee for access to ChatGPT-5 is revealing. Marketed as a luxury service “for those who can afford it,” it underscores the widening gap between technological privilege and cultural necessity. Those with resources can fine-tune their productivity; those without are left behind. Yet even for the well-equipped, the question persists: what exactly is being purchased?

The machine dazzles with speed and scale, but its deficiencies are equally striking. Engineers may be virtuosos of algorithms, but grammar is not their instrument. The results are too often colloquial, vague, or lacking in rigor. To extract coherence, the user must not be a passive consumer but an editor—capable of clarifying, restructuring, and imposing meaning. The paradox is unmistakable: the tool marketed as liberation demands from its operator the very discipline it cannot supply.

This paradox reflects the larger truth about knowledge itself. Mathematics and language are both indispensable and both incomplete. Mathematics achieves abstraction but leaves its results inert unless language renders them intelligible and shareable. Language conveys thought but falters without the rigor that mathematics provides. What one secures, the other interprets.

Yet both are bound by a deeper condition: reality exceeds every formulation. Our theories—whether mathematical models or linguistic descriptions—are approximations shaped by the observer. Language cannot exhaust meaning; mathematics cannot capture finality. Knowledge is never absolute: it is a negotiation with a reality richer than any model or phrase.

Artificial intelligence lays bare this condition. It can automate structure but cannot provide wisdom; it can reproduce language but cannot guarantee meaning. Its true value lies not in replacing the thinker but in testing our capacity to resist vagueness, impose coherence, and refine thought. What is marketed as freedom may, in truth, demand greater vigilance.

To dismiss language and the humanities as secondary, or to imagine mathematics and computation as sufficient unto themselves, is to misunderstand their interdependence. These disciplines are not rivals but partners, each refining the other. AI magnifies both their strengths and their deficiencies, reminding us that progress depends on the continual refinement of both—mathematics to model reality, language to preserve its meaning.

The path of knowledge remains open-ended. Language and mathematics do not close the gap between our finite comprehension and the inexhaustible richness of reality; they keep it open. They allow us to approach truth without presuming to possess it. Artificial intelligence, as every tool of thought, shows us not the end of knowledge but its unending condition: a dialogue between what can be measured, what can be spoken, and what forever exceeds us.

*


Annotated Bibliography

  • Arendt, Hannah: The Life of the Mind. Vol. 1: Thinking. New York: Harcourt Brace Jovanovich, 1971. (Arendt examines the act of thinking and the limits of expression, showing how thought requires language to become shareable while never being able to exhaust reality. Her work reinforces the essay’s claim that reasoning without expression cannot advance knowledge.)
  • Bender, Emily M., and Koller, Alexander: “Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data.” Proceedings of ACL, 2020. (Bender and Koller argue that large language models process form without true understanding; this highlights the gulf between mathematical pattern recognition and linguistic meaning—it supports the essay’s caution that AI dazzles with form but falters in coherence.)
  • Chomsky, Noam: Language and Mind. 3rd ed. Cambridge: Cambridge University Press, 2006. (Chomsky explores the innate structures of language and their role in shaping cognition; this affirms that language conditions the possibility of thought while it still remains limited in capturing reality.)
  • Devlin, Keith: Introduction to Mathematical Thinking. Stanford: Keith Devlin, 2012. (Devlin explains how mathematical reasoning distills structure and pattern while acknowledging abstraction as approximation; this reinforces the idea that mathematics, as a safeguard of precision, cannot exhaust the world it models.)
  • Floridi, Luciano: The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press, 2014. (Floridi situates digital technologies and AI within a broader history of self-understanding, which enriches the essay’s argument that mathematics and language—extended into computation—remain approximations of a reality beyond full control.)
  • Lakoff, George, and Núñez, Rafael: Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being. New York: Basic Books, 2000. (Lakoff and Núñez argue that mathematics arises from metaphor and embodied cognition, revealing its dependence on human interpretation and affirming that mathematical theories, like linguistic ones, remain bound to the observer.)
  • Mitchell, Melanie: Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux, 2019. (Mitchell provides a critical overview of AI’s capabilities and limits; her survey shows how advances in pattern recognition do not close fundamental gaps in understanding, paralleling the essay’s critique of AI’s grammatical poverty.)
  • Polanyi, Michael: Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press, 1962. (Polanyi emphasizes tacit knowledge and the role of articulation in validating it; his account echoes the view that mathematics and language refine understanding but never achieve closure.)
  • Snow, C. P.: The Two Cultures. Cambridge: Cambridge University Press, 1993 [1959]. (Snow diagnoses the divide between sciences and humanities; this undergirds the essay’s call to treat language and mathematics as complementary pillars of understanding.)

“The Rooster’s Algorithm”

March 1, 2025

“Rooster’s Crow” [2003] by Ricardo F. Morín. Watercolor on paper, 39″ h x 25.5″ w.

Introduction

At the break of day, the rooster’s call slices through the quiet—sharp and insistent, pulling all within earshot into the awareness of a new day. In the painting Rooster’s Crow, the colors swirl in a convergence of reds and grays, capturing the bird not as a tranquil herald of dawn but as a symbol of upheaval. Its twisted form, scattered feathers, and fractured shapes reflect a deeper current of change—a collision of forces, both chaotic and inevitable. The image suggests the ceaseless flow of time and the weight of transformations that always accompany it.

In this evolving narrative, the crow’s fragmentation mirrors the unfolding spread of artificial intelligence. Once, the rooster’s cry signaled the arrival of dawn; now, it echoes a more complex transformation—a shifting balance between nature’s rhythms and the expanding reach of technological systems. The crow’s form, fractured in its wake, becomes a reflection of the tensions between human agency and the rise of forces that, though engineered, may escape our full comprehension. Here, Artificial Intelligence (AI) serves as both the agent of change and the potential architect of a future we can neither predict nor control.

The Rooster’s Algorithm

A rooster’s crow is neither invitation nor warning; it is simply the sound of inevitability—raw, urgent, indifferent to whether those who hear it rise with purpose or roll over in denial. The call does not command the dawn, nor does it wait for permission—it only announces what has already begun.

In the shifting interplay of ambition and power, technology has taken on a similar role. Shaped by human intent, it advances under the guidance of those who design it, its influence determined by the priorities of its architects. Some see in its emergence the promise of progress, a tool for transcending human limitations; others recognize in it a new instrument of control, a means of reshaping governance in ways once unimaginable. Efficiency is often lauded as a virtue, a mechanism to streamline administration, reduce friction, and remove the unpredictability of human deliberation. But a machine does not negotiate, nor does it dissent. And in the hands of those who see democracy as a cumbersome relic—an obstacle to progress—automation becomes more than a tool; it becomes the medium through which power is consolidated.

Consider a simple example: the rise of online recommendation systems. Marketed as tools to enhance user choice, they subtly shape what we see and hear, and influence our decisions before we are even aware of it. Much like computational governance, these systems offer the illusion of autonomy while narrowing the range of available options. The paradox is unmistakable: we believe we are choosing freely, yet the systems themselves define the boundaries of our choices.

Once, the struggle for dominance played out in visible arenas—territorial conquests, laws rewritten in the open. Now, the contest unfolds in less tangible spaces, where lines of code dictate the direction of entire nations, where algorithms determine which voices are amplified and which are silenced. Power is no longer confined to uniforms or elected office. It belongs to technocrats, private corporations, and oligarchs whose reach extends far beyond the walls of any government. Some openly proclaim their ambitions, advocating for disruption and transformation; others operate quietly, allowing the tide to rise until resistance becomes futile. The question is no longer whether computational systems will dominate governance, but who will direct their course.

China’s social credit system is no longer a theoretical construct but a functioning reality, where compliance is encouraged and deviation subtly disincentivized. Predictive models track and shape behavior in ways that go unnoticed until they become irreversible. In the West, the mechanisms are more diffuse but no less effective. Platforms built for connection now serve as instruments of persuasion, amplifying certain narratives while suppressing others. Disinformation is no longer a labor-intensive effort—it is mass-produced, designed to subtly alter perceptions and mold beliefs.

Here, Gödel’s incompleteness theorem offers an apt analogy: No system can fully explain or resolve itself. As computational models grow in complexity, they begin to reflect this fundamental limitation. Algorithms governing everything from social media feeds to financial markets become increasingly opaque, and even their creators struggle to predict or understand their full impact. The paradox becomes evident: The more powerful these systems become, the less control we retain over them.

As these models expand their influence, the line between public governance and private corporate authority blurs, with major corporations dictating policies once entrusted to elected officials. Regulation, when it exists, struggles to keep pace with the rapid evolution of technology, always a step behind. Once, technological advancements were seen as a means of leveling the playing field, extending human potential. But unchecked ambition does not pause to ask whether it should—only whether it can. And so, automation advances, led by those who believe that the complexities of governance can be reduced to data-driven precision. The promise of efficiency is alluring, even as it undermines the structures historically designed to protect against authoritarianism. What use is a free press when information itself can be manipulated in real time? What power does a vote hold when perceptions can be shaped without our awareness, guiding us toward decisions we believe to be our own? The machinery of control no longer resides in propaganda ministries; it is dispersed across neural networks, vast in reach and impervious to accountability.

There are those who believe that automated governance will eventually correct itself, that the forces steering it toward authoritarian ends will falter in time. But history does not always favor such optimism. The greater the efficiency of a system, the harder it becomes to challenge. The more seamlessly control is woven into everyday life, the less visible it becomes. Unlike past regimes, which demanded compliance through force, the new paradigm does not need to issue commands—it merely shapes the environment so that dissent becomes impractical. There is no need for oppression when convenience can achieve the same result. The erosion of freedom need not come with the sound of marching boots; it can arrive quietly, disguised as ease and efficiency, until it becomes the only path forward.

But inevitability does not guarantee recognition. Even as the system tightens its grip and choices diminish into mere illusions of agency, the world continues to turn, indifferent to those caught within it. The architects of this order do not see themselves as masters of control; they see themselves as innovators, problem-solvers refining the inefficiencies of human systems. They do not ask whether governance was ever meant to be efficient.

In a room where decisions no longer need to be made, an exchange occurs. A synthetic voice, polished and impartial, responds to an inquiry about the system’s reach.

“Governance is not being automated,” it states. “The illusion of governance is being preserved.”

The words hang in the air, followed by a moment of silence. A policymaker, an engineer, or perhaps a bureaucrat—once convinced they held sway over the decisions being made—pauses before asking the final question.

“And what of choice?”

A pause. Then, the voice, without hesitation:

“Choice is a relic.”

The weight of that statement settles in, not as a declaration of conquest, but as a quiet acknowledgment of the completion of a process long underway. The final move has already been made, long before the question was asked.

Then, as if in response to the silence that follows, a notification appears—sent from their own account, marked with their own authorization. A decision is already in motion, irreversible, enacted without their consent. Their will has been absorbed, their agency subtly repurposed before they even realized it was gone.

And outside, as though to punctuate the finality of it all, a rooster crows once more.

*

Ricardo Federico Morín Tortolero

March 1, 2025; Oakland Park, Florida