Every morning, millions of developers around the world open VS Code, launch GitHub Copilot, and begin their day's work. They type a few characters and AI auto-completes entire blocks of code; they describe a feature and AI generates a complete implementation. Productivity has certainly improved -- GitHub claims Copilot can boost developer task completion speed by 55%.[1] But behind this efficiency revolution, a fundamental question has been obscured: when your work increasingly depends on a tool you cannot control, cannot understand, and cannot even leave behind, are you the master of the tool, or its servant?

I. Data Colonialism: Origins and Evolution of the Concept

From Land to Data: The Continuation of Colonial Logic

The word "colonialism" evokes the conquest and plunder of the Americas, Africa, and Asia by European powers from the fifteenth century onward. The core logic of traditional colonialism was to seize land, extract resources, exploit labor, and establish relationships of dependency. The colonized lost control over their own land and the fruits of their labor, becoming appendages of the colonial power's economic system.[2]

In 2019, communication scholars Nick Couldry and Ulises Mejias introduced the concept of "data colonialism," arguing that the digital age is replaying the logic of colonialism -- only what is being plundered is no longer land, but data.[3] They contend that tech giants, through various "data relations," convert human daily behaviors, social interactions, and even physiological states into data commodities that can be captured, stored, analyzed, and monetized. This "continuous commodification of human life" constitutes a new form of colonial exploitation.

Generative AI: A New Phase of Data Colonialism

The emergence of generative AI has pushed data colonialism into a new phase. Traditional data colonialism operated in the mode of "surveillance capitalism": platforms collected your behavioral data for targeted advertising.[4] But generative AI does more than surveil -- it "learns." It absorbs thousands of years of accumulated human knowledge, creativity, and craftsmanship, compresses them into model parameters, and then sells them back to us as a subscription service.

This process involves a fundamental asymmetry: the data used to train GPT-4 includes Wikipedia, public code on GitHub, academic papers, news reports, novels, blog posts -- the crystallization of humanity's collective intelligence, contributed by millions of people.[5] Yet these contributors neither consented to having their work used nor received any compensation. More ironically still, once the AI model is trained, these original contributors must pay for a subscription to "use" the model trained on their own work.

II. The Data Reality: Who Is Using AI, and Who Is Being Used by AI

Developer Dependence on AI

Let us first examine the situation in software development. According to the Stack Overflow 2024 Developer Survey, 76% of respondents reported they are using or planning to use AI tools for programming.[6] This figure represents a 6-percentage-point increase over the 70% recorded in 2023. Among developers already using AI tools, the most commonly used are:

  • ChatGPT: used by 82.1% of AI users
  • GitHub Copilot: 45.3%
  • Visual Studio IntelliCode: 23.7%
  • Tabnine: 8.6%

GitHub's internal data is even more striking. As of 2024, GitHub Copilot has over 1.8 million paying users globally and has been adopted by more than 50,000 enterprises.[7] Among developers using Copilot, an average of 46% of code is AI-generated, and in certain languages (such as Java), this proportion exceeds 60%.[8]

The Plight of Writers and Designers

Software development is just the tip of the iceberg. In content creation, AI's penetration has been equally rapid. A 2024 survey found that 85.1% of marketers use AI to generate content, and 73% of companies use generative AI for copywriting.[9] In the design field, tools like Midjourney, DALL-E, and Stable Diffusion are transforming the visual creation process -- many designers openly acknowledge that clients have begun requiring them to use AI tools to speed up output and reduce costs.

These data paint a sobering picture: knowledge workers -- whether writing code, writing copy, or designing -- are binding their workflows to AI tools at an astonishing pace. This binding delivers efficiency gains, but it also creates deep dependency.

III. From Coder to Sharecropper: The Formation of a New Labor Dependency

Echoes of the Sharecropping System

Let us borrow the metaphor of "sharecropping" to understand the current predicament. Under the traditional sharecropping system, farmers did not own the land; instead, they leased it from landlords to cultivate. They possessed "the freedom to labor" -- they could choose to work hard or slack off, could choose this plot or that one -- but this freedom was a constrained freedom. They depended on the land system; they could not survive apart from it. Their harvest had to be shared with the landlord; the fruits of their labor could never fully belong to them.[10]

Today's developers are in much the same position. They do not own the AI models -- those are the proprietary assets of OpenAI, Google, and Anthropic. They do not own the data used to train AI -- even though their own code may be part of it. They do not even fully understand how AI works -- for most developers, AI is a black box, an oracle that must be trusted but cannot be verified.[11]

From "Craft" to "Prompt": The Hollowing Out of Skills

A deeper issue is "deskilling." Labor sociologist Harry Braverman, in his 1974 classic Labor and Monopoly Capital, argued that capitalism tends to decompose complex crafts into simple, standardizable tasks, thereby reducing dependence on skilled workers and driving down labor costs.[12] This process runs in an unbroken thread from the artisans of the Industrial Revolution to today's knowledge workers.

Generative AI is accelerating this process. When developers increasingly rely on Copilot to write code, are they still cultivating their own programming abilities? When designers increasingly rely on Midjourney to generate images, is their visual literacy deteriorating? Research shows that over-reliance on GPS navigation weakens human spatial cognition;[13] could over-reliance on AI coding tools similarly erode developers' programming intuition?

A comment by a senior developer on Hacker News struck a nerve: "I've noticed that after using Copilot for extended periods, I start forgetting certain library APIs. Things I used to write from memory, I now habitually wait for AI to suggest. This makes me uneasy -- my skills are being outsourced to a system I cannot control."[14]

The Productivity Trap

Those who push back against this critique will say: "But AI really does increase productivity!" This is true. GitHub's research shows that developers using Copilot completed tasks 55% faster and reported greater job satisfaction with less frustration.[15] Isn't that a good thing?

The question is: who ultimately benefits from the "productivity" gains? In traditional employment relationships, the gains from productivity improvements tend to be captured by capital -- workers produce faster, but wages do not increase proportionally.[16] Under the new AI-assisted model, this asymmetry may be even more severe: developers become more productive, but this means the same output can be achieved with fewer people -- in other words, the efficiency dividend of AI may translate into pressure for layoffs.

More subtle still are the "hidden costs." Using AI tools requires paid subscriptions (Copilot from $19 per month, ChatGPT Plus at $20 per month);[17] AI-generated code requires additional review and testing time; and errors introduced by AI may be harder to detect and harder to debug. These costs are often obscured by the narrative of "productivity gains."
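The hidden-cost argument can be made concrete with a back-of-the-envelope calculation. The sketch below uses only the figures quoted in this section ($19 per month for Copilot, a claimed 55% task speedup) plus a purely hypothetical review-overhead fraction; it illustrates the shape of the argument, not any measured result.

```python
# Back-of-the-envelope sketch of the "hidden costs" argument.
# MONTHLY_SUBSCRIPTION and SPEEDUP come from figures quoted in the text;
# REVIEW_OVERHEAD is a hypothetical assumption for illustration only.

MONTHLY_SUBSCRIPTION = 19   # USD/month, the Copilot price cited above
SPEEDUP = 0.55              # GitHub's claimed 55% faster task completion
REVIEW_OVERHEAD = 0.10      # assumed extra time spent reviewing AI output,
                            # as a fraction of the original task time

def effective_speedup(raw_speedup: float, review_overhead: float) -> float:
    """Net speedup once assumed extra review time is counted.

    A task that took 1.0 time units now takes 1/(1 + raw_speedup),
    plus the assumed review overhead.
    """
    ai_time = 1.0 / (1.0 + raw_speedup) + review_overhead
    return 1.0 / ai_time - 1.0

net = effective_speedup(SPEEDUP, REVIEW_OVERHEAD)
annual_cost = 12 * MONTHLY_SUBSCRIPTION
print(f"headline speedup: {SPEEDUP:.0%}")
print(f"net speedup after assumed review overhead: {net:.0%}")
print(f"annual subscription cost: ${annual_cost}")
```

Under these assumptions, a modest review overhead cuts the headline 55% gain by more than a third before the subscription fee is even counted -- which is precisely why such costs stay invisible inside a single "productivity" number.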

IV. Cognitive Outsourcing and the Loss of Agency

The Double-Edged Sword of the Extended Mind

In 1998, philosophers Andy Clark and David Chalmers proposed the "extended mind" hypothesis: cognitive processes occur not only within the brain but can extend to external tools and environments.[18] From this perspective, notebooks, calculators, and smartphones are all extensions of our cognitive system. AI is merely the latest form of this extension.

But the "extended mind" presupposes that you maintain a certain degree of control over and understanding of these tools. When you use a calculator, you know what it is doing -- it simply executes the mathematical operations you specify. But when you use GPT-4, do you truly know what it is doing? Why does it give this answer rather than another? Is its "reasoning" reliable? Even AI researchers cannot fully answer these questions.[19]

From "Using Tools" to "Being Used by Tools"

In The Burnout Society, Byung-Chul Han argues that modern society's power operations have shifted from "discipline" to "achievement": people no longer need external coercion but instead impose demands on themselves, exploit themselves, and optimize themselves.[20] AI tools are the perfect vehicle for this "achievement society" -- they do not force you to use them, but you "voluntarily" adopt them, because not doing so means falling behind your peers and behind the market.

This "voluntary dependency" is particularly difficult to perceive and resist. You do not feel exploited -- after all, you chose to use Copilot yourself; you do not feel controlled -- after all, you can turn it off at any time. But in reality, when your workflow, thought patterns, and even professional identity are deeply entwined with AI tools, "not using" is no longer a viable option.[21]

The Alienation of Creativity

For writers and designers, the problem is even more acute. Code at least admits a notion of "right or wrong" -- it either runs or it does not. But what about creative work? When an article is AI-generated and human-edited, who is the "author" of that article? When an image is generated by AI from a text prompt, who is the "creator" of that image?

In his 1935 essay "The Work of Art in the Age of Mechanical Reproduction," Walter Benjamin introduced the concept of "aura": the unique quality and authority of a work of art derives from its "here and now" presence, from the irreproducible relationship between creator and creation.[22] Mechanical reproduction (photography, printing) had already weakened this aura; AI generation may dissolve it entirely. When anyone can produce a "masterpiece-quality" image with a single prompt, what does "creation" even mean?

V. Platform Feudalism: A New Power Structure

From Capitalism to Feudalism?

Economist Yanis Varoufakis, in Technofeudalism, advances a provocative thesis: we are regressing from capitalism into some form of feudalism.[23] Traditional capitalism depends on market competition -- firms profit by producing better, cheaper goods. But tech giants profit differently: they build "walled gardens," leverage network effects and lock-in effects to prevent users from leaving, and then extract "digital rent."

This analysis applies perfectly to the generative AI space. Companies like OpenAI, Google, and Anthropic have invested billions of dollars training models, erecting formidable barriers to entry.[24] They are not competing in a "market" so much as establishing "fiefdoms" -- you can choose to enter the fiefdom (pay the subscription) or stay outside (fall behind the times). Once inside, you are locked in: your workflows have adapted to a specific tool, your prompt library is optimized for a specific model, and your project history is stored on a specific platform.

The "Primitive Accumulation" of Data

In Capital, Marx described capitalism's "primitive accumulation": capitalists accumulated their initial capital through enclosure movements, colonial plunder, and other means, laying the foundations for industrialization.[25] The rise of generative AI has likewise undergone a "primitive accumulation of data": tech companies, through web crawlers, APIs, and user agreements, have harvested on a massive scale the knowledge and creative works that humanity has accumulated over millennia.

The injustice of this accumulation is plain to see. Wikipedia editors did not consent to have their contributions used to train commercial AI; open-source developers on GitHub did not consent to have their code used to train Copilot;[26] artists did not consent to have their works used to train Stable Diffusion. When this "raw data" is transformed into AI models worth tens of billions of dollars, the original contributors receive nothing.

Who Owns the Intelligence?

This raises a fundamental legal and ethical question: to whom does the "intelligence" of AI models -- if one can call it that -- actually belong? From a legal standpoint, this remains an unsettled question. The ongoing Sarah Andersen et al. v. Stability AI case in the United States,[27] as well as The New York Times v. OpenAI case,[28] are both challenging the legality of AI companies using copyrighted works without authorization. But even if these lawsuits succeed, they can only address the narrow legal category of "copyright" -- the broader issue of "collective intelligence being privatized" remains unanswered.

VI. Resistance and Alternatives: Possible Ways Forward

The Promise and Limitations of Open-Source AI

Facing the monopoly of tech giants, the open-source community is attempting to build alternatives. Meta's LLaMA series, Mistral AI's models, and Stability AI's Stable Diffusion have all been released in open-source or semi-open-source form.[29] These efforts deserve recognition -- they lower the barrier to entry for AI, enabling more people to use, study, and modify AI models.

But open source cannot solve all problems. First, the computing power and data required to train large models remain concentrated in the hands of a few large corporations -- open-source models are often byproducts that these companies "bestow" upon the community. Second, the degree to which open-source models are truly "open" is often limited -- LLaMA's use is subject to licensing restrictions, and its training data has not been disclosed. Finally, even with a fully open-source model, ordinary developers can hardly deploy and run it independently -- they still need to rely on cloud services, and cloud services remain monopolized by a handful of companies.

The Possibility of "Slow Work"

Perhaps what we need is not better tools but a fundamental rethinking of "productivity" itself. Sociologist Hartmut Rosa offers a critique of the "acceleration society": modern society is dominated by the logic of "acceleration" -- we must work faster, consume faster, respond faster.[30] AI tools are the embodiment of this acceleration logic -- their promise is to let you complete tasks faster, but this "faster" only generates more tasks, not more leisure.

The counterpoint to "acceleration" is "resonance" (Resonanz) -- Rosa argues that a good life consists not in completing tasks with maximum efficiency but in establishing meaningful, responsive relationships with the world. From this perspective, "hand-writing" code, "hand-drawing" illustrations, "hand-crafting" articles are not backward modes of production but pathways for establishing a relationship of "resonance" with one's own work.

Redefining "Expertise"

Finally, we need to rethink the meaning of "expertise." Before AI, "expertise" meant mastering specific knowledge and skills -- being able to write code, design layouts, compose copy. After AI, the value of these "hard skills" may decline, since AI can do them too, and faster.[31]

But "expertise" is not just about skills; it is also about judgment, taste, ethics, and contextual understanding. AI can generate code, but it cannot judge whether that code fits the overall architecture of a project; AI can generate copy, but it cannot judge whether the message suits the target audience; AI can generate images, but it cannot judge whether a visual style aligns with a brand's identity. These judgments still require humans -- but only on the condition that humans still retain the capacity to make them, rather than outsourcing everything to machines.

Conclusion: Lucid Dependency

The purpose of this article is not to call for a boycott of AI -- that would be neither realistic nor wise. AI is a powerful technology that genuinely enhances productivity, lowers barriers, and makes previously impossible things possible. The question is not "whether to use AI" but "how to use AI."

What we need is a form of "lucid dependency" -- using AI tools while clearly recognizing what we are depending on, what we are sacrificing, and what we are gaining in return. We must ask ourselves: Is this tool helping me develop my capabilities, or is it replacing them? Is this tool expanding my autonomy, or is it diminishing it? Who ultimately reaps the benefits of this tool?[32]

Centuries ago, sharecroppers perhaps could not imagine a world without landlords. But their struggles -- from peasant uprisings to land reform -- ultimately changed the course of history. Today's "digital sharecroppers" face a power structure that is more covert, more pervasive, and more difficult to resist. But this does not mean resistance is impossible. Recognizing one's predicament is the first step toward resistance.