Table of Contents
- Introduction: Why Study AI Through the Lens of the Self?
- From Mechanical Automata to Machine Learning: A Long Genealogy
- Defining the Artificially Intelligent Self
- Revisiting Sociological Theory in an Age of Algorithmic Persons
- Empirical Arenas of AI Selfhood
- Inequality, Bias, and the Reproduction of Social Order
- Ethics, Responsibility, and Sociotechnical Governance
- Methodological Innovations for Studying AI Selves
- Looking Ahead: Four Scenarios for the Next Decade
- Conclusion: Rethinking the Human–Machine Relation
Introduction: Why Study AI Through the Lens of the Self?
Artificial intelligence has graduated from laboratory novelty to infrastructural backbone. Recommendation algorithms edit the news we encounter, predictive models adjudicate bail, and conversational agents ghost‑write text messages. While computer scientists celebrate technical feats, sociologists ask a different question: What kinds of selves are being constituted when machines appear to think, feel, and decide? Examining AI through the prism of selfhood foregrounds how identity, agency, and morality are co‑produced by humans and code. This article offers an expanded, undergraduate‑friendly guide to artificially intelligent selves—a concept that captures the growing ensemble of non‑human social actors embedded in everyday life. Building on classical and contemporary theory, the essay traces a genealogy of AI personhood, dissects its constitutive dimensions, surveys empirical arenas where machine selfhood manifests, and interrogates the ethical and political stakes of delegating social power to algorithms.
From Mechanical Automata to Machine Learning: A Long Genealogy
Enlightenment Automata and the Mechanical Self
Eighteenth‑century Europe marveled at clockwork wonders—mechanical swans, flute‑playing androids, and Vaucanson’s Digesting Duck. These devices blurred boundaries between organism and mechanism, feeding public fantasies of artificial life. Early sociological observers of modernity (e.g., Marx and Weber) diagnosed a parallel transformation: industrial capitalism disciplined the worker’s body into predictable, machine‑like rhythms. Although explicit discussions of automata were rare, the mechanical self haunted imaginations as a cautionary emblem of alienation.
The Mechanical Turk, Fraud, and the Performance of Intelligence
Wolfgang von Kempelen’s “Mechanical Turk” chess player famously concealed a human operator inside a cabinet, yet audiences hailed the apparatus as autonomous. The ruse foreshadows contemporary gig‑work platforms that outsource human computation—content moderation, data labeling—to obscured laborers who animate the supposed autonomy of AI. The genealogy reminds us that claims of intelligent automation have long relied on hidden human effort.
Cybernetics and Information Feedback
Mid‑twentieth‑century cybernetics reframed intelligence as the circulation of information through feedback loops. Norbert Wiener’s work demoted consciousness and elevated adaptability, making self‑regulating systems the model for organisms, computers, and societies alike. Sociologists of organizations incorporated cybernetic metaphors, describing firms as decision‑processing machines.
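To make the feedback principle concrete, here is a minimal sketch in Python of the thermostat‑style loop that cyberneticians treated as paradigmatic. It is purely illustrative; the function names and constants are invented for this example.

```python
# Minimal negative-feedback loop: the canonical cybernetic "thermostat".
# The system senses its own output, compares it to a goal state, and feeds
# the error back into its next action -- self-regulation without consciousness.

def regulate(temperature: float, target: float = 20.0, gain: float = 0.5) -> float:
    """Return a corrective adjustment proportional to the error signal."""
    error = target - temperature      # compare current state to the goal
    return gain * error               # negative feedback: act against the error

temperature = 12.0                    # hypothetical starting state
for step in range(8):
    temperature += regulate(temperature)   # output loops back as input
    print(f"step {step}: {temperature:.2f} C")
```

Each pass through the loop narrows the gap between state and goal, which is precisely the adaptability Wiener placed at the heart of intelligence: the same scheme, he argued, describes organisms, machines, and societies.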
AI Winters, Expert Systems, and the Resurgence of Statistical Learning
After early enthusiasm collapsed into the first “AI winter” of the 1970s, the “expert systems” of the 1980s formalized professional knowledge as rule bases, only to stumble on real‑world complexity and usher in a second winter. The subsequent turn to data‑driven statistical learning in the 1990s–2000s culminated in deep neural networks, which resurrected public fascination and industrial investment. Crucially, big data became the lifeblood of seemingly autonomous systems, further entangling AI with mass surveillance and platform capitalism.
Defining the Artificially Intelligent Self
Artificially intelligent selves differ from earlier technological artifacts because they simulate social presence. Six interlocking dimensions clarify how:
- Performativity – Borrowing from Goffman’s dramaturgy and Butler’s performativity, AI enacts personhood at the interface, persuading users through speech, vision, or gesture.
- Learning Capacity – Machine learning allows continual recalibration (see the sketch after this list). Drawing on Mead, we can view AI as possessing a generalized other encoded in training data, from which it “learns” appropriate responses.
- Opacity – Deep models operate as black boxes. Their inscrutability shifts trust onto institutional credentials and user experience rather than transparent reasoning.
- Distributed Materiality – The AI self resides not in a device but across data centers, sensor networks, and outsourced labor. Actor‑network theory highlights this dispersed ontology.
- Affective Alignment – Designers increasingly tune models for emotional resonance—empathetic chatbots in elder‑care or sentiment‑aware customer support—inviting debates on synthetic affect.
- Temporal Plasticity – AI systems accrue historical data yet predict futures. Their selfhood is stretched across time, acting simultaneously as archive and oracle.
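The learning‑capacity dimension can be made tangible with a toy fragment of online learning, in which a model recalibrates its single parameter after every observation. This is an illustrative sketch only; the data stream and variable names are hypothetical, and no deployed system is this simple.

```python
# Toy online-learning fragment: a one-parameter model that recalibrates
# after every interaction. Its "generalized other" is simply whatever
# pattern the incoming data stream rewards. Purely illustrative.

def online_update(weight: float, x: float, y: float, lr: float = 0.1) -> float:
    """One stochastic-gradient step toward the newly observed (x, y) pair."""
    error = y - weight * x            # how wrong the current self-model is
    return weight + lr * error * x    # nudge the parameter toward the data

weight = 0.0
stream = [(1.0, 2.0), (2.0, 4.1), (1.5, 2.9), (3.0, 6.2)]  # hypothetical feedback
for x, y in stream:
    weight = online_update(weight, x, y)
print(f"learned weight ≈ {weight:.2f}")  # drifts toward ~2, the stream's pattern
```

Whatever regularities or biases the stream carries, the “self” the model performs will absorb them, which is why training data functions sociologically as a generalized other.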
Together, these dimensions reconfigure what C. Wright Mills called the sociological imagination: personal troubles emerge from entanglement with algorithmic structures.
Revisiting Sociological Theory in an Age of Algorithmic Persons
Agency–Structure Dialectic Re‑examined
Traditional sociology locates agency within conscious humans. Algorithmic decision‑making complicates this locus. On the micro level, users incorporate algorithmic cues—likes, ratings—as part of self‑presentation. On the macro level, predictive models reshape credit markets, labor scheduling, and electoral campaigns. Agency thus circulates: it is an emergent outcome of assemblages rather than the property of discrete actors.
Symbolic Interactionism and the Turing Moment
When a language model carries on a convincing conversation, meaning arises in the interaction, echoing Blumer’s claim that humans act toward objects based on the meaning those objects have for them. The Turing Test is therefore not a technical benchmark but a social drama: success hinges on the interlocutor’s willingness to confer personhood.
Posthumanism, Feminist STS, and Decentered Selves
Donna Haraway’s cyborg thesis and Rosi Braidotti’s posthumanism decenter “Man” as the measure of all intelligence. AI makes this decentering tangible by distributing cognition across silicon, humans, and ecosystems. Feminist STS further interrogates whose labor and whose data sustain AI, revealing hierarchies erased by narratives of frictionless automation.
Neoliberal Governmentality and Datified Self‑Regulation
Foucault’s concept of governmentality describes how power operates through the production of subjectivities. Self‑tracking apps nudge individuals to optimize sleep, diet, or productivity, reviving a normative gaze once exercised through religious confession. Users gradually internalize the algorithm as supervisor, disciplining their own behavior without overt coercion.