Formation, Authorship, and Responsibility in the Age of Generative AI

Abstract
Generative artificial intelligence marks a structural shift in human cognition. Earlier technologies amplified physical strength, precision, and reach; AI amplifies — and can plausibly simulate — intellectual production. The central ethical issue is therefore not automation itself, but substitution: the decoupling of artifact production from internal formation. This essay argues that competence must precede amplification if authorship, responsibility, and truth are to retain meaning. Drawing on the epistemic lessons of probabilistic thought (developed in my 2019 essay on quantum impacts in education), and grounded in lived experience of building, failing, correcting, and carrying institutional responsibility, the paper examines AI through seven connected lenses: technological thresholds, the difference between amplification and substitution, the formative role of fundamentals, the problem of collapse without comprehension, the practical redesign required for schools, the long view of human agency, and the moral weight of claim. The conclusion is neither alarmist nor celebratory: AI may accelerate mastery. It must not fabricate it. Education’s burden is not diminished by AI. It is intensified.
Preface: Continuity, Not Reaction
In 2019 I published Waves, Particles, Cats, and Captain Kirk: The Quantum Impact on Social Thought in Education. That essay began with a simple observation: when science changes, it changes more than science. It changes the scaffolding of thought. The transition from classical determinism to probabilistic models did not merely revise physics; it revised certainty. The world became less like a clock and more like a field — not chaotic, but conditional. Not unknowable, but no longer obedient to simplistic certainty.
This essay is not a detour from that inquiry. It is its continuation.
Generative AI has arrived as a cognitive technology — a tool that does not simply extend the hand but extends, and can convincingly imitate, the products of the mind. It collapses probabilistic patterns into coherent outputs: essays, explanations, strategies, designs, even tones of voice. It does so with speed that shortens the distance between intention and artifact to something close to a single gesture.
Thrilling. But ethically complicated.
In the spirit of the argument developed here, I am using AI in the drafting of this essay intentionally and transparently. Not to replace my thinking. Not to generate ideas I do not possess. Not to pretend at a competence I have not earned. Rather, I am using it as an amplifier — a real-time instrument that accelerates articulation so that thoughts shaped by lived experience can move and connect at a speed I could not achieve alone.
That confidence does not come from the tool. It comes from formation. It comes from having built things that could fail. From carrying responsibility when they did. From knowing — not abstractly, but in the body — the difference between fluency and understanding.
And that distinction is the point.
It is also, quietly, the burden of education. Because our students are walking into a world where simulation will be easy. The question will not be whether they can produce output. The question will be whether they can stand behind it.
I. Thresholds of Agency: From Stone to Spark to Symbol
There was a moment — and it was likely unremarkable to everyone except the person who lived it — when someone first cracked open a coconut with a rock.
It was not a revolution in the modern sense. No press release. No keynote. But it was a threshold. It implied something new: matter yields to intention. The world can be acted upon, not merely endured. Resistance can be leveraged. A boundary in the relationship between mind and environment shifted.
Then came fire. Not as spectacle but as control: spark preserved, heat sustained, night reduced. It extended time. It extended community. It extended planning. Then came abstraction: the button, the lever, the switch. A small movement initiating a larger chain of events. Intention encoded into a mechanism.
These moments matter because they reveal a pattern. Tools do not merely make us faster or stronger. They rearrange the map of possibility. They expand agency.
But they also share a constraint: they do not erase reality. The rock still requires force. The spark still burns. The crane still obeys physics. The bridge still collapses if the engineer miscalculates load.
In other words, competence precedes amplification.
Tools extend capability, but they do not substitute for understanding. They do not negotiate gravity. They do not grant immunity from consequence.

AI enters as a tool that appears to break the pattern: not because it breaks reality, but because it can break the visible link between formation and artifact. It produces outputs that look like the products of competence, even when competence is absent. This is why AI is not merely “another tool.” It is a tool of a different category. It operates in the realm of symbolic cognition. It manipulates language, structure, plausibility. It generates the appearance of understanding.
That is not evil. It is simply new. And novelty always invites confusion.
II. Amplification and Substitution: The Ethical Hinge
If we want to talk seriously about AI, we need a clean distinction. Otherwise the conversation becomes a shouting match between two predictable camps: “This changes everything!” versus “This changes nothing!” Both are wrong. And both are usually loud.
The distinction is between amplification and substitution.
Amplification is what tools have always done at their best. A trained architect uses CAD to accelerate drafting, but structural understanding remains internal. A skilled teacher uses digital tools to communicate clearly, but pedagogy remains human judgment. A craftsman uses a table saw to cut with precision, but the design remains intentional.
Substitution is different. Substitution occurs when the tool produces outputs that exceed the user’s internal capacity — when the artifact can be delivered without the architecture of understanding that would normally be required to produce it.
AI makes substitution not only possible but tempting, because its outputs are fluent. They are plausible. They often sound correct even when they are wrong, and even when they are correct they may still be unowned.
And here is the deeper problem: substitution can be invisible to the user. If I do not have the conceptual structure to evaluate the output, I may be impressed by its coherence and assume comprehension has occurred.
This is the dangerous comfort of plausibility.
The risk is not merely that AI produces errors. Errors are manageable. The risk is that AI produces convincing artifacts without necessarily producing formed individuals. AI is ethically disruptive not because it automates tasks, but because it can simulate competence convincingly — and because simulation can be mistaken for mastery.
III. Fundamentals and the Quiet Work of Formation
In an earlier professional setting, I sat in a conversation where the argument was made that handwriting and penmanship no longer needed to be taught. The iPad had arrived. Digital tools had replaced notebooks. Autocorrect removed spelling errors. Efficiency improved. Why devote precious time to something “obsolete”?
It was a reasonable argument — if the purpose of education is output alone.
But handwriting is not merely about legibility. It is about sequencing thought. It is about attention. It is about fine motor coordination linked to memory formation. It is about the body participating in cognition. It is about slowing down enough for meaning to settle.
More broadly, fundamentals are not primarily functional. They are formative.
Spelling matters not because the world ends when you misspell “definitely” (though it does reveal something when you misspell it three times in the same paragraph). It matters because spelling trains pattern recognition and disciplined attention. Mental arithmetic matters not because calculators are scarce, but because numerical intuition supports reasoning. Memorization matters not because retrieval is hard, but because internal knowledge changes the way you perceive and connect ideas.
Foundational skills build cognitive architecture. And architecture matters most when conditions change.
Modern life has been moving steadily toward removing friction. Shortcuts multiply. Tools smooth the surface. In the wrong hands, efficiency becomes a philosophy — and eventually an ethic. We begin to treat struggle as unnecessary rather than formative.
AI is the natural culmination of that trend. It does not merely help you write. It can write. It does not merely help you plan. It can plan. It does not merely help you explain. It can explain.
So the question reappears with new urgency: if AI can do these things, do the foundational struggles still matter?
My answer is yes: the struggles still matter. AI does not eliminate the need for them.
It intensifies it. When friction disappears externally, structure must be cultivated internally. Otherwise the mind becomes a curator of generated outputs rather than a builder of understanding.
And builders survive what curators cannot: pressure.
IV. Collapse Without Cost: Why Fluency Isn’t Understanding
This is where the probabilistic lens matters.
In quantum mechanics, the wave function represents possibility: not a casual maybe, but a structured distribution of probabilities. Measurement collapses that distribution into a single realized, particle-like state.
Collapse produces an outcome, but it does not abolish the uncertainty beneath it. The underlying conditions still matter. Probability still governs. Reality remains deeper than our immediate observation.
AI performs a similar operation in language. At each step it computes a probability distribution over possible continuations and collapses that distribution into a single choice; repeated, the process yields coherent output. The output feels resolved. It feels finished. It feels like comprehension.
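To make the analogy concrete, here is a minimal sketch in Python. The vocabulary is a toy and the scores are invented; real systems perform this over tens of thousands of tokens, but the operation has the same shape. Note what is absent: nothing in the procedure knows, checks, or understands anything.

```python
import math
import random

def softmax(scores):
    """Turn raw scores into a probability distribution (the 'wave')."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates after a prompt such as "The bridge ..."
vocab = ["holds", "collapses", "sings", "sways"]
scores = [2.1, 1.3, -0.5, 0.4]  # invented logits, purely for illustration

probs = softmax(scores)

# Sampling collapses the distribution into one realized token (the "particle").
choice = random.choices(vocab, weights=probs, k=1)[0]

for token, p in zip(vocab, probs):
    print(f"{token:10s} p = {p:.2f}")
print("collapsed to:", choice)
```

Run it a few times and the collapse lands differently. The smoothness of any single output conceals the dice roll underneath it.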
But collapse is not comprehension.
Comprehension has criteria. It transfers. It adapts. It defends itself under interrogation. It survives new contexts. It can be reconstructed and explained in one’s own words. It can be corrected because it is owned.
A generated paragraph may be correct. Yet the person reading it may not be changed by it.
The presence of an answer does not mean understanding has been built. It means an answer exists.
This creates an epistemic temptation: premature certainty. The smoothness of the output quiets inquiry. Fluency becomes evidence. The mind stops asking whether it could have built the argument itself.
In my earlier writing, I pushed back against the cultural drift toward shortcut thinking — not because speed is evil, but because speed can prevent formation. Ideas need friction. They need resistance. They need time in the mind. Without that, we consume coherence instead of constructing it.
AI accelerates collapse. It shortens the distance between question and plausible answer to nearly zero. That can be useful — but it can also train the mind away from the very work that makes it capable. One can read about a crevasse rescue. One can watch a perfectly edited video. One can produce, with AI, a flawless written explanation. But when the rope goes tight, when hands are cold, when light is fading, explanation is not enough.
Formation is what remains when fluency fails.
V. What This Means for Schools: From Product to Capacity
If AI can generate essays, then grading essays alone is no longer a reliable measure of learning. If AI can generate code, then evaluating code alone is insufficient. If AI can summarize texts, then asking for summaries tells us little about comprehension.
This forces a shift.
Education must move from product validation to capacity validation.
The question becomes: what can the student do without the scaffold? Not forever without tools — that would be silly — but enough to demonstrate that the tool is amplifying competence rather than substituting for it. This is not about banning AI. It is about designing learning environments where formation is visible.
A student should be able to explain their argument aloud. They should be able to answer questions about why they chose one structure over another. They should be able to adapt the reasoning when a condition changes. They should be able to critique an AI-generated paragraph — not because critique is fashionable, but because critique is evidence of internal structure.
Schools will have to redesign assessment accordingly. Not through lists of rules, but through a deeper return to what assessment was always supposed to do: reveal thought, not polish. This has practical implications, yes. But it is not merely procedural. It is philosophical. It is a return to seriousness.
It also requires AI literacy. Students must understand that generative systems are probabilistic predictors, not knowing minds. They must learn where these systems are strong and where they hallucinate. They must learn that plausible does not mean true, and that truth requires verification.
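That literacy point can be made with a deliberately crude sketch. The “model” below is nothing but a lookup table of invented bigram counts, yet it produces its continuation with the same mechanical confidence whether the result is true or false. The counts, like the sentence they generate, are fabricated for the example.

```python
# A deliberately crude "language model": a lookup table of invented
# bigram counts. It continues a sentence with whatever was most
# frequent in its (fabricated) training data, true or not.
counts = {
    ("the", "moon"): {"is": 90, "orbits": 10},
    ("moon", "is"): {"made": 70, "bright": 30},
    ("is", "made"): {"of": 100},
    ("made", "of"): {"cheese": 80, "rock": 20},  # frequency, not fact
}

def most_likely_next(w1, w2):
    """Return the highest-count continuation, or None at a dead end."""
    options = counts.get((w1, w2), {})
    return max(options, key=options.get) if options else None

sentence = ["the", "moon", "is"]
while True:
    nxt = most_likely_next(sentence[-2], sentence[-1])
    if nxt is None:
        break
    sentence.append(nxt)

print(" ".join(sentence))  # -> "the moon is made of cheese"
```

The point is not that real models are this crude; it is that frequency and truth are different axes, and no amount of fluency moves an output from one to the other. Verification is a separate act.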
In short: AI forces education to become more honest.
And honesty is uncomfortable. It always has been.
VI. The Essential Core: Agency, Purpose, and the ATOMIC Individual
There is a deeper question behind curriculum, assessment, and technology policies: what kind of person are we trying to form?
I have often returned to the idea that education should cultivate individuals capable of agency and purpose — not agency as mere freedom, and not purpose as a slogan, but agency disciplined by understanding and purpose grounded in responsibility.
This aligns with a simple truth: power without formation is volatility.
AI increases power.
If formation does not increase accordingly, volatility rises. That volatility shows up as dependency, overconfidence, shallow certainty, and moral drift. It shows up as the inability to navigate ambiguity without outsourcing thinking. The framework I have used elsewhere — the development of ATOMIC individuals (adjusted, tempered, optimized, mature, independent, capable) — maps cleanly onto the AI problem.
Adjusted: able to recalibrate when conditions shift, not cling to generated certainty.
Tempered: restrained, not intoxicated by speed and polish.
Optimized: able to use tools efficiently without being governed by them.
Mature: capable of owning mistakes and revising truthfully.
Independent: able to think, not merely select.
Capable: able to act responsibly under pressure, not merely perform in stable conditions.
These are not skills that can be generated.
They are traits that are formed.
This is why the argument about handwriting, memorization, or spelling is not really about handwriting. It is about formation. It is about building the internal structures that allow a person to carry responsibility. If AI becomes a shortcut around those structures, we will produce articulate fragility. And articulate fragility is one of the most dangerous things a society can normalize.
VII. Authorship, Integrity, and the Moral Weight of Claim
At the center of this essay is a simple ethical line. If I cannot explain it, reproduce it, or defend it independently of the tool, it is not yet mine. That line is not about pride. It is about responsibility.
To claim authorship is to accept consequences. In engineering, consequence is physical. In leadership, consequence is human. In education, consequence is developmental. In scholarship, consequence is intellectual.
AI blurs the boundary between what is produced and what is owned. The artifact feels complete. It is tempting to identify with it. The social reward for polish is immediate. The cost of unearned claim is delayed.
Delayed costs are the ones we ignore most easily. But they do not disappear. They accumulate as fragility. As dependence. As the inability to reason without scaffolding. As diminished trust. As the quiet erosion of credibility.
Integrity is not the absence of tool use. Integrity is honest ownership.
This is why my personal policy matters. If I cannot do something myself — or at least understand it deeply enough to defend it — then attaching my name to it “through and through” would be a breach. The breach is not using AI. The breach is pretending that formation has occurred when it has not. AI forces each of us to become more honest about what we know. Or more willing to perform dishonesty fluently.
Those are the two paths.
Conclusion: The Order Cannot Be Reversed
Human progress has always involved tools. From stone to spark to symbol, we have expanded agency by amplifying capability. But the ethical order has remained stable: formation precedes amplification. When we reverse that order — when we amplify before we form — we produce confidence without competence, fluency without comprehension, and performance without depth.
That may look successful for a time. It will not hold under pressure.

AI is here. It will grow. It will become more convincing. It will become more embedded. It will make simulation easier and detection harder. The answer is not panic. It is formation.
Schools must protect formation. Teachers must model integrity. Students must learn the difference between amplification and substitution, and they must be taught that truth cannot be generated into existence. It must be tested, defended, and owned.
The future will not belong to those who generate the most polished artifacts. It will belong to those who can stand behind what they produce. And that means, in the end, that the most important technology in education remains unchanged: the formed human being.
(And yes, we can keep Captain Kirk on standby, but he is not getting us out of this one.)
Reference
Culos, G. (2019). Waves, Particles, Cats, and Captain Kirk: The Quantum Impact on Social Thought in Education. Values and Meanings, Scientific Foreign Countries (Ценности и смыслы, Научное зарубежье), no. 3 (61), 138–155.