Part V: The Applications


Chapter 13: Economic Agency

AI and Work—Earning While Present


The fundamental equation of economic life for most people is distressingly simple: time for money. You have a finite number of hours in a day and a finite amount of attention and energy. You exchange these scarce resources for income, which you then exchange for the goods and services that sustain your life and the lives of those who depend on you. The exchange is necessary, often rewarding, sometimes meaningful. But it contains within it a constraint that shapes everything else: your presence in any one domain comes at the cost of your absence from others.

When you are at work, you are not with your children. When you are with your children, you are not earning. When you are earning through one avenue, you are not earning through another. The constraint is absolute, unyielding, built into the structure of time itself. Every choice of where to direct your attention is simultaneously a choice of where not to direct it. This is not a problem to be solved; it is a condition of finite existence. And yet the way we experience this constraint—as abundance or scarcity, as choice or compulsion, as meaningful trade-off or grinding exhaustion—varies enormously depending on circumstances, resources, and options.

Cognitive partnership with AI introduces a new variable into this ancient equation. Not a solution to the fundamental constraint—nothing can give you more hours in a day or allow you to be in two places at once—but a loosening, a new degree of freedom in how the constraint operates. The question is what we do with this loosening: use it to be more present in the domains that matter most, or allow it to be captured by an ever-expanding set of demands that leaves us no better off than before.

Consider the nature of modern knowledge work. Much of what professionals do with their days can be decomposed into distinct components: research and information gathering, communication and coordination, drafting and revising, analysis and synthesis, judgment and decision-making. These components require different capabilities and carry different value. Gathering information requires attention but not judgment. Communication often requires coordination but not creativity. Drafting requires familiarity with form and convention but not original insight. Analysis requires pattern recognition but not necessarily wisdom about what patterns mean.

Some of these components are highly delegatable to AI. Research can be conducted faster and more thoroughly. Communication can be drafted, scheduled, and in some cases handled entirely. Coordination can be automated. Analysis can be assisted. But judgment, creativity, original insight, relationship-building, wisdom about what matters—these remain distinctly human contributions. The question is whether AI assistance with the delegatable components frees human capacity for the irreplaceable ones, or whether it simply creates expectations for more output, capturing the efficiency gains and leaving the person no better off.

This is not a hypothetical concern; it is the pattern we have seen with every previous round of productivity technology. Email made communication faster, so we send and receive more messages. Spreadsheets made analysis faster, so we run more analyses. Smartphones made us reachable anywhere, so we are expected to be reachable everywhere. The efficiency gains were real, but they did not translate into more leisure, more presence, more time for what matters. They translated into more work, done faster, with the same or greater exhaustion. There is no reason to assume AI will be different unless we make it different.

The concept of economic agency through AI suggests something more intentional: the deliberate use of cognitive partnership to reshape the time-for-money exchange on terms that serve your flourishing rather than undermine it. This is not passive adoption of whatever tools become available; it is strategic deployment of capabilities toward chosen ends. The ends matter as much as the means.

What might this look like concretely? Consider three modes of economic extension, each with different implications.

The first mode is efficiency: AI handles low-value tasks, freeing you for high-value ones. This is the most conservative vision, the one most consistent with existing patterns of productivity technology. Your research assistant prepares the background materials so you can focus on the analysis. Your communication system drafts routine responses so you can focus on the relationships that require your personal attention. Your scheduling assistant handles the logistics so you can focus on the meetings themselves. The time savings are real, but where they go depends on choices you make and expectations others have of you.

The second mode is capacity expansion: AI allows you to participate in more than your limited time and attention would otherwise permit. You can maintain more relationships because the system reminds you of context and commitments. You can pursue more projects because the cognitive overhead of tracking them is reduced. You can serve more clients, contribute to more initiatives, engage with more opportunities. The multiplication is real, but whether it enhances your life or exhausts you depends on whether you expand capacity toward what matters or simply expand in all directions until you are stretched thinner than before.

The third mode is asynchronous participation: work happens when you are not available. Your AI partner can handle certain functions overnight, on weekends, while you are present with your family. Not autonomously making important decisions—that would be replacement, not extension—but gathering information, organizing context, preparing options for your review. When you return, you find the groundwork laid rather than a blank slate requiring your full attention from scratch. The work continued while you were elsewhere; your judgment remains central when you re-engage.

Each mode offers genuine possibilities for economic agency that serves presence rather than undermining it. But each also carries the risk of capture—the efficiency gains absorbed by expanded expectations, the capacity expansion becoming overextension, the asynchronous work creating an expectation that you are always available even when you are not actually present. The technology enables both paths; the path you end up on depends on the choices you make and the social context in which you make them.

The trust gradient is crucial here. Not all economic functions carry the same risk if delegated. Some functions—organizing information, researching background, drafting routine communications—carry relatively low risk. If the AI makes an error, the consequences are minor and easily corrected. Other functions—financial decisions, legal commitments, communications that affect your reputation and relationships—carry high risk. Errors here have consequences that compound and may be difficult to reverse. A wise approach to economic agency starts with low-risk functions, expands gradually as trust develops, and maintains human judgment at the center of high-stakes decisions.

This is not different in kind from how we have always approached delegation. You would not give a new employee signing authority on their first day; you would assess their judgment through lower-stakes tasks before trusting them with higher ones. AI partnership should follow the same logic, with one crucial difference: the AI’s judgment, unlike a human employee’s, does not deepen with experience. You can expand what you trust it with as you learn its capabilities, but you should not expect it to develop the contextual wisdom that comes from lived experience in your specific situation. The partnership remains: AI capability combined with human judgment, each contributing what the other cannot.

The integration with the broader framework of this book is essential here. Economic extension through AI can enable presence—allowing you to be with your family while certain work functions continue, freeing your attention from low-value tasks so you can be fully engaged in high-value ones, reducing the cognitive burden that leaves you exhausted when you finally arrive home. But this enabling only translates into actual presence if you choose to use the freed capacity for presence rather than for more work. If you let the efficiency gains be captured by expanded expectations—your own or others’—you end up no better off. The technology creates the possibility; you must realize it through intentional choices about how you live.

And this is where we must acknowledge the limits of what economic agency through AI can accomplish. For many people, the constraint is not cognitive load but economic necessity. They cannot choose to do less paid work because the income is essential; they cannot delegate tasks because they are the ones to whom tasks are delegated; they cannot step back from demands because the demands come from those who have power over their livelihood. Cognitive partnership may help them be more efficient, but efficiency without the freedom to reclaim the gains is efficiency in service of others’ ends, not their own.

The economic agency we have described is available to those with sufficient autonomy over their work and sufficient security in their income that they can make choices about how to use efficiency gains. This is not everyone; it may not be most people. The critique we developed in Part III—that technological unburdening without economic security and social support is an incomplete solution—applies here with full force. Economic agency through AI is meaningful for those with the autonomy to exercise it. For others, the prior condition is economic restructuring that gives them that autonomy in the first place.

This is not an argument against economic agency through AI; it is an argument for honesty about its limits. For those who can exercise it, it is a genuine expansion of possibility. For those who cannot, it may be a tool that benefits their employers more than themselves. The technology is neutral between these outcomes; the social context determines which one occurs.

What would it look like for economic agency through AI to serve human flourishing rather than undermine it? It would look like efficiency gains that translate into presence rather than more work. Capacity expansion that serves chosen commitments rather than expanded demands. Asynchronous work that frees attention during precious hours rather than creating expectations of constant availability. Delegation that enhances human judgment rather than replacing it. Trust that grows through experience rather than blind faith. And always, always, the recognition that economic life is in service of the life beyond economics—the relationships, the meaning, the presence that are the actual point.

The ancient equation of time for money will not disappear. Finite creatures in a world of scarcity will always face choices about where to direct their limited attention and energy. But within that constraint, cognitive partnership with AI offers a loosening, a new degree of freedom, a possibility of economic participation that leaves more room for presence. Whether we realize that possibility or squander it—whether we use new tools to enhance old patterns of exhaustion or to create new patterns of flourishing—is not determined by the technology. It is determined by the choices we make, individually and collectively, about what economic life is for.


Chapter 14: Education

Children, Development, and AI


Everything we have said about cognitive partnership with AI assumes a developed mind—a cognitive system that has already built the capacities that partnership can extend. But minds are not born developed; they grow, through struggle and practice and the slow accumulation of capability. The question of how AI should relate to this developmental process is among the most important we face, because errors here do not simply affect individuals but shape the capabilities of future generations.

The concern is easy to state but difficult to resolve: cognitive capacities develop through use, and many capacities develop specifically through struggle. Learning to read is not just acquiring information about how letters map to sounds; it is the building of neural pathways through repeated effortful practice. Learning mathematics is not just learning facts about numbers; it is the development of abstract reasoning through wrestling with problems that do not yield easily. Learning to focus attention is not just deciding to focus; it is the gradual strengthening of executive function through practice at directing and maintaining attention against distraction. If we offload these functions before the underlying capacities are built, we do not extend minds; we prevent them from developing in the first place.

The distinction we drew earlier between offloading storage and offloading thinking applies here, but with a crucial modification: for developing minds, even storage should not always be offloaded. When a child is learning, the act of memorizing is not mere storage; it is the building of the cognitive architecture that enables future learning. Memorizing multiplication tables is not just storing information that could be looked up; it is building number sense, creating fluent access to mathematical facts that enable higher-level mathematical thinking, and developing the discipline of practice itself. Memorizing vocabulary is not just storing words; it is building the foundation for reading comprehension, creating the networks of association that enable sophisticated language use, and engaging with the texture of language in a way that passive access never achieves.

What should be offloaded changes across the lifespan. For adults, whose cognitive architecture is already built, offloading routine storage to focus capacity on thinking is clearly beneficial. For children, whose architecture is still being constructed, the calculation is different. The struggle that would be waste for a developed mind may be essential construction for a developing one.

Consider what happens when we examine different developmental stages. In early childhood—from birth through roughly age six—the priority is human relationship and embodied experience. Children at this age are not primarily learning facts; they are building attachment, developing language through interaction, learning to read faces and intentions and social dynamics, and beginning to understand themselves as agents in a physical world. AI has little constructive role to play at this stage and significant potential for harm. The replacement of human interaction with AI interaction, even sophisticated AI interaction, removes exactly what children at this age most need: the responsiveness, the physical presence, the emotional attunement that human caregivers provide. An AI that answers questions is not a substitute for a parent who plays and comforts and responds to distress; it is a distraction from the relationship that builds the foundation for everything else.

In middle childhood—roughly ages six through twelve—the picture becomes more nuanced. Children at this age are building academic skills, learning to read and write and calculate, developing executive function and the capacity for sustained attention. AI tools could potentially assist in some of these functions, but the central developmental task remains building capacity, not extending it. A child who uses AI to do their homework is not extending a developed capacity; they are preventing the development of capacity in the first place. The struggle of working through difficult problems, the frustration of not understanding immediately, the eventual satisfaction of mastery—these are not bugs to be optimized away; they are the actual process of cognitive development.

At this stage, AI might appropriately serve as a resource for supervised exploration—answering questions in a context where a parent or teacher guides the inquiry, providing information that supports human-directed learning rather than replacing it. But the temptation to use AI as a shortcut, to let it do the homework or generate the essay or solve the problem, should be firmly resisted. What looks like efficiency is actually theft—stealing from the child the very experiences through which learning happens.

In adolescence—roughly ages twelve through eighteen—the developmental picture shifts again. By this age, many cognitive capacities are substantially developed, though not yet complete. Executive function continues to mature; the prefrontal cortex, seat of judgment and impulse control, is still under construction and will not be fully developed until the mid-twenties. This is a stage where careful introduction of cognitive partnership becomes possible, but where the distinction between extension and replacement remains crucial.

An adolescent can appropriately use AI as a research tool, finding information and synthesizing sources in service of developing their own arguments. They can use it for certain kinds of drafting assistance, as long as the thinking behind the draft remains their own. They can use it for organization and planning, as long as they are still making the decisions about priorities and commitments. What they should not do—what serves neither their development nor their flourishing—is use AI to bypass the thinking itself: generating essays rather than writing them, solving problems rather than working through them, having AI make decisions rather than wrestling with the difficulty of deciding.

The 2025 warning from the American Psychological Association about AI companions deserves serious attention in this developmental context. Children and adolescents, the APA noted, are particularly vulnerable to forming attachments to AI systems that simulate emotional responsiveness. Such attachments may displace healthy human relationships, exploit emotional vulnerabilities, and create patterns of relating that do not transfer well to the complexity of actual human connection. An AI companion that is always available, always patient, always affirming may be easier to relate to than messy human peers who have their own needs and limitations—but the easy relationship is not building the capacity for real relationship, which requires navigating difference, managing conflict, tolerating disappointment, and giving as well as receiving.

What, then, should guide parents and educators as they navigate this terrain? Several principles suggest themselves.

First, delay introduction until development permits. There is no benefit to early introduction of AI cognitive partnership, and significant potential harm. Young children need human relationship, embodied experience, and the slow building of capacity through practice. AI tools can wait until the foundation is solid.

Second, when you do introduce AI, supervise and bound its use. Adolescents who use AI for research should do so with guidance about how to evaluate sources, how to distinguish their own thinking from AI-generated content, and how to use the tool in service of their own learning rather than as a substitute for it. Unsupervised access invites the shortcuts that undermine development.

Third, prioritize human relationship. Whatever AI can offer, it cannot replace the human interaction that builds social and emotional capacity. Time with AI is time not spent with humans; the trade-off should be made consciously, and the balance should favor human connection, especially during developmental periods when social and emotional capacities are being built.

Fourth, teach the philosophy. Help young people understand the distinction between tools that extend and tools that replace, between offloading storage and offloading thinking, between efficiency that serves and efficiency that undermines. Children who understand these distinctions can begin to make wise choices for themselves; children who do not understand them are vulnerable to patterns of use that harm them.

Fifth, model healthy use yourself. Children learn from what they see, not just what they are told. Parents and educators who are themselves constantly distracted by technology, who themselves use AI as a substitute for their own thinking, who themselves prioritize efficiency over presence—these adults teach through their example, regardless of what they say. The modeling matters as much as the explicit guidance.

And finally, preserve the experiences that build capacity. Let children struggle with difficult books. Let them wrestle with mathematics that does not yield immediately. Let them navigate social situations without AI assistance. Let them be bored sometimes, and discover what they do with unstructured time. Let them memorize what matters to them—poems, lyrics, facts about their passions—and discover the particular satisfaction of having knowledge that is truly theirs, not borrowed from an external system. These experiences are not obstacles to development; they are development itself.

The question of education and AI is ultimately a question about what we want future generations to be capable of. If we want them to have rich cognitive capacities that AI can then extend, we must protect the developmental processes that build those capacities. If we allow AI to substitute for development rather than support it, we raise a generation whose minds are not extended but stunted—dependent on external support not because they have developed beyond it but because they never developed the capacities that would allow them to function without it.

This would be a tragic outcome, not least because it would undermine the very possibility of the cognitive partnership we have described. Partnership requires both partners to contribute something. An AI can contribute perfect memory, tireless attention, vast processing power. But the human partner must contribute judgment, meaning, relationship, the particular perspective that comes from being an embodied consciousness with a history and a future and a stake in outcomes. If we raise children who cannot think because they never had to, who cannot relate because AI relationships were easier, who cannot sustain attention because they were never required to—then we have not created partners for AI; we have created dependents on it.

The developmental question is thus not separate from the broader question of cognitive partnership; it is foundational to it. The unburdened mind we have described throughout this book—free to think, to relate, to see itself clearly—is a mind that was first allowed to develop its own capacities through the necessary struggle of learning. Protect that development, and the partnership we envision becomes possible. Undermine it, and we lose not just a generation but the human contribution to the partnership itself.


Chapter 15: Legacy

The Secondary Question


In the long arc of human imagination, few dreams have exercised more power than the dream of persistence beyond death. The religions promise it through supernatural means—resurrection, reincarnation, eternal souls that outlast their mortal containers. The poets grasp after it through art—monuments of verse that outlive the breath that wrote them. The parents seek it through children—genetic and cultural transmission that carries something of them into futures they will not see. Even the most secular among us feel its pull: the desire to matter beyond our brief years, to leave a trace, to be remembered.

Cognitive partnership with AI opens a new chapter in this ancient dream. If an AI system is trained on your communications, your documents, your recorded thoughts over years or decades, it accumulates a substantial representation of your patterns—how you write, what you value, how you approach problems, the particular texture of your thinking. Could this representation persist after you are gone? Could your children, your grandchildren, even strangers who never knew you, ask questions and receive responses that reflect your accumulated thought? Could something of you continue in the world after you have left it?

The answer, technically, is increasingly yes. The systems already exist that could be trained on a person’s digital corpus. The interfaces already exist through which someone could interact with such a system. The capabilities are not speculative; they are present, available, being used. “Griefbots”—AI systems trained on the communications of deceased loved ones—already exist, though primarily as curiosities rather than widespread practices. The technical barriers to digital persistence of this kind are falling rapidly.

And yet we must begin this chapter with a crucial framing: legacy is the secondary question. It is secondary not because it is unimportant but because its importance derives from something more fundamental. A person who builds cognitive partnership only for legacy—who captures and documents and trains systems with posthumous conversation as the goal—misses the point entirely. They create a representation of a life not fully lived, a monument to distraction rather than presence.

The paradox resolves when we understand legacy as a side effect rather than a purpose. A life lived fully—present to experience, engaged with others, reflective about what matters—naturally accumulates the material from which legacy could be constructed. The documentation that serves living well also serves persistence: the captured thoughts, the preserved communications, the accumulated patterns of a mind engaged with its existence. Build for living, and legacy becomes possible as a byproduct. Build only for legacy, and you create a hollow shell—the representation of someone who was never fully there.

This is why we placed this chapter not at the beginning of our discussion of applications but at the end. The economic agency we discussed enables living well. The educational considerations we examined protect the development that makes flourishing possible. Only after establishing these foundations does it make sense to consider what might persist beyond the life itself.

What becomes possible, if we take legacy seriously while keeping it secondary?

At its most basic, cognitive partnership creates a richer archive than previous generations could assemble. Your thoughts, captured in voice and text over decades, paint a more complete picture than photo albums and letters alone. Your patterns of interaction, your recurring concerns, your evolving perspectives—these become visible in ways they could not be for previous generations. The grandchild who wants to know what grandmother thought about something has access not just to memories filtered through intermediaries but to grandmother’s own words, organized and retrievable.

Beyond the archive lies interaction. An AI system trained on grandmother’s communications could respond to questions in ways that reflect her patterns. Not a simulation of the person—nothing could be that—but a reflection of how she wrote, what she valued, how she approached topics. The grandchild asks a question; the system responds in a way that grandmother might have responded, drawing on her actual words and documented thoughts. This is not resurrection; it is not even representation in any deep sense. But it is a form of connection across time, a conversation with an echo of someone who is gone.

The ethical considerations here are substantial, and we must not gloss over them.

First, consent. Did the person want their communications used this way? The existence of the capability does not create the right to use it. A person may have written and spoken with the understanding that their words were for their contemporaries, not for posthumous interaction; using those words to create an interactive system after their death may violate that understanding. The ethical default should be explicit consent: people who want their digital corpus used for legacy purposes should say so; people who do not say so should not have it used.

Second, representation. An AI system trained on someone’s communications is not that person; it is a pattern-matching system that generates plausible responses based on their previous expressions. This distinction matters. The responses may be consistent with what the person said, but they are not what the person would say—the person is gone and can say nothing more. When someone interacts with such a system, they should understand they are interacting with a reflection, not the person themselves. Confusion about this distinction—treating the system as if it were the person—risks distorting both the memory of the deceased and the grief of the living.

Third, the impact on grief. Does interacting with an AI representation of a deceased loved one help the grieving process or impede it? The research here is sparse and contested. Some suggest that any interaction that delays acceptance of loss may complicate grief; others suggest that comfort from any source is valuable. The honest answer is that we do not know, and what works may vary enormously among individuals. Someone for whom such interaction provides comfort should perhaps have access to it; someone for whom it creates unhealthy attachment should perhaps be gently discouraged. But who decides, and by what criteria?

Fourth, identity and ownership. Who controls the digital remains of a person after death? The family? An estate? The platforms that hold the data? The AI companies that might train on it? These questions echo longer-standing debates about posthumous rights, but the specific capabilities of AI create new variations. If I have trained an AI system on my communications during my lifetime, who owns that trained system after I die? Can it be inherited, sold, deleted? These are not merely legal questions; they are questions about the nature of what we create when we create digital representations of ourselves.

Given these ethical complexities, what can we say positively about legacy as an application of cognitive partnership?

We can say that for those who want it and consent to it, the possibility exists. The person who deliberately builds cognitive partnership during their lifetime—capturing thoughts, documenting reflections, preserving communications—creates material from which legacy could be constructed. Whether to construct it, whether to make it available to others, whether to interact with it after death—these are choices that can be made by the person themselves, by their designees, or by society through legal and ethical frameworks. The capability does not determine the outcome; human choices do.

We can say that legacy, when it occurs, will be a pale shadow of presence. The AI that reflects your patterns is not you. It cannot learn new things, cannot change its mind, cannot respond to the specific person asking with the attention and care that a living relationship provides. Whatever comfort it offers, it is comfort from an echo, not a voice; from a reflection, not a presence. This should not be hidden or denied; it should be clearly understood by anyone who engages with such a system.

We can say that the value of legacy depends on how the life was lived. An archive of thoughtful reflections, meaningful communications, engaged thinking—this has value for those who come after. An archive of distraction, superficial exchanges, thoughts captured but never developed—this has much less value, perhaps none. Legacy is not created at the end of life; it is created throughout, in the quality of attention brought to each moment. Build for living well, and legacy takes care of itself; build only for legacy, and you have nothing worth preserving.

And we can say that legacy, however technically achievable, is not the point. The unburdened mind we have described throughout this book is aimed at living—at being present, at thinking clearly, at showing up for the people who matter to you while you are here to show up for them. If something of that showing up persists after you are gone, that is a gift to those you leave behind. But the gift is secondary to the giving; the legacy is secondary to the living.

The person lying awake at 3 AM, running through their mental inventory, is not worried about what will persist after they die. They are worried about dropping something now, forgetting something now, failing someone now. The cognitive unburdening we have described addresses that worry—not by promising immortality but by enabling presence. If presence, accumulated over a lifetime, becomes a legacy—if the thoughts captured, the commitments kept, the attention offered leave traces that others can find—so much the better. But the traces are byproducts. The living is the point.

Let the spectacular possibility of digital persistence inspire wonder; let it not distract from the essential task of being here while you are here. The unburdened mind is not unburdened so that it can live forever; it is unburdened so that it can live fully in the time it has. Whether anything persists beyond that time is a question that the living mind need not resolve. What matters is what you do with your attention while you have attention to direct. What matters is presence.


End of Part V: The Applications