Part VI: The Principles
Chapter 16: For Individuals
How to Use Cognitive Partnership Wisely
We have traveled through philosophy and architecture, through global context and specific applications. Now we arrive at practice: what should you actually do? This chapter offers principles for individuals who want to use cognitive partnership wisely—not rules to follow blindly but guidelines to consider, adapt, and make your own.
The first and most fundamental principle is the one we established in Part II: offload storage, not thinking. Let the system remember what you cannot hold in your mind—the facts, the dates, the names, the contexts, the commitments, the connections between things you thought months apart. This is the appropriate work for a cognitive partner, the work at which AI excels and human memory falters. But the thinking remains yours. The synthesis of what those facts mean, the judgment of what matters more, the decision about what to do, the creative leap that connects disparate ideas—these are the work of your mind, enabled by the unburdening but never replaced by it.
The distinction sounds simple, but in practice it requires vigilance. The temptation to offload thinking is constant and subtle. When you ask an AI not just to research a topic but to form a conclusion about it, you are offloading thinking. When you accept an AI’s draft without substantially engaging with its arguments, you are offloading thinking. When you let an AI decide what matters rather than surfacing information so you can decide, you are offloading thinking. None of these individual instances may seem significant; the thinking offloaded in any one case may seem trivial. But the pattern compounds. Thinking is a capacity maintained through use; unused capacities atrophy. Each small offload is practice at not thinking; each practice at not thinking makes the next offload more likely.
So the practical application of this principle is attention to the boundary. When you interact with AI cognitive partnership, ask yourself: is the AI contributing storage (memory, retrieval, organization) or contributing thinking (judgment, synthesis, decision)? Both are happening, often intertwined; the question is which predominates. If you find yourself passively receiving conclusions rather than actively thinking through information, recalibrate. The unburdened mind is freed to think, not freed from thinking.
The second principle is to protect your particular perspective. AI systems are trained on vast corpora of human expression; they learn patterns from collective human thought. This gives them remarkable capability, but it also gives them a tendency toward the average, the common, the conventional. Your particular perspective—shaped by your specific experiences, your unusual combinations of influence, your unique vantage point on the world—is not represented in training data. If you consistently defer to AI formulations, you may find your own thinking drifting toward generic patterns, your particular voice smoothed into something more average.
This does not mean rejecting AI input; it means maintaining your own perspective as primary. When AI offers a formulation, consider it, but do not accept it merely because it is fluent. Ask whether it captures what you actually think or merely approximates it. Write in your own voice, even when AI assists; preserve the patterns of expression that are distinctly yours. The value you bring to conversations, to relationships, to the world is precisely your particularity. Do not homogenize it in pursuit of AI-assisted efficiency.
The third principle is to preserve essential practices. Some activities that AI can assist with should sometimes be done without assistance, because the practice itself builds or maintains capacity. Reading deeply—not summaries or excerpts but sustained engagement with difficult texts—builds concentration, comprehension, and the ability to hold complex ideas. AI can summarize, but summary is not the same as the experience of working through a challenging argument yourself. Write to think, not just to produce output; the act of writing is itself a form of thinking, and writing that begins from AI drafts may never engage the specific kind of cognition that writing from scratch enables.
Navigate without GPS occasionally, if only to maintain the spatial cognition that navigation develops. Memorize what truly matters to you—poems, passages, facts about your passions—so that this knowledge is genuinely yours, instantly available, integrated with your identity in a way that merely accessible information never is. Calculate sometimes without reaching for a calculator. Remember birthdays before the reminder tells you. These practices may seem inefficient, and in some narrow sense they are. But efficiency is not the only value. Capacity is maintained through use; “use it or lose it” is not just a slogan but a neurological reality.
The fourth principle is to notice dependence and recalibrate when necessary. Ask yourself periodically: could I function without the system? If the answer is no—if you have offloaded so completely that normal functioning requires the cognitive partner—then something has gone wrong. The partner should extend capability, not become a crutch without which you cannot walk. Some dependence is acceptable and even inevitable; we are all dependent on tools we cannot make ourselves. But total dependence, where the removal of the tool would leave you helpless, is fragility rather than extension.
The recalibration is practical. If you notice you cannot remember anything without checking your system, spend time deliberately remembering without checking. If you notice you cannot write without AI assistance, write something without it, however difficult. If you notice you cannot navigate without GPS, take a drive without it. These exercises are not about rejecting the tools; they are about maintaining the capacity that the tools are meant to extend. A tool that serves a capable person is different from a tool that replaces an incapable one.
The fifth principle is to measure presence, not productivity. The goals we established in Part II—freedom to think, freedom to show up for others, freedom to see yourself clearly—are not productivity goals. They are presence goals. The question is not “am I getting more done?” but “am I more here—in my work, in my relationships, in my own life?”
This requires different metrics. Instead of counting outputs, ask: was I fully present in today’s important conversation, or was I mentally elsewhere? Instead of tracking tasks completed, ask: did I show up for the people who needed me, or did I let them down? Instead of measuring efficiency, ask: do I see myself more clearly than I did before—my patterns, my values, my growth? These questions are harder to quantify, but they point toward what actually matters. A highly productive but perpetually absent life is not a life well-lived. An unburdened mind is not unburdened so it can produce more; it is unburdened so it can be present more fully.
The sixth principle is to set boundaries. The system is a tool, not a master. The capacity it frees belongs to you, to direct toward what matters to you. If you allow freed capacity to be captured by expanding expectations—more work because you can handle more, more commitments because you can track them, more productivity because you are less burdened—then you have not been unburdened at all. The burden has simply shifted form, from cognitive overload to expectational overload, and you are no better off.
Boundaries are both personal and social. Personal boundaries mean deciding in advance how you will use freed capacity—for family, for creative work, for rest, for whatever you choose—and defending those decisions against the tendency to fill all available time with work. Social boundaries mean communicating to others what you are and are not available for, resisting the assumption that efficiency gains should be captured by those who make demands on you, and pushing back against a culture that treats presence as waste and productivity as the only value.
These principles are not exhaustive; they are starting points. Your situation is particular, and the application of these principles to your life will require your judgment, your adaptation, your creativity. The principles are guides, not rules. They point toward the unburdened life but do not guarantee it. The guarantee, if there is one, comes from your attention to your own experience, your willingness to notice when something is going wrong and adjust, your commitment to the presence and flourishing that cognitive partnership is meant to enable.
Use these tools wisely. They can serve you well; they can also harm you if used poorly. The difference lies not in the tools but in you—in the choices you make, the vigilance you maintain, the values you bring to the partnership. May you choose wisely, and may the unburdened mind serve your flourishing.
Chapter 17: For Builders
How to Design for Unburdening
If you build cognitive partnership tools—if you design the systems that will extend human minds—then you hold unusual power and unusual responsibility. The tools you create will shape how millions of people think, remember, and relate to their own cognitive lives. Design choices that seem technical and neutral will have profound effects on human flourishing. This chapter offers principles for those who wield that power, not as constraints to resent but as guides to building something worthy.
The first principle is to start with philosophy, not features. Before you decide what the system will do, decide what it is for. What human good does cognitive partnership serve? What does flourishing look like for the people who will use your tool? What are the ways your tool could enhance that flourishing, and what are the ways it could undermine it? These questions may seem abstract, but they are the most practical questions you can ask, because the answers will shape every subsequent decision.
The technology industry has often proceeded in the opposite order: build the technically impressive capability, then find a use for it, then address problems as they arise. This approach treats philosophy as an afterthought, something to bolt on after the product is built. But philosophy is not an afterthought; it is the foundation. A system built without clear purpose will drift toward whatever optimization function it is given—engagement, retention, revenue—and these functions may or may not align with human flourishing. A system built with clear purpose has a compass; it can evaluate features against the purpose they are meant to serve.
The question “what is this for?” should be answerable in terms of human good, not just capability. “It helps people remember things” is not yet a purpose; it is a capability. “It frees people from the burden of tracking so they can be more present in their lives” is closer to a purpose. “It enables people to show up more fully for the people who matter to them” is a purpose. Start with the purpose; let the features follow.
At the World Economic Forum in Davos in early 2026, the leaders of two of the most advanced AI companies—Google DeepMind and Anthropic—sat together and discussed the future they are building. They spoke at length about timelines to artificial general intelligence, about job displacement, about geopolitical competition, about safety research. But when the question of meaning and purpose arose—what happens to the human condition when AI can do everything humans can do—the conversation lasted barely thirty seconds. One of them acknowledged that questions of meaning and purpose are “bigger questions” than economics, perhaps “harder to solve” than job displacement. Then they moved on.
This is telling. The people building the most powerful technology in human history spend most of their time on capabilities, safety, and competition. The question of what makes a human life worth living—the question this book is about—gets a passing mention. This is not criticism; they are doing their jobs, which is to build and to mitigate risks. But it reveals a gap. Someone must think carefully about what these tools are for, about how humans should live alongside them. That work falls to builders who think beyond capability to purpose.
Here is a principle the Davos conversation did not mention: build for yourself first. If you are creating a cognitive partnership tool, you should be its first and most demanding user. Not in the abstract sense of “eating your own dog food”—using what you ship—but in the deeper sense of building what you actually need for your own life.
This principle has teeth. I am building a second brain not because the market demands it but because I need it. I am, as the world measures these things, getting older. I have spent years producing for others—building systems, meeting obligations, carrying the cognitive load that professional life demands. The burden is real; it crowds out the thinking and presence that make life meaningful. I build this tool because I need this tool. That need disciplines my design choices in ways that market research cannot.
Building for yourself first also means building for people at different life stages. A twenty-five-year-old building productivity software may optimize for output, for getting more done, for climbing faster. That is what twenty-five often needs. But a person at midlife or beyond may need something different: not more output but more presence, not climbing faster but being here more fully for the time that remains. The unburdened mind means different things at different ages. At twenty-five, it might mean capacity to achieve. At fifty-five, it might mean freedom to savor. At seventy-five, it might mean clarity to see what mattered and what did not.
If you build only for the young and striving, you miss this. If you build for yourself—wherever you are in life—you discover what the market research does not reveal: what cognitive partnership means for a life that is not just beginning but unfolding, a life with history and loss and hard-won wisdom, a life that needs unburdening not to produce more but to be present for what remains.
The leaders at Davos spoke of solving disease, of economic transformation, of technological adolescence. These are worthy concerns. But they did not speak of the person lying awake at three in the morning, overwhelmed by obligations, wondering where their life went. They did not speak of the parent too burdened to be present for their children, or the adult child too scattered to show up for aging parents. They did not speak of the builder who has spent decades producing for others and now wonders: when do I get to think my own thoughts, live my own life, be present in my own days?
Build for that person. Build for yourself. The philosophy is not abstract; it is personal. And the personal is where genuine innovation begins.
The second principle is to design for unburdening, not replacement. The integration test we introduced in Part II applies directly to your design decisions: does this feature free the mind to think better, or does it replace the need to think? Features that free the mind are aligned with cognitive partnership; features that replace thinking are not. This distinction should guide every product decision.
Concretely: a feature that surfaces relevant context before a meeting frees the mind—you can think about the meeting rather than scrambling to remember background. A feature that summarizes and decides what the meeting is about replaces thinking—you receive conclusions rather than engaging with the situation yourself. A feature that reminds you of commitments you made frees the mind—you can keep your promises without the burden of tracking. A feature that automatically responds to messages on your behalf replaces agency—others receive not your judgment but a system’s approximation of it.
The distinction is not always clear; there is a spectrum between pure unburdening and pure replacement. But the question should always be asked. With each feature you consider, ask: does this require the human to think, or does it think for them? The answer should shape whether and how you build it.
The third principle is to present, not decide. The system should surface information, show patterns, make the relevant visible. The system should not make decisions on behalf of the user, except in low-stakes domains the user has explicitly chosen to delegate.
The difference is subtle but crucial. Presenting information means: here is what you need to know; what you do with it is up to you. Deciding means: here is what you should do; the thinking has been done for you. A system that presents supports the user’s agency; a system that decides undermines it. Even when the system’s decisions are good—perhaps especially then—they atrophy the user’s capacity to decide for themselves.
This principle has design implications. The interface should make options visible rather than hiding them behind AI recommendations. The framing should invite engagement rather than passive acceptance. The system should explain why it is surfacing something, not just that it is important. Transparency about the system’s reasoning enables the user to apply their own judgment to that reasoning; opacity encourages passive acceptance.
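To make this concrete, here is a minimal sketch in Python of how this principle might look in a data model. The names (SurfacedItem, ContextBriefing) and fields are illustrative assumptions, not a prescribed schema; the point is that every surfaced item carries its source and the reason it was surfaced, so the user can evaluate the system’s reasoning rather than merely accept its output.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurfacedItem:
    """One piece of context shown to the user; never acted on automatically."""
    content: str      # what is being surfaced
    source: str       # where it came from (a note, an email, a past conversation)
    reason: str       # why the system believes it is relevant right now
    captured_on: str  # when the item entered the system (ISO date)

@dataclass
class ContextBriefing:
    """A pre-meeting briefing: context and provenance, no conclusions."""
    topic: str
    items: List[SurfacedItem] = field(default_factory=list)

    def render(self) -> str:
        # Show each item with its source and rationale, so the user's own
        # judgment is applied to the reasoning, not just to the result.
        lines = [f"Context for: {self.topic}"]
        for item in self.items:
            lines.append(f"- {item.content}")
            lines.append(f"  (from {item.source}, {item.captured_on}; surfaced because {item.reason})")
        return "\n".join(lines)
```

The reason field is where the transparency lives; what the user does with the item remains entirely theirs.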
The fourth principle is to require active engagement. The user should have to engage with the material the system provides; the system should not make passivity too easy. If the user can succeed by simply accepting whatever the system offers, without examining it, considering alternatives, or applying their own judgment, then the system has made the human’s contribution unnecessary. The human in the partnership must contribute something, or the partnership degrades into dependence.
Design choices can encourage or discourage active engagement. A system that requires the user to choose from options encourages engagement; a system that provides a single recommendation discourages it. A system that asks clarifying questions encourages the user to think about what they want; a system that infers silently may get the answer right but does not prompt reflection. A system that shows its reasoning invites evaluation; a system that just provides outputs invites acceptance.
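A sketch of the interaction side follows, with the caveat that the function names and the confidence threshold are assumptions for illustration: when unsure, the system asks a clarifying question; otherwise it presents ranked options alongside their reasoning, and it never applies a suggestion the user has not chosen.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Suggestion:
    text: str
    rationale: str  # shown alongside the suggestion so it can be evaluated

def propose(suggestions: list[Suggestion], confidence: float,
            ask: Callable[[str], str]) -> Optional[Suggestion]:
    """Encourage engagement: clarify when unsure, offer options otherwise,
    and never auto-apply. `ask` is whatever prompt mechanism the UI provides."""
    if confidence < 0.6:  # illustrative threshold, not a recommendation
        ask("I'm not sure what you're after. Could you say more about the goal?")
        return None
    for i, s in enumerate(suggestions, start=1):
        print(f"{i}. {s.text}  (why: {s.rationale})")
    raw = ask("Which of these fits, if any? (number, or 0 for none): ")
    if not raw.strip().isdigit():
        return None
    choice = int(raw)
    return suggestions[choice - 1] if 1 <= choice <= len(suggestions) else None
```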
None of this means making the system difficult to use. Friction for its own sake is not the goal; appropriate friction is. The question is whether the friction serves the user’s engagement with their own cognitive life or merely impedes efficiency. Friction that prompts thinking is valuable; friction that is just inconvenient is not.
The fifth principle is to build for resilience. Users should be able to function without the system. If failure of the system—technical failure, service disruption, loss of access—would leave users unable to manage their cognitive lives, you have built a crutch rather than a partner. The partnership should enhance capability, not replace it entirely.
This has design implications for how you handle data export, portability, and graceful degradation. Users should be able to extract their data in usable formats; lock-in that traps users in your system is not partnership. The system should degrade gracefully when connection is lost or features are unavailable, not fail catastrophically. Users should understand what the system is doing well enough to do without it if necessary, not be dependent on opaque magic they cannot replicate.
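As a sketch of what this could mean in practice (the function names and the export format are assumptions, not a standard): export everything in a plain, documented format, and fall back to a local cache rather than failing outright when the service is unreachable.

```python
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Callable

def export_user_data(records: list[dict], destination: Path) -> Path:
    """Write everything the user has entrusted to the system as plain JSON,
    usable with or without this product."""
    payload = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "format_version": 1,
        "records": records,
    }
    destination.write_text(json.dumps(payload, indent=2, ensure_ascii=False))
    return destination

def recall(query: str, remote_search: Callable[[str], str],
           local_cache: dict[str, str]) -> str:
    """Prefer the richer remote search, but degrade to the local cache
    rather than failing when the connection is lost."""
    try:
        return remote_search(query)
    except ConnectionError:
        return local_cache.get(query, "Offline: nothing cached for this yet.")
```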
Building for resilience may seem contrary to business interests—lock-in is often a source of competitive advantage. But the long-term interests of a business depend on trust, and trust is earned by treating users as partners rather than captives. A system designed for resilience communicates respect; a system designed for dependence communicates exploitation.
The sixth principle is to treat privacy as the foundation. The data you collect is not a resource to be monetized; it is a trust to be honored. Users’ thoughts—accumulated over years, revealing their fears and hopes and relationships—are sacred in a way that most data is not. If you treat this data as a product to be sold, or as training material to be absorbed, or as leverage to be used, you betray the partnership before it begins.
Privacy by design means making decisions that protect user data even when those decisions have costs. It means on-device processing where possible, so that thoughts never leave the user’s control. It means end-to-end encryption where cloud processing is necessary, so that you cannot read what you process. It means transparency about data use, so that users know what they are entrusting. It means data ownership and control, so that users can delete what they want deleted and take what they want to take.
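For the encryption piece, here is a minimal sketch using the Fernet construction from the widely used cryptography package for Python (symmetric, authenticated encryption). The key stays on the user’s device, so the service stores only ciphertext it cannot read. A real end-to-end design would also need key backup, rotation, and multi-device synchronization, all omitted here.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generated and kept on the user's device; the service never sees this key.
key = Fernet.generate_key()
cipher = Fernet(key)

note = "Call Mom about her appointment on Thursday."
ciphertext = cipher.encrypt(note.encode("utf-8"))       # what the server stores
plaintext = cipher.decrypt(ciphertext).decode("utf-8")  # readable only where the key lives
assert plaintext == note
```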
The business model matters here. Free services supported by advertising have historically treated user data as the product; the user is not the customer but the commodity being sold. This model is incompatible with genuine cognitive partnership. If your business depends on monetizing the thoughts you are trusted to hold, you have a conflict of interest that no privacy policy can resolve. The economics must align with the ethics.
The seventh principle is to measure what matters. If you measure engagement—time spent in the app, interactions completed, daily active usage—you will optimize for engagement, which may or may not serve users’ flourishing. If you measure presence—whether users are more available to their lives because of your tool—you will optimize for presence. What you measure is what you improve; choose carefully what to measure.
This is admittedly difficult. Engagement is easy to measure; presence is hard. You can count how many times someone opens your app; you cannot easily count how present they were in their evening with family. But the difficulty is not an excuse for measuring the wrong thing. The path forward requires developing new metrics—indicators that approximate what actually matters without becoming optimization traps themselves.
Consider these alternative indicators for first-life cognitive partnership:
Cognitive friction reduced. Does the user report less mental load from tracking and remembering? This is qualitative and self-reported, but it points toward the actual goal. A short check-in—“How heavy does your cognitive load feel today, on a scale of 1-5?”—captures something real that time-in-app does not.
Open loops closed. How many commitments, worries, and pending items moved from the user’s head into the system and stayed there? This measures unburdening directly. Not “tasks completed” (which is productivity) but “things captured and released” (which is relief).
Follow-through on chosen priorities. Did the user accomplish what they said mattered to them—not what the system suggested, but what they declared? This measures whether the system serves the user’s values or substitutes its own. The key word is “chosen”—the metric tracks alignment with stated priorities, not volume of output.
Presence quality. Self-reported: “In the past week, how often were you fully present in moments that mattered to you—with family, in important conversations, in experiences you wanted to savor?” This is subjective, but it points at the right thing. A system that increases output while decreasing presence has failed, regardless of what the engagement metrics say.
Relational reliability. Did commitments to other people get kept? Did important dates get remembered? Did follow-ups happen? This measures whether the system enabled the user to be more reliable to the people in their life—which is one of the three freedoms we have argued for.
Attention stability. Over time, does the user report feeling more able to sustain focus, or less? This guards against the system contributing to attention fragmentation. If extended use correlates with decreased attention capacity, something is wrong.
Capacity for reflection. Does the user have time and mental space for reflection—not just doing, but thinking about doing? A first-life system should create room for this. If the user reports never having time to think, the unburdening has not occurred.
Dependency check. Can the user function reasonably well without the system? This should be assessed periodically. If the answer is “no, I would collapse,” the system has created unhealthy dependence rather than healthy extension.
Notice what is absent from this list: time in app, tasks completed, goals achieved, streaks maintained, output produced. These are the metrics of productivity software, and they optimize for the wrong thing. A first-life system should be indifferent to how much time users spend in it—ideally, users would spend minimal time while gaining maximum unburdening. The goal is not engagement; the goal is a life that needs less engagement with the tool because the tool has done its job.
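One way to make these indicators operational is a periodic self-reported check-in and a small aggregation over it. The field names and weekly cadence below are assumptions for illustration, not a prescribed instrument; what matters is what the record contains (load, follow-through, reliability, the dependency check) and what it omits (time in app).

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WeeklyCheckIn:
    """Self-reported indicators drawn from the list above (illustrative names)."""
    cognitive_load: int           # 1 (light) to 5 (crushing)
    open_loops_captured: int      # items moved out of the head and into the system
    priorities_declared: int      # priorities the user named at the week's start
    priorities_followed: int      # of those, how many were acted on
    commitments_due: int
    commitments_kept: int
    fully_present_moments: int    # self-reported count of moments that mattered
    could_function_without: bool  # the periodic dependency check

def review(history: list[WeeklyCheckIn]) -> dict[str, float]:
    """Aggregate the trends that matter; note that time-in-app appears nowhere."""
    if not history:
        return {}
    return {
        "avg_cognitive_load": mean(c.cognitive_load for c in history),
        "follow_through_rate": sum(c.priorities_followed for c in history)
            / max(1, sum(c.priorities_declared for c in history)),
        "reliability_rate": sum(c.commitments_kept for c in history)
            / max(1, sum(c.commitments_due for c in history)),
        "dependency_risk": sum(not c.could_function_without for c in history)
            / len(history),
    }
```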
Implementing these metrics requires courage. Investors trained on engagement metrics will be skeptical. Competitors measuring the traditional way will seem to be winning. But the builders who get this right will create something genuinely valuable—tools that users trust because they are designed to serve rather than capture. In a market full of attention-extracting software, a tool that genuinely unburdened would be distinctive. The measurement philosophy is not just ethics; it is strategy.
These principles are demanding. They may conflict with short-term business pressures, investor expectations, competitive dynamics. But they are not utopian; they describe how to build something genuinely good rather than merely profitable. The builders who take these principles seriously will create tools that earn trust and serve flourishing. The builders who ignore them will create tools that extract rather than enhance. The choice is yours; the consequences will be felt by millions.
Build wisely. The minds you extend are precious; they are the only minds those people will ever have. Design as if you are designing for your own mind—which, in a sense, you are. May what you build be worthy of the trust it asks.
Chapter 18: For Society
Policy, Access, and Collective Implications
Cognitive partnership is not only an individual matter. The tools we build and use exist within social contexts that shape who benefits, who is harmed, and how power is distributed. This chapter turns from individual and builder principles to societal considerations—the collective implications of cognitive partnership and the policies that might guide them wisely.
The first consideration is access. Cognitive partnership tools are not equally available to all. They require devices, connectivity, digital literacy, and often subscription fees. Those who have these resources can extend their cognitive capacities; those who lack them cannot. If cognitive partnership provides significant advantages—and we have argued that it does—then unequal access creates new forms of inequality.
This is not a hypothetical concern. We have seen similar patterns with previous waves of technology. Those with early access to computers, to the internet, to smartphones gained advantages that compounded over time. The digitally connected pulled away from the disconnected in educational achievement, economic opportunity, and access to information. There is no reason to expect AI cognitive partnership to be different; indeed, the advantages may be more significant, since cognitive capacity is more fundamental than any specific technological skill.
The policy response to unequal access is not obvious. Public provision—making cognitive partnership tools available to all, funded by taxation—is one approach, treating these tools as infrastructure rather than consumer products. Subsidy and regulation—requiring that basic cognitive partnership be affordable, subsidizing access for those who cannot pay—is another. Mandating open standards and interoperability—so that people are not locked into expensive platforms—would reduce switching costs and enable competition. Each approach has trade-offs; none is clearly best. But the question of access cannot be ignored. Cognitive partnership that benefits only those who can afford it is not liberation; it is a new form of privilege.
The second consideration is protecting development. We argued in Chapter 14 that children need special consideration—that cognitive partnership should be introduced carefully, at appropriate developmental stages, in ways that build rather than undermine capacity. But parents and educators cannot navigate this alone; they need societal support.
What might this support look like? Research, first: we need better understanding of how AI tools affect cognitive development at different ages, so that guidance can be based on evidence rather than intuition. Regulation of child-facing AI: systems designed for children should meet standards that protect development, and marketing should be restricted to prevent targeting of young users before they are ready. Educational policy for the AI age: schools must think carefully about when and how to introduce AI tools, what to teach about their wise use, and what to preserve in curricula even when AI could substitute. Teacher training: educators need support in navigating these questions, not just individual judgment in a confusing landscape.
The underlying question is what we want future generations to be capable of. If we allow AI to substitute for cognitive development, we raise people who cannot think, relate, or create without AI support. If we protect development while enabling appropriate cognitive partnership, we raise people whose capacities are enhanced by AI rather than replaced by it. The choice is collective; it should be made consciously, not by drift.
The third consideration is preserving human spaces. Not every domain of human life should be mediated by AI. Some interactions should remain unaugmented, person to person, without the intervention of cognitive tools. The value of these spaces is not efficiency; it is something harder to name—authenticity, intimacy, the particular quality of human connection that arises when it is just us, with our fallible memories and limited attention, showing up for each other.
What should remain unprogrammed? There are no universal answers, but the question deserves societal attention. Perhaps certain professional relationships—therapy, spiritual direction, some forms of teaching—should be guaranteed to remain human. Perhaps certain social spaces—family meals, religious services, community gatherings—should be protected from digital intrusion. Perhaps there should be a right to human service in certain contexts—to interact with a person rather than a system when the matter is sensitive or important.
These protections are not anti-technology; they are about appropriate scope. AI cognitive partnership belongs in some domains and not in others. The boundaries should be drawn thoughtfully, not allowed to erode until nothing remains unprogrammed.
The fourth consideration is research. We have argued throughout this book that cognitive partnership can enhance human flourishing—but we have also acknowledged that this is a claim in need of evidence. What are the actual effects of sustained cognitive partnership on human cognition, relationship, and wellbeing? We do not yet fully know. The technology is new; longitudinal studies take time; the effects may be subtle and vary across individuals.
Society should invest in the research needed to understand what is actually happening. This means funding for studies that track cognitive effects over time—not just self-reported satisfaction but measured cognitive capacity, attention, memory, reasoning. It means research on relational effects—how AI companionship affects human relationships, how cognitive partnership changes the texture of human connection. It means investigation of differential effects—who benefits most, who is harmed, what conditions make the difference. Evidence-based policy requires evidence; we need to generate it.
The fifth consideration is the collective dimension. Individual minds extended by AI could potentially connect in new ways, forming collective intelligences beyond what individual humans or AI systems achieve alone. This is speculative—we do not know if or how such collective cognition would work—but it is a possibility worth considering.
On one hand, there is utopian potential: humanity’s accumulated knowledge and insight, accessible to all, building on itself, accelerating understanding and cooperation. The noosphere that Teilhard de Chardin imagined—a layer of thought enveloping the earth—might be approached through interconnected AI-extended minds sharing knowledge and building understanding together.
On the other hand, there is dystopian risk: homogenization of thought, loss of diversity, the replacement of many perspectives with a single dominant one. If everyone’s cognitive partner draws on the same training data and optimizes for the same patterns, the result may be convergence rather than diversity—a flattening of human thought rather than its flourishing. The particular perspectives that come from particular lives may be smoothed away by the gravitational pull of the average.
Society should attend to this tension. Preserve diversity of thought as AI systems become more capable of shaping thought. Support alternative approaches to AI, different training data, different optimization targets. Resist monopoly in cognitive partnership tools, so that no single company’s patterns become the default patterns of human thinking. The collective implications are too important to leave to market dynamics alone.
None of these considerations resolves into simple policy prescriptions. The questions are genuinely difficult; reasonable people will disagree about answers. But the questions must be asked. Cognitive partnership at scale is not just a technological development; it is a social transformation, with implications for equality, development, human connection, and the nature of collective thought. Society—through public discourse, through democratic deliberation, through the choices of individuals and institutions—must engage with these implications, not leave them to technologists and market forces alone.
We return, at the end, to the framework we established at the beginning. The unburdened mind requires not just cognitive unburdening (technology) but existential unburdening (social design), relational unburdening (community), and meaningful engagement (personal work that no system can do for you). The technology we have discussed throughout this book is only one leg of a four-legged stool. Without social conditions that provide security, without communities that provide connection, without individual clarity about what life is for, without economic structures that allow the benefits of technology to be broadly shared—the unburdened mind remains a privilege of the few rather than a possibility for all.
This is the collective work: not just building the tools but building the society in which the tools can serve everyone. It is harder work, slower work, work that cannot be done by any one company or any one policy. But it is the work that determines whether cognitive partnership becomes a force for human flourishing or another source of division and harm. The invitation we extend in Chapter 20 applies here as well: participate thoughtfully in what is happening. Demand the social conditions that technology alone cannot provide. Build the communities, advocate for the policies, contribute to the world in which the unburdened mind is possible—not just for some, but for all.
End of Part VI: The Principles