Recommendations: Navigating the AI Era
These recommendations synthesize findings from 54 research cells across 18 dimensions and 3 time horizons. They are organized by audience and urgency. Each recommendation is grounded in the research evidence and prioritized by impact -- what matters most comes first.
The single most important finding across this entire project: the window for shaping the AI transition is 2026-2035. After that, ownership structures consolidate, institutional paths harden, and the range of achievable futures narrows dramatically. Acting now, even imperfectly, is far more valuable than acting optimally a decade from now.
For Individuals
What to Do NOW (2026-2028)
1. Diversify your identity beyond your job title. This is the highest-priority individual action identified in this research. The identity crisis dimension shows that people whose sense of self depends entirely on professional identity are the most psychologically vulnerable to AI displacement. Begin now: invest seriously in relationships, creative practice, community roles, physical skills, or spiritual life. Treat identity diversification with the urgency you would give to financial diversification before a market crash. The research on deaths of despair (Case and Deaton) demonstrates that identity collapse kills -- literally -- and the populations most at risk are those with the narrowest identity foundations.
2. Build financial resilience for career discontinuity. The job destruction research projects that 40-60% of current job categories in advanced economies will either disappear or contract by more than 50% by 2046. This is not a distant threat -- the early waves are already arriving. Reduce fixed costs. Build savings. Develop multiple income streams. Consider ownership models (equity compensation, cooperative membership, investment in AI-exposed assets) rather than pure salary dependency. The economic models research shows that ownership of AI capital will increasingly determine economic outcomes.
3. Develop "durably human" capabilities. The emerging roles research identifies the skills that remain valuable across all scenarios: complex interpersonal skills, ethical judgment, creative vision grounded in lived experience, physical-world expertise, and the ability to provide authentic human connection. These are not skills you acquire in a weekend course -- they require sustained, deep practice. Start now. Specifically: learn to listen deeply, to facilitate groups, to work with your hands, to sit with emotional complexity, to make ethical judgments under uncertainty.
4. Engage politically with AI governance. The ethics and regulation research makes clear that the policy decisions of the next 5-10 years will shape the distribution of AI's benefits and costs for decades. Support politicians and organizations advocating for sovereign AI wealth funds, meaningful UBI, community infrastructure investment, and international AI governance. The geopolitics research shows that purely national responses create regulatory arbitrage and race-to-the-bottom dynamics -- push for international cooperation.
What to Prepare For (2028-2033)
5. Expect and plan for career disruption. Even if your current role seems safe, the research shows that AI capability expansion is accelerating, not plateauing. The medium-term horizon (2028-2033) is when displacement moves from administrative and routine cognitive work into professional services, creative industries, and skilled trades. Have a transition plan. Know what you would do if your current role were eliminated in 18 months. The people who navigate displacement best are those who have already begun building alternative foundations -- social networks, skills, savings, and identity frameworks -- before the crisis hits.
6. Invest in community. The containment activities and emerging needs research converge on a single finding: social connection is the strongest predictor of wellbeing in the absence of work-based identity. The U.S. Surgeon General's advisory finds that the mortality impact of lacking social connection is comparable to smoking up to 15 cigarettes per day. Build and maintain dense social networks with the seriousness you previously reserved for career development. Join community institutions. Show up consistently. The research on post-work societies shows that communities with strong social capital weather the transition far better than those without, regardless of economic conditions.
7. Cultivate embodied skills and physical practices. The emerging needs research identifies a striking long-term trend: the reassertion of biological, embodied human needs as central to wellbeing. Physical movement, manual creation, face-to-face interaction, and contact with nature are not lifestyle luxuries -- they are the biological foundation of psychological health. Develop at least one practice that keeps you connected to your body and the physical world: a craft, a sport, gardening, cooking, building. These become more valuable, not less, as cognitive work is automated.
Long-term Positioning (2033+)
8. Develop a personal philosophy of human significance. The identity crisis research shows that by the 2040s, the work-centric identity model will be recognized as historically specific rather than natural. Those who thrive will have constructed frameworks for meaning and worth that do not depend on being better than a machine at anything. The most durable answers, according to the research, involve love, creative expression, service, wonder, and the embrace of human finitude. This is not abstract philosophy -- it is practical psychological preparation for a world that is already arriving.
9. Position yourself in the human experience economy. The emerging needs research projects that the largest economic sector by 2040 will be the provision of genuine human connection, embodied skill development, caregiving, mentorship, and meaning-making. If you are making long-term career decisions, orient toward roles where your human presence, judgment, and relationships are the core value proposition -- not roles where you operate AI systems that could eventually operate themselves.
For Businesses
Immediate Actions (2026-2028)
1. Conduct an honest AI exposure audit. Map every business function against AI capability trajectories. The McKinsey and Goldman Sachs research provides frameworks for task-level automation assessment. Do not assume that "knowledge work" or "creative work" is safe -- the research shows these categories are among the most exposed. Identify which functions will be fully automated within 5 years, which will be augmented, and which will remain human-essential. Build strategy around the honest answers, not the comfortable ones.
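The audit logic above can be sketched in a few lines of code. This is an illustrative toy, not a method from the McKinsey or Goldman Sachs frameworks: the function names, tasks, automation probabilities, and threshold values are all hypothetical assumptions chosen for demonstration.

```python
# Minimal sketch of a task-level AI exposure audit. All functions, tasks,
# probabilities, and thresholds below are hypothetical assumptions, not
# figures from the research cited in the text.
from statistics import mean

# Each business function maps to tasks with an estimated probability of
# automation over a 5-year horizon (0.0 = human-essential, 1.0 = automatable).
functions = {
    "invoice_processing": {"data entry": 0.95, "exception handling": 0.6},
    "client_advisory":    {"report drafting": 0.8, "relationship management": 0.2},
    "field_maintenance":  {"diagnostics": 0.5, "physical repair": 0.15},
}

def classify(task_scores):
    """Bucket a function by the mean automation probability of its tasks."""
    avg = mean(task_scores.values())
    if avg >= 0.7:
        return "likely fully automated"
    if avg >= 0.4:
        return "augmented"
    return "human-essential"

for name, tasks in functions.items():
    print(f"{name}: {classify(tasks)} (mean exposure {mean(tasks.values()):.2f})")
```

A real audit would replace the invented probabilities with task-level estimates drawn from the published frameworks, and would weight tasks by the share of labor time they consume rather than averaging them equally.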
2. Invest in your human workforce's durably human capabilities. The emerging roles research identifies the foundational skills that remain valuable across all AI scenarios: ethical reasoning, interpersonal depth, creative judgment, domain expertise, and adaptability. These are not traditional training objectives -- they require a different approach to workforce development. Fund sabbaticals for deep skill development. Create mentorship programs. Build organizational cultures that value judgment and wisdom alongside speed and output. The companies that will thrive are those whose human employees contribute something AI cannot replicate.
3. Adopt stakeholder capitalism proactively. The economic models research shows that companies distributing AI gains broadly -- to employees, communities, and displaced workers -- face less regulatory and social backlash than those concentrating gains among shareholders. This is not altruism; it is strategic positioning. The political environment for AI-intensive businesses will become hostile if the public perceives that AI enriches shareholders while impoverishing workers. Get ahead of this by sharing productivity gains visibly and structurally.
Strategic Pivots (2028-2033)
4. Design for the human experience economy. The emerging needs research projects massive demand for genuine human connection, embodied skill development, authenticity verification, and meaning-making services. Businesses that facilitate these -- in caregiving, education, hospitality, community building, creative collaboration -- are positioned for durable relevance. "Human presence" becomes simultaneously the ultimate luxury good and an essential service. Consider how your business model could pivot from selling AI-generated efficiency to selling irreplaceably human experience.
5. Build AI resilience through diversification. The digital divide research warns against dependency on single AI providers. Develop multi-provider strategies. Build internal AI expertise. Invest in open-source capabilities. The companies that survive disruptions in the AI supply chain -- provider policy changes, geopolitical restrictions, regulatory shifts -- are those with the flexibility to adapt. The geopolitics research shows that AI blocs are consolidating; businesses operating across these blocs need diversified technology stacks.
6. Prepare workforce scenarios for radical transformation. The job destruction research shows that the shift from "AI as tool" to "AI as colleague" to "AI as replacement" completes during the 2028-2033 window for many functions. Develop scenarios where entire business functions are fully automated. Plan for the workforce, community, and political implications. The research on economic models suggests exploring transition support for displaced workers -- not just severance, but retraining, community placement, and equity participation in the AI systems that replaced them.
Long-term Business Model Adaptation (2033+)
7. Rethink value creation for a post-scarcity production economy. The economic models research projects that when AI and robotics drive the marginal cost of production toward zero in key sectors, business models predicated on scarcity pricing break down. The surviving business models will be those built on what remains scarce: human attention, authentic human connection, trust, and verified human creativity. The artisanal premium -- goods and services valued precisely because of their human origin -- becomes a major market force.
8. Serve the 2-3 billion entering the AI economy. The digital divide research identifies the largest untapped market opportunity in human history: the billions of people entering AI-mediated economic life between 2030 and 2046, primarily in Africa, South Asia, and Southeast Asia. Building products and services for this population requires deep local knowledge, not just technological capability. Businesses that develop this capacity early gain durable competitive advantage.
For Policymakers
Emergency Measures (2026-2028)
1. Establish sovereign AI wealth funds immediately. This is the single highest-leverage policy action identified in the entire research project. The economic models research shows that capturing 5-10% of AI-generated economic value through public equity stakes in AI companies and infrastructure can fund substantial citizen dividends by the 2030s. The window is now -- while AI companies are still growing and equity is accessible. Norway established its oil fund in 1990, with decades of petroleum revenue still ahead of it; waiting until AI dominance is fully established would make public equity acquisition prohibitively expensive. Every year of delay costs billions in foregone returns. The funds generated will finance UBI, community infrastructure, and transition support.
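The mechanism above can be made concrete with a back-of-envelope calculation. Every figure here is an illustrative assumption (AI value added, return rate, population), not a projection from the economic models research; the point is only to show how capture rate, accumulation, and returns interact.

```python
# Back-of-envelope sketch of the sovereign AI wealth fund mechanism.
# All inputs are hypothetical assumptions for illustration only.
ai_value_added = 2_000e9   # assumed annual AI-generated economic value: $2T
public_stake   = 0.075     # midpoint of the 5-10% capture rate cited above
annual_return  = 0.05      # assumed real return on fund assets
population     = 50e6      # assumed number of citizens sharing the dividend

fund_inflow = ai_value_added * public_stake   # new public capital per year

# Accumulate inflows for a decade, compounding returns on the growing fund.
fund = 0.0
for _ in range(10):
    fund = fund * (1 + annual_return) + fund_inflow

# Pay out only the returns, preserving the principal (the Norway model).
dividend_per_citizen = fund * annual_return / population

print(f"Fund after 10 years: ${fund/1e12:.2f}T")
print(f"Sustainable annual dividend per citizen: ${dividend_per_citizen:,.0f}")
```

Under these invented inputs the fund reaches roughly $1.9T after a decade and sustains a dividend near $1,900 per citizen per year; the takeaway is that the dividend scales with the capture rate and with how early accumulation begins, which is why the text calls the timing the decisive variable.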
2. Begin designing and piloting UBI. The research across job destruction, identity crisis, containment activities, and emerging needs converges: income security is the prerequisite for every other form of adaptation. Without material security, displaced populations cannot invest in identity reconstruction, community engagement, skill development, or creative practice. Pilot programs should test not just income levels but delivery mechanisms, cultural framing, and integration with purpose infrastructure. The Finland basic income experiment and subsequent studies provide methodological foundations.
3. Invest in community infrastructure at emergency scale. The containment activities research shows that "purpose infrastructure" -- libraries, community centers, maker spaces, civic service corps, learning communities, parks, and recreation facilities -- is as essential as physical infrastructure. Fund it accordingly. The research demonstrates that the gap between communities with rich activity infrastructure and those without produces health, social cohesion, and mortality disparities comparable to the gap between the highest and lowest income quintiles. This is not discretionary spending; it is preventive social medicine.
4. Launch international AI governance negotiations. The geopolitics and security research makes clear that the window for preemptive governance is narrowing. Begin building the institutional architecture for international AI governance now -- safety standards for frontier models, prohibited military applications, compute monitoring, incident reporting mechanisms. The Bletchley/Seoul/Paris summit process must evolve into standing institutions with technical capacity. Include meaningful representation from the Global South -- governance discussions dominated by the US, EU, and China will produce policies that perpetuate the digital divide.
Structural Reforms (2028-2033)
5. Transform education from job preparation to human development. The research across identity crisis, emerging roles, emerging needs, and containment activities converges: the purpose of education must shift from workforce preparation to the cultivation of judgment, creativity, ethical reasoning, relational skills, and meaning-making capacity. This is not a curriculum update -- it is a wholesale institutional transformation. Fund lifelong learning infrastructure. Redesign K-12 for identity resilience and embodied competence. Expand humanities, arts, philosophy, and sciences as intrinsically valuable, not vocationally instrumental. The countries that make this transition produce populations equipped for the post-work world; those that cling to skills-based job-preparation models produce populations mismatched with reality.
6. Enshrine economic rights in law. The economic models research suggests that if income decouples from employment, legal guarantees of material security (right to housing, healthcare, nutrition, education) may need constitutional status to withstand political fluctuations. Begin the legal and constitutional groundwork now. The debate will take years; starting later means finishing later.
7. Develop comprehensive AI labor market governance: automation taxes, mandatory transition support for displaced workers, shortened work-week standards, and portable benefits systems that follow workers across employers and employment models. The job destruction research shows that the pace of change will outstrip the pace of job creation for extended periods -- the social contract must evolve to acknowledge that traditional full-time employment will not absorb the majority of the working-age population indefinitely.
8. Address the digital divide as a matter of rights. The digital divide research recommends establishing AI access as a human right -- the right to AI capabilities sufficient for meaningful participation in economy and society. This is the 21st-century equivalent of the right to education. Mandate interoperability across AI platforms. Invest in sovereign AI capacity at national and regional levels. Build public AI infrastructure -- government-funded research, open-source models, public data trusts -- to prevent total private capture of AI capability.
Civilizational-level Investments (2033+)
9. Build the institutional architecture for post-work society. The research across all dimensions converges: by the 2040s, "employment" as the primary mechanism for distributing purchasing power will have been partially or fully superseded in advanced economies. The question is whether the replacement system provides dignity and agency or merely subsistence and dependence. Design income systems (UBI, universal basic services, care credits), purpose systems (civic service, creative grants, community facilitation), and governance systems (citizen assemblies, participatory budgeting, algorithmic accountability) that work together as an integrated social architecture.
10. Invest in international AI safety governance with the seriousness of nuclear arms control. The security research warns that AI-enabled risks -- autonomous weapons, AI-powered bioweapons, AI-driven escalation between nuclear powers -- are existential. The AGI governance crisis projected for 2036-2042 requires institutional capacity that takes decades to build. Begin now. Create international AI safety research institutions with genuine multilateral participation. Develop verification and enforcement mechanisms for AI capabilities analogous to nuclear inspections. Fund this at levels commensurate with the risk.
11. Protect democratic governance from AI erosion. Establish constitutional guardrails: rights to human review of consequential automated decisions, rights to cognitive liberty (protection against AI manipulation), rights to meaningful human contact in essential services, and mandatory algorithmic impact assessments for government AI deployments. The ethics and regulation research shows that AI can either strengthen democracy (through better-informed citizens and more responsive governance) or undermine it (through surveillance, manipulation, and concentration of power). The outcome depends on deliberate institutional design, not technological inevitability.
For Educators
Curriculum Changes Needed Now (2026-2028)
1. Teach identity resilience and meaning-making as core competencies. The identity crisis research demonstrates that the populations most vulnerable to AI displacement are those with the narrowest identity foundations. Education systems must explicitly develop the capacity for identity flexibility -- the ability to construct and reconstruct a sense of self and purpose across changing circumstances. This means integrating philosophy, psychology, ethical reasoning, and self-reflection into core curriculum, not as electives. Teach students to answer "who am I beyond what I produce?" before the economy forces the question.
2. Prioritize relational and embodied skills alongside cognitive ones. The emerging needs research identifies deep social skills and embodied competence as the most valuable human capacities in the AI era. Teach active listening, conflict resolution, group facilitation, emotional intelligence, and vulnerable self-expression with the same rigor as mathematics and science. Maintain and expand physical education, arts, crafts, and hands-on learning -- these are not frills but foundations. The research on the AI-native generation shows that digital fluency without embodied competence produces psychologically fragile adults.
3. Develop AI literacy as civic competence. The digital divide research frames AI literacy not as a technical skill but as a requirement for democratic participation. Every student should understand how AI systems work, what they can and cannot do, how they affect individual lives, and how they are governed. This is not coding instruction -- it is the AI equivalent of media literacy and civic education. Citizens who cannot evaluate AI systems cannot participate meaningfully in the democratic governance of those systems.
Institutional Transformation (2028-2033)
4. Restructure for lifelong learning. The containment activities research projects that by the 2030s, lifelong learning will have become the default activity for a significant portion of the non-employed population. Educational institutions must transform from credential factories serving 18-22-year-olds into lifelong learning platforms serving learners from adolescence through old age. This requires new business models, new pedagogies, new scheduling, and new relationships with communities. Universities that adapt become anchors of post-work social infrastructure; those that do not become irrelevant.
5. Integrate disciplines around human capabilities. The emerging roles research shows that the long-term roles share common foundations: deep domain expertise, ethical reasoning, interpersonal depth, creative judgment, and adaptability. These do not emerge from disciplinary silos -- they require interdisciplinary education that integrates sciences, humanities, arts, and practical skills. Create programs that combine, for example, bioethics with hands-on biological research, philosophy with community facilitation, engineering with artistic design. The most valuable educational institutions of the future will be those that produce graduates with judgment and wisdom, not just knowledge and skills.
6. Build assessment systems for what matters. Current assessment systems measure cognitive performance on structured tasks -- precisely what AI does best. Develop assessments for ethical reasoning, relational depth, creative originality, physical mastery, and the capacity to navigate ambiguity. This is technically challenging but essential: what gets measured gets valued, and education systems that keep measuring only the tasks where AI outperforms humans will produce graduates optimized for obsolescence.
The Future of Learning (2033+)
7. Redefine education as human development. The research across all dimensions converges on a long-term vision: education is not workforce preparation but the cultivation of full human capability -- the capacity for meaning, connection, creation, judgment, and wise action. This is not new (Aristotle, Dewey, Montessori, and many others articulated versions of it), but the AI era makes it operationally necessary rather than aspirationally nice. Educational institutions that embrace this mission become the most important institutions in post-work society. Those that resist become vocational training centers for a shrinking market, producing graduates who are neither employable by traditional measures nor equipped to flourish in the AI era.
8. Prepare the AI-native generation for its unique challenges. The emerging needs research identifies that the generation reaching adulthood in the 2040s faces questions no previous generation confronted: What is a genuine human relationship in a world of convincing AI companions? What does it feel like to accomplish something without AI assistance? What is the value of unaided human thought? Education must help this generation develop the capacity for deep attention, tolerance of frustration, comfort with imperfection, and the skills for genuine human connection that their AI-saturated childhoods may not have cultivated naturally.
What Matters Most: A Decision-Maker's Summary
If you read nothing else, read this. The research across 54 cells, 18 dimensions, and 3 time horizons produces five findings that should drive every decision:
- The ownership question is everything. Who owns the AI systems that generate the majority of economic value determines whether the AI era produces shared prosperity or techno-feudalism. Sovereign AI wealth funds, open-source AI infrastructure, and democratic governance of AI capital must be established before ownership consolidates beyond democratic reach. This window is approximately 2026-2034.
- Income security is the prerequisite. Without material security decoupled from employment, no other adaptation is possible at scale. People in economic desperation cannot invest in identity reconstruction, community building, or creative expression. UBI or equivalent mechanisms are not optional -- they are the foundation on which everything else depends.
- Purpose infrastructure is as essential as physical infrastructure. The difference between a population that adapts to the post-work world and one that descends into despair is not income alone but the availability of meaning, structure, social connection, and community. Invest in purpose infrastructure -- community institutions, civic service, creative opportunity, learning communities -- at the scale of roads and power grids.
- Human connection is the irreducible core. Across all dimensions and time horizons, the research converges: the deepest human need that AI cannot satisfy is the experience of being known, valued, and loved by other mortal, vulnerable, conscious beings. Societies that protect and cultivate genuine human connection thrive. Those that allow AI to substitute for it produce populations that are materially adequate but psychologically devastated.
- Act now. The window is closing. The institutional lead time for new social systems is measured in decades. The policy decisions made between 2026 and 2035 determine the distribution of AI's benefits and costs for the rest of the century. Every recommendation in this document is less effective if implemented five years late. The most dangerous response to the AI transition is not the wrong policy but no policy -- waiting for certainty in a situation where certainty will arrive too late to act on.