The Wisdom That Algorithms Can’t Capture: What Three Generations of OFWs Know That ChatGPT Never Will

Lola Remedios worked as a domestic helper in Hong Kong for twenty-three years, starting in 1987 when she was thirty-two. She navigated a foreign city without smartphones, Google Maps, or translation apps. When she faced problems with employers, she couldn’t ask ChatGPT for advice at 3 AM—she walked to Statue Square on Sundays to find other Filipinas who might know what to do. When she felt homesick, she wrote letters that took two weeks to reach Pampanga and another two weeks to receive responses. When she needed to send money home, she stood in remittance lines for hours with cash in her purse, calculating exchange rates in her head.

I asked her last week: “Lola, young OFWs now have AI tools that can answer any question instantly, help them prepare for interviews, negotiate better contracts, even practice their English. What do you think about that?”

She was quiet for a long moment, then said something I haven’t stopped thinking about: “That’s wonderful, anak. But will they learn what to do when the technology fails? When the answer isn’t in any computer? When what they need isn’t information but courage, or forgiveness, or knowing how to recognize good people from bad ones by looking at their eyes?”

Her question reveals something profound about the current moment in OFW history. We’re experiencing unprecedented access to information, guidance, and skill development through AI tools. This access creates genuine advantages that this article series has explored extensively. But in our enthusiasm for technological empowerment, are we overlooking forms of knowledge that can’t be captured by algorithms? Are we accidentally devaluing wisdom that took previous generations of OFWs decades to develop—wisdom they could only acquire through direct experience, suffering, and human connection?

This isn’t an argument against AI tools. This is an exploration of what they can’t do, what they can’t replace, and what we might be losing even as we’re gaining. This is about the irreplaceable value of wisdom earned through lived experience, passed down through human relationships, and embodied in ways that no language model can capture.

What Lola Remedios Knows That ChatGPT Doesn’t

When I asked Lola Remedios to tell me what she learned in twenty-three years that no AI could teach, she shared six forms of knowledge. It took me three hours of conversation to realize they were only scratching the surface of her accumulated wisdom.

Reading People Beyond Their Words

“In my third employer’s home,” she said, “there was a teenage daughter who always smiled at me and said thank you. Very polite. But I knew she resented me. How did I know? The way she opened doors just a little too forcefully when leaving rooms I was in. The way her thank-yous had a tiny pause before them, like she had to remind herself to say it. The way she never looked at me when speaking, only near me. When I told the mother I thought the daughter was uncomfortable with my presence, the mother was shocked. She said her daughter had never said anything negative. Two months later, the daughter had an emotional breakdown and admitted she felt guilty having a maid when her friends’ families didn’t. She felt I was a living reminder of her family’s privilege that she didn’t know how to process.”

“Can your AI tool,” Lola asked me, “tell you when someone’s politeness is genuine versus obligatory? Can it teach you to see resentment underneath courtesy? Can it prepare you for reading the emotional undercurrents that determine whether you’ll be treated well or poorly in a home?”

She’s right. AI can tell you that workplace tensions exist, can role-play conflict scenarios, can explain power dynamics theoretically. But the embodied knowledge of reading subtle behavioral cues, interpreting micro-expressions, sensing emotional atmospheres—this comes only from extended experience observing humans in specific cultural contexts. You can’t prompt-engineer your way to this kind of perceptual wisdom.

Knowing When Rules Should Be Broken

“My fourth employer,” Lola continued, “had a rule: I was not supposed to use any of the family’s things, including the washing machine, for my personal laundry. I should hand-wash my clothes in my bathroom. This was in the contract I signed. One day, the employer’s elderly mother who lived with them had a stroke. The family was at the hospital. I was alone with the recovering woman. She soiled her bed. I needed to wash the sheets immediately for hygiene. The washing machine was right there. But the rule said I couldn’t use it without permission, and the family wasn’t answering phones—they were in emergency mode.”

“What did I do? I used the machine. When the family returned and saw clean sheets, they thanked me. I never mentioned that I’d broken the rule. Sometimes, anak, doing the right thing means breaking the rule, but you need to know which rules and when. If I’d asked your AI tool ‘Should I break my employment contract rule about not using the washing machine?’ what would it say? It would probably tell me to follow the contract, or to try harder to reach my employers. But I knew—through experience, through understanding this specific family, through common sense that can’t be programmed—that in this situation, breaking the rule was exactly right.”

This is practical wisdom that transcends algorithmic decision-making. AI tools give you rules, policies, best practices, typical approaches. But judgment about when circumstances warrant deviation from rules? When following the letter of the law would violate its spirit? When being technically wrong is morally right? That requires contextual wisdom developed through navigating countless ambiguous situations where principles conflict and no answer is simply correct.

Surviving What Can’t Be Fixed

“There were times,” Lola said quietly, “when I was treated poorly. Not illegal abuse that POEA would investigate, but daily small humiliations. Being spoken to rudely. Being given the least desirable food. Being made to feel I was less than human, just a service machine. I couldn’t leave—I had signed a two-year contract, my family depended on my income, and walking away would have created debt I couldn’t repay to the agency. So I survived. Each day, I chose to see my employer’s rudeness as their problem, not my worth. I chose to be grateful for food even if it was plain. I chose to maintain my human dignity internally even when externally I was treated as furniture.”

“Your AI tool can tell someone facing mistreatment what their legal rights are, what actions they should take, what resources exist. That’s valuable. But can it teach them how to preserve their sense of self-worth when they can’t escape circumstances that daily attack it? Can it show them how to metabolize anger so it doesn’t poison them? Can it guide them in finding meaning in suffering that seems meaningless? This is survival wisdom that only comes from surviving.”

She’s identifying something crucial: the difference between problem-solving knowledge (which AI excels at) and endurance wisdom (which requires human experience of suffering and transcendence). AI can help you analyze situations, plan exits, advocate for rights. But the psychological and spiritual capacity to endure what can’t be immediately changed—to remain yourself when circumstances try to diminish you—this is earned through lived experience of adversity, often passed down through stories and modeling from others who’ve endured.

Recognizing Good Fortune Before It Announces Itself

“In my fifth employer’s household,” Lola remembered, “there was a moment in the initial interview when the father asked me about my children. I told him my youngest was ten. He asked what she liked to do. Not ‘how old is she’ for contract purposes—he wanted to know what she liked. That small question told me more about whether I would be treated humanly than any contract clause could. I took that job even though another family offered slightly better salary. I stayed with them for eight years. They treated me like family. They helped send my daughter to college.”

“Young OFWs now,” she observed, “have so much information about salary comparisons, contract terms, negotiation tactics. But do they know how to recognize the small signs that distinguish employers who will treat them well from those who won’t? Not the illegal employers who will abuse them—those have obvious red flags. I mean the difference between employers who view you as a person and those who view you as a service they purchased. That recognition comes from experience evaluating humans across contexts, learning which small signals predict larger patterns.”

This is anticipatory wisdom—the ability to read situations and predict their likely trajectories based on subtle initial indicators. AI can provide checklists of green and red flags. But the intuitive pattern recognition that experienced OFWs develop, the gut feelings that turn out to be accurate, the ability to sense character through brief interactions—this emerges from processing thousands of human encounters over years and developing predictive models that operate below conscious awareness.

Making Peace With Impossible Choices

“The hardest part of overseas work,” Lola said, tears suddenly visible, “wasn’t the work itself. It was missing my children’s lives. My youngest daughter’s first menstruation—I wasn’t there. My son’s high school graduation—I watched blurry video weeks later. My mother’s final illness—I couldn’t come home in time to say goodbye. Every OFW knows this pain: You sacrifice presence for provision. You trade moments you can never reclaim for money that allows moments to exist at all.”

“The question isn’t whether to feel guilty—you will. The question is how to live with that guilt without letting it destroy you. How to accept that there was no good choice, only less-bad options. How to forgive yourself for sacrificing one form of love to provide another form. Your AI tool can intellectually explain that these are structural problems created by global inequality, that you’re doing the best you can, that your sacrifice is meaningful. But can it teach you how to actually carry this burden? How to weep and then wake up the next morning and continue? How to live with permanent ache while still finding joy? This is wisdom I learned from other OFWs who survived similar losses, who showed me by their example how to bear the unbearable.”

What she’s describing is existential wisdom—how to live with ambiguity, loss, and moral complexity without collapsing into despair or bitterness. AI can provide therapy frameworks, emotional processing tools, cognitive reframing techniques. But the deep acceptance of tragedy, the capacity to hold contradictory truths simultaneously (I was right to leave / I should have stayed), the modeling of how to remain open-hearted despite wounds—this wisdom is transmitted human to human, often through witnessing others’ dignity in suffering.

Knowing What Matters Most

“After twenty-three years,” Lola reflected, “I came home. I had saved money. I had sent all three children through college. I had missed fifteen years of daily life with them. Was it worth it? Young people want simple answers. They want AI to calculate: ‘Was Remedios’s sacrifice justified by the outcomes it produced?’ But life doesn’t calculate that way. It was worth it AND it cost too much. I gained opportunities for my children AND I lost irreplaceable time. Both truths exist together.”

“What I know now, what I wish I’d understood earlier: The money matters, but not as much as I thought. The presence matters, but not quite as much as I feared. The real question is: What did I learn about being human? What did I discover about my own strength? What did I develop in my capacity to love, to endure, to forgive, to hope? These questions have no numerical answers. Your AI tool can optimize for salary, career advancement, skill development. But can it help you understand what ultimately matters in a human life? This wisdom comes from living that life fully, making choices, experiencing consequences, and reflecting deeply on the whole arc.”

This is philosophical wisdom about meaning, value, and what constitutes a life well-lived. AI can discuss these topics intellectually, can quote philosophers, can present different ethical frameworks. But the personal clarity about what matters to you, earned through decades of choices and their consequences, integrated through reflection and conversation with others who’ve walked similar paths—this is irreducibly human wisdom that algorithms can inform but never replace.

What My Tito Bobby Learned That LinkedIn Never Will

Tito Bobby, now fifty-eight, worked as a seafarer for thirty years—merchant marine, cargo ships, container vessels crossing oceans for months at a time. He’s from a different OFW generation than Lola Remedios, with different challenges and different wisdom.

When I told him about AI tools helping current OFWs with career management, he said: “That’s good. But here’s what I learned that no career optimization algorithm will teach you.”

The Wisdom of Doing Work Nobody Notices

“On ships,” Tito Bobby explained, “there’s work everyone sees—the captain navigating, the chief engineer managing the engine room, the officers on watch. Then there’s work nobody notices until it’s not done: maintenance, cleaning, checking, preventive repair. The unsexy work. I spent my first decade wanting recognition, wanting visible impact. I was frustrated when my best work—preventing problems before they occurred—went completely unnoticed.”

“Eventually I learned something: The most valuable work is often invisible. If I did my job perfectly, nothing dramatic happened. No emergency, no breakdown, no crisis. Just smooth operation that everyone took for granted. Learning to take pride in work nobody notices, to find satisfaction in preventing problems that never manifest, to be okay with your contributions being invisible—this is mature professional wisdom that no AI-optimized career advice teaches. AI tells you to document achievements, demonstrate impact, make your value visible. Sometimes the deepest value is precisely in keeping things running so smoothly that nobody realizes anything is being done.”

This is wisdom about intrinsic versus extrinsic motivation, about finding meaning in process rather than recognition, about professional maturity that measures success by what didn’t go wrong rather than by what impressively went right. It contradicts the personal branding, achievement documentation, visibility optimization that modern career advice (including AI-generated advice) emphasizes. Both approaches have value, but the wisdom of invisible excellence is counter-cultural and can only be appreciated through having experienced the frustration of unrecognized contribution and choosing purpose over recognition.

Reading the Sea’s Moods

“Weather forecasting technology has improved dramatically,” Tito Bobby noted. “We had satellite data, radar, weather routing systems. Very sophisticated. But experienced sailors still developed what we called ‘sea sense’—intuitive reading of ocean conditions beyond what instruments measured. The way waves moved, the color of the sky, the behavior of birds, the smell of the air, the temperature changes. These subtle indicators, processed through years of observation, sometimes warned us earlier than instruments about changing conditions.”

“One night,” he continued, “instruments said we had eight hours before a storm system reached us. But the sea felt wrong—specific wrongness I can’t fully articulate even now. I recommended to the captain we alter course immediately rather than waiting for the forecast storm arrival time. He trusted my experience. We changed course. The storm intensified faster than models predicted and arrived three hours early. If we’d followed instrument timing, we’d have been caught in dangerous conditions. My ‘sea sense’ was accumulated pattern recognition from experiencing hundreds of ocean crossings, learning to perceive subtleties instruments don’t capture.”

“Can AI tools,” he asked, “teach someone to develop this kind of intuitive pattern recognition? Can they replace the embodied knowledge of reading environments through thousands of direct experiences? They can present information, but they can’t transfer the accumulated instinct that comes from living in specific contexts over extended time.”

This is embodied expertise—knowledge stored not just cognitively but in perceptual capacities developed through sustained engagement with particular environments. While AI can analyze patterns in data, the human capacity to perceive subtle environmental cues and integrate them intuitively operates through mechanisms different from algorithmic processing. This isn’t mystical—it’s expertise developed through repetition that creates perceptual sensitivity AI can inform but not replicate.

The Brotherhood That Saves Lives

“Most dangerous situations at sea,” Tito Bobby said seriously, “are resolved not by technical knowledge but by trust between crew members. When an emergency happens, you need to know—instantly, without thinking—that your crewmates will do their jobs, will watch your back, will make good decisions under pressure. This trust isn’t built through training manuals or simulation. It’s built through months together in confined spaces, shared meals, shared hardships, working through problems together, seeing how someone reacts when they’re exhausted or scared or frustrated.”

“Before a new crew member can access critical systems, they need technical certifications—those are verified by documentation. But before I’d trust a new crew member in an emergency, I needed months of observation. Did they stay calm under pressure? Did they follow through on small commitments? Did they own their mistakes? Did they help others without being asked? These character questions determined whether I’d trust them with my life. AI can verify credentials, can check training certifications, can review work history. Can it evaluate whether someone is trustworthy in crisis? Can it replace the human process of building relationships through shared experience over time?”

This is relational wisdom—understanding that some of our most critical work happens through relationships built on trust that takes time to develop. AI tools can facilitate connection, can help schedule calls, can draft communications. But they can’t shortcut the temporal process of building trust through repeated interactions, shared challenges, and demonstrated character over time. The wisdom of recognizing trustworthiness, of building teams that function under pressure, of knowing the difference between technical competence and reliable character—this requires human engagement AI can support but not replace.

Remembering You’ll Eventually Go Home

“The mistake many OFWs make,” Tito Bobby reflected, “is optimizing entirely for overseas success while neglecting relationships and plans for eventual return. They think ‘I’ll work overseas for twenty years, save money, then figure out what’s next.’ But when they return, they’re strangers in their own families and communities. They’re wealthy compared to neighbors but isolated. They have no current skills relevant to the Philippine job market. They’ve optimized for a life phase that ended.”

“Wisdom is remembering that overseas work is temporary—even if it’s decades, it eventually ends. This means maintaining relationships back home even when it’s difficult. This means preparing mentally and financially for return. This means thinking about your entire life arc, not just the current chapter. AI tools help optimize current career phase. But do they remind you that this phase will end and you need to prepare for what comes after? Do they help you think about the whole life, not just the current optimization problem?”

This is longitudinal wisdom—thinking beyond immediate optimization to consider long-term consequences, maintaining perspective about temporary versus permanent, preparing for transitions even while focused on current phase. AI tools are inherently present-focused, optimizing current situations without naturally incorporating decades-long life arc thinking. The wisdom of maintaining connections that will matter after current circumstances end, of investing in eventual transitions even when they seem distant, of remembering your temporary work doesn’t define your permanent identity—this requires human capacity to hold multiple time horizons simultaneously and make choices that sacrifice current optimization for long-term wellbeing.

What My Ate Grace Understands That Google Translate Doesn’t

Ate Grace, forty-one, has worked as a caregiver in Taiwan for twelve years. Her wisdom represents the current generation of OFWs who have some technology but not yet AI tools.

The Language Beneath Language

“I learned Mandarin,” Ate Grace explained, “but I learned something more important than vocabulary and grammar. I learned how my Taiwanese employer’s mother communicates through indirection. When she says ‘I’m not hungry’ in a particular tone, she means ‘I want to eat but don’t want to impose.’ When she says ‘Whatever you think is fine’ about an activity option, she means ‘I have a preference but want you to choose it without me stating it directly.’ Translation apps can convert her Chinese words to English words, but they can’t translate her actual meaning, which requires understanding her specific personality, her cultural communication patterns, her relationship to me, and the context of our current situation.”

“Young caregivers now,” she continued, “have AI tools that can translate in real-time. Wonderful. But will they learn the patient’s specific language—not Mandarin or Hokkien, but Mrs. Chen’s unique way of expressing needs, preferences, discomforts, and fears? Will they develop the intimacy required to hear what’s being said underneath what’s being said? This is the heart of caregiving. Technology can facilitate communication, but it can’t replace the relationship-building that allows you to truly understand another person.”

This is interpersonal wisdom specific to individuals—learning the unique communication patterns, preferences, fears, and needs of specific people you care for. While AI can provide general guidance about dementia communication, caregiver best practices, or cultural communication patterns, it cannot learn the highly specific patterns of individual human beings who may communicate in ways that contradict general patterns. This intimate knowledge comes only through extended relationship with specific others, paying attention to their particularities rather than treating them as exemplars of general categories.

Caring for the Family, Not Just the Patient

“I’m employed to care for the elderly mother,” Ate Grace said, “but I quickly learned I’m actually caring for the entire family system. The mother’s adult children feel guilty they can’t provide care personally. The grandchildren feel confused about their grandmother’s decline. The son-in-law resents the disruption to household routines. Everyone is stressed, grieving, and sometimes taking it out on each other or me. My job isn’t just helping the mother with physical needs. It’s helping the family navigate this difficult transition without destroying their relationships.”

“Sometimes,” she continued, “I spend more emotional energy helping the daughter process her grief than I do on physical caregiving tasks. I’ve learned to recognize when family members need to vent, when they need reassurance, when they need practical advice, when they need someone to simply witness their suffering. AI tools can tell me about family dynamics in general, can explain grief processes theoretically. But can they help me read this specific family’s emotional needs in real-time and respond appropriately? Can they teach me when to speak and when to be silent? When to suggest solutions and when to just listen? This is relational intuition developed through caring for multiple families across years, learning from each situation, becoming sensitive to emotional atmospheres and needs.”

This is systemic wisdom—understanding that individuals exist within relationship systems and that caring effectively for one person often means attending to the whole system. While AI can provide family systems theory, it can’t give you the practiced sensitivity to read emotional dynamics in real-time, the timing sense to know when to intervene versus step back, the intuitive understanding of which family member needs what type of support at which moment. This wisdom comes from sustained engagement with families in crisis, learning through trial and error how to help systems navigate stress without increasing it.

Maintaining Yourself While Giving Yourself

“The hardest part of caregiving,” Ate Grace admitted, “is maintaining your own boundaries, your own energy, your own life while working in a job that demands you give yourself constantly. Patients need you at odd hours. Emergencies don’t respect your scheduled days off. Families expect you to always be available, always patient, always selfless. If you give everything, you burn out. If you protect yourself too rigidly, you become a poor caregiver. Learning where this balance point is—specifically for you, in your specific situation, with your specific patient and family—this is wisdom that takes years to develop.”

“I’ve learned,” she said, “to recognize my own early burnout signs before they become crisis: I start resenting small requests. I become impatient. I have trouble sleeping. When I notice these signs, I know I need to adjust something—maybe request a schedule modification, maybe have a conversation about expectations, maybe do something to refill my own reserves. AI tools can tell you generally about burnout prevention, self-care importance, boundary-setting. Can they help you notice your specific early warning signs? Can they help you navigate the guilt of setting boundaries with people who genuinely need you? Can they teach you how to say no lovingly to requests you can’t fulfill without destroying yourself?”

This is self-knowledge wisdom—learning your own patterns, limits, needs, and warning signs through extended self-observation. While AI can provide general guidance about burnout and self-care, the specific contours of your psychological and physical limits, the particular situations that drain versus energize you, the unique ways your body and mind signal distress—these require sustained self-attention and experimentation. Furthermore, the moral wisdom of balancing care for others with care for self, of maintaining sustainable service rather than martyrdom, of navigating the guilt of self-protection—this requires human modeling, moral reasoning, and lived experience of the costs of various approaches.

The Integration: AI as Amplifier, Not Replacement

These stories from three generations of OFWs reveal forms of wisdom that AI tools cannot generate: embodied expertise developed through sustained engagement with specific environments, relational wisdom built through years of human interaction, moral wisdom earned through navigating impossible choices, existential wisdom developed through suffering and transcendence, intuitive pattern recognition below conscious awareness, and practical judgment about when principles conflict.

This doesn’t mean AI tools aren’t valuable—they demonstrably are, as other articles in this series have explored. It means we need to understand AI as amplification technology, not replacement technology. AI tools amplify your existing capacities, making you more efficient, effective, and informed. But they don’t replace the development of wisdom through lived experience, human relationships, and sustained engagement with reality.

The Risk of Information Without Wisdom

The danger in unlimited access to AI-generated information and guidance is believing information equals wisdom. You can get instant answers to questions like “How do I negotiate a salary increase?” or “What are cultural norms in Saudi Arabia?” But knowing the answer intellectually is different from having the judgment to apply it appropriately, the courage to act on it, the sensitivity to read situations accurately, and the wisdom to recognize when general advice doesn’t apply to your specific circumstances.

Young OFWs with AI tools may have more information than Lola Remedios ever did. They may prepare more efficiently, optimize more systematically, learn more quickly. But will they develop the depths of human judgment that Lola earned through twenty-three years of direct experience? Will they build the character that endures what cannot be changed? Will they cultivate the relational sensitivity that reads people beyond their words?

The Risk of Efficiency Without Reflection

AI tools bias toward efficiency and optimization—getting better results faster with less effort. This efficiency is genuinely valuable. But some forms of wisdom require inefficiency. They require taking time to process experiences deeply, to sit with discomfort rather than immediately solving it, to let understanding emerge gradually rather than arriving instantly.

Lola Remedios’s wisdom about surviving with dignity came not from efficient problem-solving but from enduring difficulty over extended time and reflecting deeply on that experience. Tito Bobby’s sea sense came from thousands of hours attentively observing oceans, accumulating subtle perceptions that never crystallized into efficient tips. Ate Grace’s relational intuition came from making mistakes with families, feeling the consequences, adjusting her approach, and gradually developing sensitivity.

These forms of wisdom require time, direct experience, and reflective processing that AI tools might actually undermine by making everything too efficient, providing answers before you’ve struggled with questions long enough for deep learning.

The Integration Model: AI and Human Wisdom Together

The optimal approach isn’t choosing between AI tools and human wisdom but integrating both. Use AI for what it does well: information access, skill practice, preparation efficiency, optimization guidance, and pattern analysis. Seek human wisdom for what only humans provide: embodied expertise, relational depth, moral guidance through ambiguity, existential support through suffering, and judgment about particular situations.

Specifically:

Use AI to prepare for job interviews, but seek advice from experienced OFWs about reading potential employers’ character beyond what they say in interviews.

Use AI to learn destination language, but build relationships with locals and other OFWs who can teach you the communication beneath language.

Use AI to analyze employment contracts, but talk to OFWs who’ve worked in similar situations about what really matters versus what sounds good on paper.

Use AI to optimize your finances, but learn from elders about what ultimately matters in life and what you’ll regret optimizing for.

Use AI to practice professional skills, but apprentice yourself to experienced workers who embody excellence you aspire to, learning from their example as much as their advice.

Use AI to handle routine questions efficiently, but invest the time you save into building deep relationships with mentors, peers, and community members who provide wisdom technology cannot.

Teaching Wisdom to the Next Generation

If you’re a current OFW using AI tools effectively, you have responsibility to help younger workers understand what technology can and cannot do. Share not just technical efficiency tips but hard-earned wisdom from your own experience. Model judgment, character, relational sensitivity, and depth that AI cannot teach.

If you’re an experienced OFW from generations before AI, your wisdom is more valuable than ever, not less. Younger workers need what you know—but you need to find ways to communicate it that connect with their technology-mediated experience. Don’t dismiss AI tools out of hand; recognize their value while clearly articulating what they miss.

If you’re a young OFW enthusiastic about AI capabilities, receive it with appropriate humility. The tools are powerful, but they’re amplifiers of wisdom you still need to develop through direct experience, human relationships, and sustained reflection. Don’t let information abundance create the illusion that you don’t need elders, mentors, or the slow accumulation of judgment through living.

The Both-And Path Forward

The emerging reality isn’t AI versus human wisdom—it’s AI and human wisdom integrated intelligently. The workers who will thrive most fully are those who master AI tools while also cultivating irreplaceable human capacities that technology cannot replicate.

This means:

Developing technical facility with AI tools while also building deep relationships with mentors and peers.

Using AI for efficient preparation while also investing in experiences that build embodied wisdom.

Leveraging AI for optimization while also maintaining reflective practices that develop judgment.

Employing AI for information access while also seeking elder wisdom about meaning and purpose.

Applying AI to solve problems while also developing character through enduring difficulties.

The mistake isn’t using AI tools—avoiding them puts you at a competitive disadvantage. The mistake is believing AI tools are sufficient—that prevents the development of the depth required for fully human life and work.

Lola Remedios, when I finished our conversation, said something that I think captures the integration perfectly: “These AI tools sound wonderful, anak. I wish I’d had them. They would have made many things easier. But don’t let ease prevent you from becoming wise. Use the tools to work more efficiently, then invest the time you save in becoming more human. Learn from technology, but learn from people too. Especially learn from people who struggled without technology and can teach you how to be strong in yourself, not just efficient at tasks.”

That’s the wisdom algorithms can’t capture but humans can pass forward: The call to develop not just competence but character. Not just information but judgment. Not just efficiency but depth. Not just success but meaning.

The AI tools will make you more competitive as an OFW. The human wisdom will make you more complete as a person. You need both.

What will you learn from Lola Remedios that ChatGPT can never teach you?
