Friday, February 27, 2026

You Are Your Career Pilot. AI Is Your Copilot.

What a 25-year IT career transition taught me about AI, conversations, and standing out in a changed job market

A colleague reached out to me recently. After nearly 25 years at a major international humanitarian organization, his position was being eliminated in a restructuring. He is not a junior person. He is a seasoned enterprise architect with PMP, CISA, and ITIL credentials, and a career that spans infrastructure, cloud migration, ERP programs, and data architecture on a global scale.

His question, between the lines, was the same question a lot of experienced IT professionals are sitting with right now: where do I fit and what are my options?

We had a good conversation. Here is what I shared with him.

The Job Market Has Changed. So Has the Competition.

Here is what is actually happening in hiring right now: it is often AI talking to AI. Candidates are using AI to generate resumes and cover letters. HR systems are using AI to screen and rank them. By the time a human being gets involved, you may already be in or out of the pile.[1]

That creates a trap. If your resume looks like an AI wrote it, and the screener is AI, you will score like everyone else who used AI. You will be generic. You will be invisible.

The way out of that trap is the same principle that has always applied in marketing: long copy sells. Tell a story. Not a list of job duties. Not a summary paragraph full of keywords. A story: here is a problem I faced, here is what I did, here is what happened as a result. What did the organization gain? What did you learn? That kind of resume stands out precisely because most submissions do not have it.

The question to ask yourself is: what can only I say? What is in my experience that no one else can claim? Find that, and build your materials around it.

AI Fluency Is Now a Baseline. Build Your Credentials.

Every organization I follow right now is either adopting AI or scrambling to figure out how to. And the ones further along are discovering a surprise: AI takes more supervision than they expected, not less.

In my own experiments, which now number more than 50 projects since last summer, technical work typically takes more than six iterations to get right. The AI is capable. But it needs direction. It needs someone who can frame the problem, evaluate the output, catch the wrong turns, and keep the project on track.

That is the skill I call orchestration. Not coding. Not prompting tricks. The judgment to know when the AI is going in the right direction and when to redirect it. That kind of judgment comes from experience. You have experience.

But here is the practical reality: you also need to signal AI fluency on your resume, because every job description now asks for it. The good news is that credentials are available, many of them free or nearly free.

Where to start:

Google AI Essentials on Coursera is a solid entry point. Microsoft Azure AI Fundamentals (AI-900) is relevant if your background includes infrastructure or cloud. For those in data architecture, Retrieval-Augmented Generation (RAG) architecture is worth understanding: it is the design pattern of combining your own data with an LLM to produce outputs grounded in your specific context, and it is already standard in enterprise AI deployments. A course in prompt engineering also signals that you understand how to direct AI effectively, which is what orchestration is at its most practical.[2]
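For readers who want to see the RAG pattern rather than just hear it named, here is a minimal sketch. Everything in it is illustrative: real deployments use vector embeddings and an actual LLM call, while this toy version scores document chunks by simple word overlap just to show the shape of the design, which is to retrieve your own relevant material first and then ground the model's prompt in it.

```python
# Minimal sketch of the RAG pattern: retrieve relevant chunks from your own
# documents, then build a prompt grounded in them. Toy scoring by word overlap
# stands in for embedding search; the final string would go to an LLM.

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank document chunks by crude word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical enterprise documents.
docs = [
    "Our ERP migration finished in 2023 and cut costs by 18 percent.",
    "The cafeteria menu changes weekly.",
    "Cloud migration moved 40 data centers to Azure over three years.",
]
prompt = build_prompt("What did the cloud migration involve?", docs)
```

The point for a job seeker is not the code itself but being able to explain the two-step shape: retrieve from your own data, then generate with that context attached.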

Take the class. Get the certificate. Put it on your resume and your LinkedIn profile. These credentials tell a hiring manager that you are not just familiar with AI in the abstract; you have done the work to understand it and apply it.

Treat AI as a Conversation, Not a Search Engine.

This is the insight I find myself repeating most often, to my students, to colleagues, and now to anyone navigating a career transition with AI as a tool.

Most people approach AI the way they approach Google: type a query, get an answer, move on. That approach produces mediocre results, because the AI does not have enough context, and you never give it the chance to develop any.

The better approach is a conversation. Pose the problem. See what comes back. Add context. Make comments. Correct what is wrong. Ask a follow-up. Push on the weak spots. The AI gets more useful as the conversation develops, just as a colleague does on a team when you are working through a problem together.
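The difference between the search approach and the conversation approach can be sketched in a few lines. This is a stand-in, not a real integration: `call_model` here is a hypothetical stub for any chat API, and the point is simply the growing `history` list, which is what lets each later turn build on everything that came before.

```python
# Search-style use sends one message and discards context; conversation
# accumulates it. `call_model` is a hypothetical stub for any chat API.

def call_model(history: list[dict]) -> str:
    # Stub: a real model would generate a reply from the full history.
    return f"(reply informed by {len(history)} prior messages)"

def converse(turns: list[str]) -> list[str]:
    history, replies = [], []
    for turn in turns:
        history.append({"role": "user", "content": turn})
        reply = call_model(history)            # model sees the whole thread
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

replies = converse([
    "Here is my problem...",
    "Good start, but add this context...",
    "That second point is wrong; correct it.",
])
```

Each reply is informed by more context than the last, which is exactly why the third answer in a conversation is usually better than the first answer to a one-shot query.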

I have applied this directly to my own writing. I loaded samples of my own writing into Claude, Gemini and ChatGPT and said: analyze my voice. What patterns do you see? The AIs identified three distinct writing modes I use depending on the context. Now, when I want to draft something, I can say: write this in my voice, mode one (which is how I write blog posts). The output sounds like my blog posts, not like an AI template.

You can do the same with your resume, your LinkedIn profile, your cover letters. Load your own writing. Ask the AI to model your voice. Then use that voice in everything you send out.

The Shadow IT Problem Is a CIO-Level Insight.

My colleague and I also talked about something that I think is worth naming directly for anyone positioning themselves for a leadership role.

Many organizations believe they have AI governance under control. They have approved tools, usage policies, token limits, data handling rules. And yet employees are regularly working around all of it. Loading data into personal accounts. Using consumer AI tools from home. Emailing themselves results. I know this not from research but from direct experience with my family and friends.

Governance by restriction does not work when the tools are this accessible. The better frame, one I borrowed from an old political saying about keeping people inside the tent, is to build a bigger tent. Get the experimentation inside the organization where you can see it, learn from it, and harvest what works. If people are going to experiment with AI regardless, and they are, the organization is better served by knowing about it and talking about it.

If you are going into an interview for a senior IT or architecture role, having a clear point of view on this question sets you apart. Most candidates do not.

A Practical Plan for the Next 90 Days

If I were in transition right now, here is what I would focus on.

First, get AI credentials on your resume. Pick the one most relevant to your background and complete it. Do not wait until you have several. One, done, visible, is better than several in progress.

Second, load your own writing into an AI tool and ask it to identify your voice. Then practice using it. Your materials should sound like you, not like everyone else who used the same AI tool with no customization.

Third, rewrite your resume as a story. Not a duty list. For each significant role, answer: what was the situation, what did I do, what resulted? Specific outcomes, specific scale, specific impact. The AI screener and the human reader both respond to this.

Fourth, upgrade your LinkedIn profile and consider LinkedIn Premium at least during your active search. Pay attention to where you rank against other applicants for target roles. That is real signal.

Fifth, reach out personally to former colleagues, mentors, and advisors. Not a mass message. Individual notes. The human network still matters once you clear the AI screening stage, and you need someone who will say: I know this person, and I stand behind them.

One More Thing

My team at the Data4Good Center is building a tool called Career Lighthouse, designed specifically for professionals in transition, especially from the nonprofit sector. It does skill matching across industries and will eventually flag where a targeted course or credential closes the gap for a near-match role. We are not live yet, but we are close. Stay tuned.

In the meantime, the principle is the same whether you use Lighthouse, LinkedIn, or just a good conversation with a trusted advisor: your 25-year career has more range than your job title suggests. The skills transfer. The question is whether your materials make that legible.

AI is not replacing experienced IT professionals. But experienced IT professionals who know how to work with AI will replace those who do not. You are the pilot. AI is the copilot. Get comfortable in that seat.

What are you finding in your own job search? What is working, and what is not? I would love to hear from you in the comments.

Full disclosure: I used Claude to help draft this post, drawing from a recent advisory conversation, my own AI project notes, and D4G team discussions. I provided the outline and edited the final copy you are reading. Another collaborative use of AI.

 

Notes

[1] See Nino Paoli, “‘Trust is at an all-time low for both job seekers and recruiters’: Hiring platform CEO says talent acquisition is in an ‘AI doom loop’,” Fortune, November 18, 2025.

[2] Recommended starting points: Google AI Essentials; Microsoft Azure AI Fundamentals; Prompt Engineering for ChatGPT, Vanderbilt University; Google Career Certificates — AI. See the Resources section below for direct links.

 

Resources

AI Credentials

RAG Architecture Background

Job Search Tools

Data4Good Center

The postings on this site are my own and don’t necessarily represent positions, strategies or opinions of any of the organizations with which I am associated.

Wednesday, February 4, 2026

The Personas Project: Now on NotebookLM

An Update on the Evolution of AI-Powered Conversations

When I first introduced The Personas Project in October 2024, I shared my vision of using AI to create conversational access to decades of professional experience, teaching materials, and personal stories. The original implementation used AnythingLLM and Ollama—powerful tools that demonstrated the concept worked.

Since then, Google has significantly advanced NotebookLM with features that align perfectly with my "dinner conversation, not the library" philosophy. NotebookLM's Audio Overview podcasts, improved chat interface, and public sharing capabilities offer a more engaging and accessible experience for visitors.

The New Workspace Series

I'm rebuilding the Personas workspaces on NotebookLM, starting with:

  1. "Ask the Professor" (available now) - Teaching materials, Crisis Informatics, and IT Leadership & Management insights
  2. "Ask the CIO" (coming soon) - IT strategy, leadership, and professional experience
  3. "Ask the DR Tech" (coming soon) - Disaster response technology and humanitarian innovation
  4. "Ask Grandpa" (private, invitation-only) - Personal stories and family wisdom

Try "Ask the Professor" on NotebookLM

The new workspace brings together years of teaching materials from courses on Crisis Informatics and IT Leadership & Management. It includes classroom discussions, guest speaker insights, student perspectives, and practical applications of theory.

How to Get the Most from This Resource:

  1. Start with the Audio Overview - Click the Audio Overview button in the Studio panel to hear the podcast. NotebookLM created an engaging 19-minute conversation between two AI hosts who synthesize the key themes. It's like listening in on a dinner conversation about the content—the perfect introduction.
  2. Then explore through chat - After listening to the overview, use the chat to ask questions, dive deeper into topics, or explore connections between concepts.

Starter Questions to Try:

  • What are the core principles of crisis informatics and how do they apply to disaster response?
  • What lessons from IT leadership are most relevant for today's technology leaders?
  • How can "conversations as a way of knowing" transform organizational learning?
  • How can humanitarian disaster response lessons be applied to IT management?
  • How do you build effective communication practices during emergencies?
  • What are the key differences between managing technology and leading with technology?

Access "Ask the Professor"

NotebookLM Workspace: https://notebooklm.google.com/notebook/1217ce8a-eab1-48d5-a6aa-d94681487b76

You'll need a Google account to access the workspace. Viewers can listen to the podcast, ask questions, and explore generated content, but cannot edit the original sources.

Why NotebookLM?

The transition to NotebookLM offers several advantages:

  • Audio Overview podcasts provide an accessible entry point that synthesizes complex material into conversational format
  • Public sharing via simple links makes the workspaces easier to access
  • Improved chat interface creates more natural, contextual conversations
  • Generated artifacts like study guides, FAQs, and briefing documents offer multiple ways to engage with the content

What's Next?

I'll be releasing "Ask the CIO" and "Ask the DR Tech" workspaces in the coming weeks. Each represents a different facet of my professional experience and will serve different audiences—from IT leaders to disaster response practitioners to students.

"Ask Grandpa" will remain private, shared by invitation only with family and close friends who want to explore personal stories and family history.

Your Feedback Matters

Please try the workspace and share your thoughts. What works well? What could be better? Your insights help me refine this approach and determine whether it truly delivers on the vision of creating meaningful, accessible conversations with accumulated experience.

Let the conversation begin—this time, with better tools for the journey.



Saturday, January 17, 2026

Creating an Interactive Legacy

 

The Question I've Been Wrestling With

Most of us have heard about what AI can do. It's in the daily news and in podcasts. The harder question, especially as we get older, is what should we do with it?

Here's one answer I've been exploring: use it to preserve your life story in a form your family can actually engage with. Not a dusty memoir sitting on a shelf. Not another box of papers in the attic. But something interactive, something your grandchildren can listen to, ask questions of, and respond to.

Think of it as an "Ask Grandpa" chatbot, created with tools you already have.

 

The Problem Legacy Projects Face

I've been thinking about how to share my life experiences with my grandchildren. Not just the big moments, but the small ones too. The lessons I've learned. The mistakes I made. The things I wish someone had told me when I was their age.

We all want to pass these things down, right? But here's what usually happens: writing a full autobiography feels overwhelming. Typing and editing is tiring. You're staring at a blank page wondering where to start. And even if you do write it all down, would your grandkids actually read a 300-page book? Probably not.

So I started asking myself: what if there was a simpler way? What if my grandchildren could just ask me questions whenever they wanted, and get answers in my own words?

That's when I discovered you can actually build something like this, using free tools and your smartphone. No technical skills required. Just a few afternoons of your time.

 

What This Actually Looks Like

Imagine your grandchild picks up their phone and asks: "Grandpa, what was your first job like?" or "What do you remember about your first year in high school or college?" And they get an answer that sounds like you, drawn from stories you've already recorded.

It's not science fiction. You can create this today. Let me walk you through exactly what I did.

 

The Building Blocks Approach

If you've read my other posts, you know I like what I call the "building blocks" approach: connecting simple tools together to solve problems. This project uses the same idea. We're going to snap together three simple pieces:

  1. A book with questions to guide you
  2. Your smartphone to record your answers
  3. A free Google tool to turn your stories into something your grandkids can listen to and talk to

 That's it. Each piece is simple. The magic happens when you connect them.


Here's What You Actually Do

First, get yourself a guide. I picked up a copy of "The Book of Myself: A Do-It-Yourself Autobiography in 201 Questions." You can find it on Amazon or at most bookstores.

What I like about this book is it's organized by life stages, your early years, middle years, and later years. Each section asks about your family, friends, school, work, and what was happening in the world around you.

Questions like: "I remember our house, neighborhood and family car in this way." or "My parents felt strongly about passing on these lessons."

You're never staring at a blank page. You're simply answering one thoughtful question at a time. You don't have to answer all 201 questions. I'd say aim for 30 from each section. That's plenty to get started.

Second, just talk. This is the easy part, and it's the breakthrough that makes this whole thing work. Don't write your answers. Talk them.

I used my iPhone's voice-to-text feature. Here's what you do: Open a new document on your phone (I used Microsoft Word, but Notes or Google Docs work fine too). When the keyboard pops up, look for the microphone button. Tap it. Then just start talking.

Read a question from the book and answer it like you're sitting at the kitchen table with your grandchild. Tell the story the way you'd naturally tell it. Your phone captures everything and types it out for you.

(Tip: repeat the question so it's included in the text; this helps with editing, if you so choose.)

This was a revelation for me. Speaking is faster and more natural than typing. Your voice carries personality that writing often flattens. And because I was talking, my answers came out sounding like me, not some stiff, formal autobiography, but actual stories the way I'd tell them in person.

Think of this step as story harvesting, not editing. Imperfect transcription is fine, this is raw material, not a final manuscript.

Third, upload your stories to NotebookLM. This is Google's free tool that does something pretty remarkable. Go to notebooklm.google.com and create a free account (you'll need a Google account, which most people already have).

Create a new project and upload your document, the one with all your transcribed stories. If you wish, you can add letters, emails or other documents. That's it. You don't have to do anything else. The tool reads through your stories and learns about your life.

Here's the part that surprised me: there's a button that says, "Generate audio." When you click it, NotebookLM creates a 10-20 minute podcast where two hosts discuss your life. They talk about themes in your stories, highlight interesting moments, pull out lessons you shared.

Hearing your story reflected back to you is a revelation. It's a bit surreal hearing them discuss your life, but also wonderful.

Fourth, introduce your grandkids to their new chatbot. Now comes the best part, sharing this with your family.

Write your grandchildren an email or text message. This is where the technology becomes relational. Tell them what you've created and why it matters to you.

To share your NotebookLM project, click the three dots in the upper right corner and select "Share." You can generate a link and control who has access, just people with the link, or specific email addresses. Give them the link in your email. Frame it as an invitation, not an assignment.

Show them how to use it: 

  •  "Start by listening to the podcast"
  •   "Then try asking it questions"

I suggest including a few starter questions to get them going:

  • "Tell me about your childhood home and neighborhood"
  • "What was your first job like?"
  • "What did you learn the hard way?"
  • "What do you remember about your grandparents?"

 Then invite them to explore on their own. Ask them to tell you about the experience. What did they learn? What surprised them?

That last part matters. Legacy should be a conversation, not a broadcast.

 

What Makes This Work

The secret is in how these pieces connect. The book gives you structure so you're not staring at a blank page. Talking instead of writing makes it feel natural. NotebookLM takes your stories and makes them searchable and conversational.

None of these tools were designed to work together for this purpose. But when you connect them this way, you end up with something that didn't exist before—a way for your grandchildren to have conversations with your memories.

You've created an on-ramp. A way in.

 

A Few Things I Learned

You don't need to be perfect. Your stories don't need to be polished. In fact, they're better when they're not. The little asides, the way you pause to remember a detail, the way you'd naturally tell a story—that's what makes it authentic.

Start small. Don't try to answer all 201 questions in one sitting. Do five or ten at a time. It's less overwhelming, and you can always add more stories later.

This is ongoing, not finished. We often think of legacy as something static, carved in stone, complete. This flips that idea around. You can add more stories. Your grandchildren can ask new questions, and you can create a supplemental document with new questions and answers as they arise and upload this to NotebookLM. It's curious. It's conversational.

Privacy and security matter. While your stories are only available to those you give access to, they are stored in the cloud and subject to Google's privacy policies. So I would not include financial information, passwords, or other sensitive personal details.

 

Why This Matters

I keep thinking about my grandchildren, the conversations we'll have, the questions they'll ask as they grow older.

With this approach, they'll be able to ask questions even when I'm not around. They can discover what I thought about, what I cared about, what advice I'd give them. And they'll hear it in my own words.

Not "Here's what I did." But "Ask me anything."

You can create the same thing. A week or so of talking to your phone. That's all it takes.

 

What Do You Think?

Have you thought about trying something like this? Are you thinking about creating your own legacy project? What questions would you want your grandchildren to be able to ask you?

I'd love to hear your thoughts. Leave a comment or send me a note. The stories are in you. All you have to do is start talking.


 

Full disclosure: I used ChatGPT 5.2 and Claude Sonnet 4.5 to help draft this post. I provided the outline and I edited the final copy you are reading, another collaborative use of AI.


Note: This post describes my experience creating a personal legacy chatbot using free tools. Your experience may vary, but the basic approach works for anyone willing to spend a week or so sharing their stories.


Wednesday, December 10, 2025

Building Blocks - The Lego Approach

My wife was recently in Taiwan visiting her family for a couple of weeks, and she was facing a long flight home in a few days. She wanted to read a book that had been assigned by our priest at church. Simple enough request, right? Except the book wasn't available as a digital download, wasn't on Kindle, wasn't on any of the usual platforms.

Now, I could have just said "sorry, can't help" and waited to read the physical copy when she got home. But that's not how you solve problems when you work with technology. You start asking: what pieces do I have? What can I combine? What's possible if I string things together?

Let me tell you what happened.

 

The Problem: A Book That Doesn't Want to Be Digital

I found the book on archive.org, a wonderful resource where people have scanned and uploaded books into a lending library. You can sign up, borrow a book digitally, and it has a reading icon that, when you press it, reads the book aloud in a mechanical voice. Not bad for accessibility purposes, provided you have a good internet connection.

But there was no download option. The book was available to read on screen or listen to through their player, but I couldn't send it to my wife. I did buy a physical copy (I'm legitimate here), but that didn't solve the immediate problem: getting her something to read on a 13-hour flight. Scanning the book would have taken a day or two.

So I sat there looking at this reading icon on my iPad and thought: what if I treat this like building blocks?

 

The Chain: Archive → Audio → Otter → Claude → PDF

Here's what I built:

Block 1: Archive.org gave me access to the book with an audio reader: a mechanical voice, but it worked.

Block 2: Otter on my iPhone. I put my iPhone next to my iPad, turned on the speaker, hit play on the archive.org reader, and let Otter record and transcribe everything. My iPad was reading the book aloud, my iPhone was sitting there listening and capturing it all as a transcript. I had to do it in two parts, since Otter Pro has a 4-hour recording limit. The first obstacle was that the reading and transcript were all run together as if it were a stream of consciousness.

Block 3: Claude for editing. I took the Otter transcript and fed it into Claude (ChatGPT didn't work well for this) and said: "Act as an editor. Put in the paragraph breaks, add the chapter titles and subtitles, clean this up."

That part took some iteration. We had to establish editing rules — how to handle dialogue, where to break paragraphs, how to identify chapter markers. The mechanical voice reading meant some punctuation cues were lost, some formatting was ambiguous. To aid in the process, I provided a scan of the Table of Contents so Claude could better identify where chapter breaks happened. So once we got the rules set up, Claude could process it chapter by chapter.


The sidebars in the book gave us trouble at first. But we figured out a way to flag them, based on a list of sidebars I created, and we handled them separately. Claude did a pretty good job with those too.

Block 4: Assembly. I took the edited chapters, assembled them into a single document, converted it to PDF, and sent it to my wife. She loaded the PDF into Kindle to read it on her iPad during the flight.

None of these tools were designed to work together. Archive.org wasn't meant to be an audio source for Otter. Otter wasn't meant to transcribe books. Claude wasn't meant to be a book formatter. But when you put them together in sequence, each one doing what it does well, you get a solution that didn't exist before.

 

What I Like About This Approach

This is what I call the Lego approach: building blocks of technology that you can snap together in ways their creators never imagined.

Think about it: I didn't need a special "convert protected digital library books to readable PDFs" application. I didn't need to learn complex workarounds or break any digital rights management. I just needed to recognize that I had pieces that could connect to each other.

Archive.org → outputs audio

Otter → inputs audio, outputs text

Claude → inputs text, outputs formatted text

PDF converter → inputs formatted text, outputs readable document

Kindle → inputs a readable PDF document, outputs an organized book with bookmarks and annotations

Each block does one thing well. The magic is in recognizing how they can connect.
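The chain above can be sketched as composed functions, where each block's output type is the next block's input type. The function names here are illustrative stand-ins for the real tools (archive.org's reader, Otter, Claude, a PDF converter), not actual integrations; the sketch only shows the shape of the pipeline.

```python
# The block chain as a pipeline of functions. Each stand-in "block" takes
# one input and returns the next stage's input; the magic is the sequence.

def read_aloud(book_id: str) -> str:
    """Archive.org stand-in: book -> audio stream."""
    return f"audio-stream-of-{book_id}"

def transcribe(audio: str) -> str:
    """Otter stand-in: audio -> raw run-together text."""
    return audio.replace("audio-stream-of-", "raw transcript of ")

def apply_editing_rules(text: str) -> str:
    """Claude stand-in: raw text -> formatted text with structure."""
    return "Chapter 1\n\n" + text.capitalize()

def to_pdf(formatted: str) -> str:
    """Converter stand-in: formatted text -> readable document."""
    return f"[PDF]\n{formatted}"

# Snap the blocks together in sequence.
document = to_pdf(apply_editing_rules(transcribe(read_aloud("the-book"))))
```

Seen this way, being your own systems integrator just means checking that each block's output fits the next block's input.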

This is how we've been approaching problems in the Data4Good team too. We don't always have the perfect tool for every job. But we have a growing collection of building blocks — web scrapers, transcription services, AI editors, data analyzers, visualization tools. The question isn't "do we have the exact right tool?" The question is "what combination of tools gets us there?"

 

The AI Editing Part: Rules Matter

I do want to mention one thing about the Claude editing phase, because it taught me something important. 

When I first fed the transcript to ChatGPT, it didn't work well. When I switched to Claude and just said "clean this up," it also struggled. The breakthrough came when we established rules together:

  • How to identify chapter breaks
  • Where to place paragraph breaks
  • How to handle quoted dialogue
  • How to format section headers
  • What to do with sidebars

Once we had those rules articulated, Claude could apply them consistently across all the chapters. It wasn't about the AI being "smart enough"; it was about iterating, with some trial and error, and about being clear enough about what I wanted and giving the AI enough process clarity to get it right.

This connects back to the building blocks idea: the better you understand what each block does well (and what it doesn't), the better you can connect them. Claude is excellent at applying consistent rules to large volumes of text. But it needed me to establish what those rules were, using evidence from the actual transcript we were working with.
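To make the idea of "rules" concrete, here is a toy version of the cleanup, assuming a run-together transcript and a list of chapter titles taken from the table of contents, as described above. The real editing was done conversationally with Claude, not with code; this sketch just shows why explicit rules turn the job into something mechanical and repeatable.

```python
# Toy rule-based transcript cleanup: chapter titles (from a TOC list) get
# their own lines, and every three sentences become a paragraph.
import re

def clean_transcript(raw: str, chapter_titles: list[str]) -> str:
    # Rule 1: put each known chapter title on its own line.
    for title in chapter_titles:
        raw = raw.replace(title, f"\n\n{title}\n\n")
    # Rule 2: group every three sentences into a paragraph.
    paragraphs = []
    for block in raw.split("\n\n"):
        block = block.strip()
        if not block:
            continue
        if block in chapter_titles:
            paragraphs.append(block)
            continue
        sentences = re.split(r"(?<=[.!?])\s+", block)
        for i in range(0, len(sentences), 3):
            paragraphs.append(" ".join(sentences[i:i + 3]))
    return "\n\n".join(paragraphs)

raw = ("Chapter One It was a dark night. The wind howled. Rain fell. "
       "We waited inside. Morning came.")
cleaned = clean_transcript(raw, ["Chapter One"])
```

Real transcripts need fuzzier matching and dialogue handling, which is exactly where the conversational iteration with the AI came in.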

 

It also underscores the conversational approach to problem solving that I advocate. The back-and-forth dialogue with AI is itself a way to iterate toward a solution.

 

Confession 

Let me be completely honest about the timeline and effort involved. Looking back at the file history, the AI editing phase turned out to be the most difficult and time-consuming building block. The project took about two weeks (late October through mid-November) with at least 23 iterations across chapters and components. I went through 6 versions of the editing rules themselves as we refined the process.

Is that faster than manually editing the raw transcripts? Probably not — the ROI isn't there yet if you're measuring pure efficiency. But the learning value was substantial. I now understand how to structure rules for AI editing, what works and what doesn't, and I have a reusable process. The next book would hopefully be faster.


What Do You Think?

This project makes me think about how we approach innovation. We often talk about finding "the right application" or waiting for technology to advance enough to solve our problems. But maybe the more valuable skill is recognizing that you can be your own systems integrator. You can build the chain.

The building blocks are already there. Archive.org exists. Otter exists. Claude exists. PDF converters exist. Kindle exists. None of them were designed to work together for this purpose. But they can.

So here's my question for you: What problem are you facing that doesn't have a ready-made solution? What building blocks do you have access to? What happens if you start connecting them? When was the last time you solved a problem by chaining tools together rather than finding the perfect tool? What makes you hesitate to try unconventional combinations of technologies?

The Lego approach isn't about having all the perfect pieces. It's about recognizing that the pieces you have can snap together in ways you haven't tried yet. When the right tool doesn't exist, look for building blocks you can connect. Each tool should do one thing well; the magic is in the connections. Iteration and rule-setting are part of the building process. Being a systems integrator is a valuable skill in the AI age.


For Further Reading

If you're interested in exploring the building blocks approach further, here are some related stories from my Letters to a Young Manager collection:

  1. "The Lego's Lesson" (Story #9) - A management training exercise using a Lego blocks metaphor that reveals how deadline pressure changes our approach to teamwork and process
  2. "Assemble the Components" (Story #5) - How building reusable program subroutines taught me that "assembly is easier and faster than creating from scratch"
  3. "The Truck" (Story #296) - The story of a boy who solved a stuck truck problem with a brilliantly simple solution: "Just let the air out of the tires"

 

[1] This post was created with AI assistance (Claude), drawing from the author’s documents, meeting transcripts, and lessons learned from the project described. The content was then reviewed, edited, and adapted by the author.

