Wednesday, December 10, 2025

Building Blocks - The Lego Approach

My wife was recently in Taiwan visiting her family for a couple of weeks, and she was facing a long flight home in a few days. She wanted to read a book that had been assigned by our priest at church. Simple enough request, right? Except the book wasn't available as a digital download, wasn't on Kindle, wasn't on any of the usual platforms.

Now, I could have just said "sorry, can't help" and waited to read the physical copy when she got home. But that's not how you solve problems when you work with technology. You start asking: what pieces do I have? What can I combine? What's possible if I string things together?

Let me tell you what happened.

 

The Problem: A Book That Doesn't Want to Be Digital

I found the book on archive.org, a wonderful resource where people have scanned and uploaded books into a lending library. You can sign up, borrow a book digitally, and it has this reading icon that, when you press it, reads the book aloud in a mechanical voice. Not bad for accessibility purposes, provided you have a good internet connection.

But there was no download option. The book was available to read on screen or listen to through their player, but I couldn't send it to my wife. I did buy a physical copy (I'm legitimate here), but that didn't solve the immediate problem: getting her something to read on a 13-hour flight. Scanning the book would have taken a day or two.

So I sat there looking at this reading icon on my iPad and thought: what if I treat this like building blocks?

 

The Chain: Archive → Audio → Otter → Claude → PDF

Here's what I built:

Block 1: Archive.org gave me access to the book with an audio reader. Mechanical voice, but it worked.

Block 2: Otter on my iPhone. I put my iPhone next to my iPad, turned on the speaker, hit play on the archive.org reader, and let Otter record and transcribe everything. My iPad was reading the book aloud; my iPhone was sitting there listening and capturing it all as a transcript. I had to do it in two parts, since Otter Pro has a 4-hour recording limit. The first obstacle was that the transcript came out all run together, like a stream of consciousness.

Block 3: Claude for editing. I took the Otter transcript and fed it into Claude (ChatGPT didn't work well for this) and said: "Act as an editor. Put in the paragraph breaks, add the chapter titles and subtitles, clean this up."

That part took some iteration. We had to establish editing rules — how to handle dialogue, where to break paragraphs, how to identify chapter markers. The mechanical voice meant some punctuation cues were lost and some formatting was ambiguous. To aid in the process, I provided a scan of the Table of Contents so Claude could better identify where chapter breaks happened. Once we got the rules set up, Claude could process the book chapter by chapter.


The sidebars in the book gave us trouble at first. But we figured out a way to flag them, based on a list of sidebars I created, and handled them separately. Claude did a pretty good job with those too.
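
I did all of this interactively in the Claude chat window, but for readers who want to picture the step as a script, here is a minimal sketch using the Anthropic Python SDK. The editing rules, model name, and file layout below are illustrative assumptions, not the actual prompts we settled on.

```python
# A hypothetical sketch of chapter-by-chapter cleanup via the Anthropic API.
# I actually did this interactively in the chat interface; the rules text,
# model name, and file names below are illustrative assumptions.
from pathlib import Path
import anthropic

EDITING_RULES = """
Act as an editor. Apply these rules:
1. Insert paragraph breaks at natural pauses and topic shifts.
2. Restore chapter titles and subtitles using the Table of Contents provided.
3. Put quoted dialogue on its own lines with proper punctuation.
4. Flag any passage from the sidebar list and format it as a separate block.
Do not add, remove, or summarize content.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def edit_chapter(raw_text: str, toc: str, sidebar_list: str) -> str:
    """Send one chapter of raw transcript to the model with the editing rules."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # model choice is an assumption
        max_tokens=8000,
        system=EDITING_RULES,
        messages=[{
            "role": "user",
            "content": f"Table of Contents:\n{toc}\n\nSidebar list:\n{sidebar_list}\n\nRaw transcript:\n{raw_text}",
        }],
    )
    return response.content[0].text

# Process the transcript one chapter at a time, the way we did in the chat.
toc = Path("table_of_contents.txt").read_text()
sidebars = Path("sidebar_list.txt").read_text()
Path("chapters_edited").mkdir(exist_ok=True)
for chapter_file in sorted(Path("chapters_raw").glob("chapter_*.txt")):
    edited = edit_chapter(chapter_file.read_text(), toc, sidebars)
    Path("chapters_edited", chapter_file.name).write_text(edited)
```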

Block 4: Assembly. I took the edited chapters, assembled them into a single document, converted it to PDF, and sent it to my wife. She loaded the PDF into Kindle to read it on her iPad during the flight.
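
The assembly step is the easiest one to picture as a script. Here is a rough sketch, assuming the edited chapters are plain-text files and that pandoc (with a LaTeX engine) is installed; in practice, any ordinary PDF converter will do.

```python
# A rough sketch of the assembly step: concatenate the edited chapters and
# hand the result to pandoc for PDF conversion. File names are assumptions;
# pandoc needs a LaTeX engine installed to produce PDF output.
import subprocess
from pathlib import Path

chapters = sorted(Path("chapters_edited").glob("chapter_*.txt"))
book_text = "\n\n".join(ch.read_text() for ch in chapters)
Path("book.md").write_text(book_text)

# Equivalent to running: pandoc book.md -o book.pdf
subprocess.run(["pandoc", "book.md", "-o", "book.pdf"], check=True)
```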

None of these tools were designed to work together. Archive.org wasn't meant to be an audio source for Otter. Otter wasn't meant to transcribe books. Claude wasn't meant to be a book formatter. But when you put them together in sequence, each one doing what it does well, you get a solution that didn't exist before.

 

What I Like About This Approach

This is what I call the Lego approach: building blocks of technology that you can snap together in ways their creators never imagined.

Think about it: I didn't need a special "convert protected digital library books to readable PDFs" application. I didn't need to learn complex workarounds or break any digital rights management. I just needed to recognize that I had pieces that could connect to each other.

  • Archive.org → outputs audio
  • Otter → inputs audio, outputs text
  • Claude → inputs text, outputs formatted text
  • PDF converter → inputs formatted text, outputs a readable document
  • Kindle → inputs a readable PDF document, outputs an organized book with bookmarks and annotations

Each block does one thing well. The magic is in recognizing how they can connect.
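
If it helps to see the chain written out, here is the whole thing as a set of Python function signatures. These are illustrative stubs of my own, not real integrations with the apps named; the actual "blocks" were tools I operated by hand, but the composition is the point.

```python
# The Lego chain as function signatures: each block takes one kind of input
# and produces one kind of output, and the solution is just composition.
# These are illustrative stubs, not working integrations with the apps named.

def archive_read_aloud(book_url: str) -> "AudioStream":
    """Archive.org: borrowed book in, spoken audio out."""
    ...

def otter_transcribe(audio: "AudioStream") -> str:
    """Otter: audio in, raw run-together transcript out."""
    ...

def claude_edit(raw_transcript: str, rules: str) -> str:
    """Claude: raw text plus editing rules in, formatted chapters out."""
    ...

def to_pdf(formatted_text: str) -> bytes:
    """PDF converter: formatted text in, readable document out."""
    ...

# The "integration" is nothing more than feeding each block's output
# into the next block's input.
def build_book(book_url: str, rules: str) -> bytes:
    return to_pdf(claude_edit(otter_transcribe(archive_read_aloud(book_url)), rules))
```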

This is how we've been approaching problems in the Data4Good team too. We don't always have the perfect tool for every job. But we have a growing collection of building blocks — web scrapers, transcription services, AI editors, data analyzers, visualization tools. The question isn't "do we have the exact right tool?" The question is "what combination of tools gets us there?"

 

The AI Editing Part: Rules Matter

I do want to mention one thing about the Claude editing phase, because it taught me something important. 

When I first fed the transcript to ChatGPT, it didn't work well. When I switched to Claude and just said "clean this up," it also struggled. The breakthrough came when we established rules together:

  • How to identify chapter breaks
  • Where to place paragraph breaks
  • How to handle quoted dialogue
  • How to format section headers
  • What to do with sidebars

Once we had those rules articulated, Claude could apply them consistently across all the chapters. It wasn't about the AI being "smart enough"; it was about iterating, with some trial and error, until I was clear enough about what I wanted and had given the AI enough process clarity to get it right.

This connects back to the building blocks idea: the better you understand what each block does well (and what it doesn't), the better you can connect them. Claude is excellent at applying consistent rules to large volumes of text. But it needed me to establish what those rules were, using evidence from the actual transcript we were working with.

 

It also underscores the conversational approach to problem solving that I advocate. The back-and-forth dialogue with AI is itself a way of iterating toward a solution, which is why I so often approach AI as a conversation.

 

Confession 

Let me be completely honest about the timeline and effort involved. Looking back at the file history, the AI editing phase turned out to be the most difficult and time-consuming building block. The project took about two weeks (late October through mid-November) with at least 23 iterations across chapters and components. I went through 6 versions of the editing rules themselves as we refined the process.

Is that faster than manually editing the raw transcripts? Probably not — the ROI isn't there yet if you're measuring pure efficiency. But the learning value was substantial. I now understand how to structure rules for AI editing, what works and what doesn't, and I have a reusable process. The first book took 13 days with 23 iterations. The next one would hopefully be faster.


What Do You Think?

This project makes me think about how we approach innovation. We often talk about finding "the right application" or waiting for technology to advance enough to solve our problems. But maybe the more valuable skill is recognizing that you can be your own systems integrator. You can build the chain.

The building blocks are already there. Archive.org exists. Otter exists. Claude exists. PDF converters exist. Kindle exists. None of them were designed to work together for this purpose. But they can.

So here's my question for you: What problem are you facing that doesn't have a ready-made solution? What building blocks do you have access to? What happens if you start connecting them? When was the last time you solved a problem by chaining tools together rather than finding the perfect tool? What makes you hesitate to try unconventional combinations of technologies?

The Lego approach isn't about having all the perfect pieces. It's about recognizing that the pieces you have can snap together in ways you haven't tried yet. When the right tool doesn't exist, look for building blocks you can connect. Each tool should do one thing well; the magic is in the connections. Iteration and rule-setting are part of the building process. Being a systems integrator is a valuable skill in the AI age.


For Further Reading

If you're interested in exploring the building blocks approach further, here are some related stories from my Letters to a Young Manager collection:

  1. "The Lego's Lesson" (Story #9) - A management training exercise using Lego blocks metaphor that reveals how deadline pressure changes our approach to teamwork and process
  2. "Assemble the Components" (Story #5) - How building reusable program subroutines taught me that "assembly is easier and faster than creating from scratch"
  3. "The Truck" (Story #296) - The story of a boy who solved a stuck truck problem with a brilliantly simple solution: "Just let the air out of the tires"

 

[1] This post was created with AI assistance (Claude), drawing from the author’s documents, meeting transcripts, and lessons learned from the project described. The content was then reviewed, edited, and adapted by the author.


"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."

Friday, May 9, 2025

From Queries to Agency: The Evolution of the Ask the Professor Project

When I began working with OpenAI's ChatGPT-4 to build a custom chatbot, I expected to be impressed. What I didn’t expect was to be drawn into a journey—one that would challenge my understanding of AI’s role in structured projects, and ultimately, redefine it as a collaborative partner. Here is an account of that experience, written, of course, with the aid of my new AI collaborator.

 

The Ask the Professor project has been a journey not just through technology, but through the emergence of a new kind of partnership between human expertise and artificial intelligence.[2] What began as a simple idea—to create a digital extension of my teaching and mentorship—quickly transformed into an exploration of how AI can not only answer questions but also support structured projects, execute real tasks, and contribute meaningfully to complex workflows.

Key Takeaways and Project Context

This journey was motivated by two driving forces: the need to evolve my persona chatbot from its MVP state to a more robust (and lower-cost) platform, and the challenge posed by Reid Hoffman in Impromptu and Ethan Mollick in his blog posts to engage AI in a more collaborative conversation.[3] More than that, it became an application of my long-held principle: the insights and truths of an idea (or application) come through the conversation.

More Than a Query Engine

When we think about Large Language Models (LLMs), it's easy to imagine them as glorified search engines: you ask a question, they retrieve an answer, albeit in a narrative style. But the Ask the Professor project quickly moved beyond that role.

To build the Ask the Professor project, I established a two-window strategy: one window was dedicated to my conversations with GPT-4, where we iteratively designed solutions, refined approaches, and mapped out next steps. The second window hosted the evolving chatbot, where those ideas were tested in real time. It was like having a project assistant that not only strategized with me but executed the plan, line by line, as we progressed.

Early in the project, it became clear that this GPT needed to do more than simply give me checklists for progressing the project.

At first, we iterated through structuring the chatbot to reference meeting and classroom transcripts, anonymize student names for privacy, and pose Socratic questions to engage deeper thinking.

But that was just the beginning.

Two-Window Setup: Building and Testing

This dual-window strategy became the backbone of development. In one window, GPT-4 acted as a project assistant — a partner in planning, strategy, and problem-solving. We refined anonymization processes, mapped student names to pseudonyms, and structured the chatbot's logic. In the second window, I applied those steps directly to the evolving Ask the Professor chatbot, testing, refining, and iterating in real-time.

This setup allowed us to move fluidly between ideation and implementation, shortening the cycle from concept to testing.

A New Kind of Collaboration

The turning point came when we ran into the challenge of anonymizing a large set of transcripts. Simple word swaps were not enough; we needed a systematic approach to detect, replace, and verify that each student name was consistently anonymized across different documents—without losing the thread of conversation.

At this point, the Ask the Professor project evolved from being a responsive assistant to something more agentic. Through Python scripting, the AI was able to generate code, execute it, and provide me with transformed files—all while keeping track of name mappings and cross-referencing them with anonymized lists.

This was more than a simple query-response mechanism. This was an aide-de-camp—a trusted assistant that could not only answer but act on information. It processed my Word files, mapped out all the speaker labels, replaced them with anonymized names, and returned them to me in a ready-to-use format. It even produced a translation table to keep track of these changes—no small feat for a conversational interface.
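
To give a flavor of the kind of script it generated and ran, here is a minimal sketch of the speaker-label anonymization, assuming the python-docx library and a simple name-to-pseudonym mapping. The names and file paths are placeholders; the code GPT-4 actually produced was more elaborate.

```python
# A minimal sketch of the anonymization step, assuming python-docx is
# installed. The mapping, file names, and labels are placeholders; the code
# GPT-4 actually generated and executed was more elaborate than this.
import csv
from docx import Document

# Name-to-pseudonym mapping (the "translation table" mentioned above).
name_map = {"Alice Chen": "Student A", "Bob Rivera": "Student B"}

def anonymize_transcript(in_path: str, out_path: str) -> None:
    """Replace real speaker names with pseudonyms throughout a Word file."""
    doc = Document(in_path)
    for para in doc.paragraphs:
        for real, alias in name_map.items():
            if real in para.text:
                # Replace at the run level to preserve formatting where possible.
                for run in para.runs:
                    run.text = run.text.replace(real, alias)
    doc.save(out_path)

def save_translation_table(path: str) -> None:
    """Write the name-to-pseudonym mapping out for later cross-referencing."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["real_name", "pseudonym"])
        writer.writerows(name_map.items())

anonymize_transcript("week_03_transcript.docx", "week_03_transcript_anon.docx")
save_translation_table("translation_table.csv")
```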

Moving Toward Agentic AI

What we accomplished here hints at something much larger: the progression from reactive LLMs to agentic AI.[4] The difference? A reactive LLM waits for input and responds; an agentic AI proactively engages in problem-solving, can be tasked with a goal, and can execute multi-step operations independently, much like a research assistant would.

Moreover, GPT-4 did not simply assist; it guided decisions. It would suggest coding solutions, ask for approval, and execute on those ideas. This was not automation; this was orchestration. Together, we iterated—anonymizing names, transforming transcripts, and generating structured outputs—all within minutes. Tasks that once took days were now flowing seamlessly between ideation and execution, with GPT acting as both a collaborator and an executor.

A key difference from fully agentic AI was that, in this conversation, the AI would suggest a programming step to take (an agent task, if you will) but pause to ask what I thought and whether to proceed. I could then modify the approach, and we’d go through another iteration in the conversation. When it took the programming steps, with my approval, it generated the code, executed it, produced the modified files, and prompted me to download and review them. All of this took place on OpenAI’s servers, not my local computer.
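
The control flow is simple to sketch: propose a step, pause for approval or modification, then execute. The stub below is my own conceptual rendering of that loop, not OpenAI's implementation; the two helper functions stand in for the model suggesting code and running it server-side.

```python
# A conceptual sketch of the approve-then-execute loop described above.
# Not OpenAI's implementation: propose_next_step and execute_step are stand-ins
# for the model suggesting a programming step and running it on its server.

def propose_next_step(goal: str, history: list[str]) -> str:
    """Stand-in for the model suggesting the next programming step."""
    ...

def execute_step(step: str) -> str:
    """Stand-in for generating and running the code for an approved step."""
    ...

def collaborate(goal: str) -> list[str]:
    history: list[str] = []
    while True:
        step = propose_next_step(goal, history)
        answer = input(f"Proposed step: {step}\nProceed, modify, or stop? ")
        if answer.lower().startswith("stop"):
            break
        if answer.lower().startswith("modify"):
            step = input("Describe the modified step: ")
        history.append(execute_step(step))  # runs only after human approval
    return history
```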

In short, the Ask the Professor project did not just assist; it extended my capability. It searched, extracted, transformed, and delivered results—an entire workflow that would have taken days to perform manually, condensed into iterative cycles of refinement and automation.

Achilles' Heel: The Limits of Agentic AI

For all its advances, the Ask the Professor project was not without its challenges. In pushing the boundaries of what agentic AI could do, we also exposed its limitations—limitations that became apparent not just in its performance, but in its dependencies on human oversight.

While the Ask the Professor project highlighted impressive advances in how LLMs can execute structured tasks, it also exposed critical limitations along the way. However, many of these limitations were mitigated over time, again through careful iteration and structured refinement.

  • Dependency on Human Judgment: Each stage of transformation—anonymization, cross-referencing, and structured processing—still depended on my strategic intervention. GPT acted as a powerful tool, even suggesting ways forward, but not acting as an autonomous agent. It required my oversight to identify hallucinations, correct anonymization logic, and prompt more nuanced outputs.
  • Memory and Continuity Gaps: Despite repeated interactions, the model often failed to maintain name-role consistency or remember context across documents. Anonymization of student names was particularly challenging, but a solution was eventually developed through multiple iterations of name extraction, replacement logic, and systematic document updates. This iterative process not only resolved the inconsistencies but set the stage for a more robust anonymization workflow going forward.
  • Susceptibility to Hallucinations: During the early stages of the project, the assistant sometimes fabricated links or summarized content that did not exist in the files. This was corrected by refining the context instructions and limiting the model's scope strictly to the uploaded documents. As we honed its access to structured data, hallucinations dropped off significantly, demonstrating the importance of well-defined boundaries for LLM behavior.

These limitations suggest that while the Ask the Professor project pushed boundaries of GPT-driven task execution, it remains bounded by the need for human discernment and real-time correction.

Key Takeaways and Looking Ahead

  • LLMs are capable of more than reactive responses; with structured guidance, they can execute multi-step workflows.
  • However, their agentic qualities are still bounded by the need for oversight and strategic redirection.
  • True AI collaboration still relies heavily on human expertise to identify hallucinations, ensure continuity, and validate outcomes.

Looking forward, the Ask the Professor project will continue to evolve, with sharper integration of agentic capabilities, refined anonymization processes, and deeper continuity across conversation threads. The goal is not just a more capable assistant—but a true conversational partner that mirrors the rigor and reflection of real-world mentorship.

We are not just building a chatbot; we are architecting a digital aide-de-camp—an agent that learns, iterates, and collaborates in real time. The line between assistant and agent is beginning to blur, and the Ask the Professor project is at the frontier of that transformation. As we refine its agentic capabilities, I can’t help but wonder: if this is what’s possible now, what might the next iteration reveal?



[1] Note that this draft was co-produced by iterating with ChatGPT-4, based on two days of experience working on the Ask the Professor project.  In addition to edits, I have added some footnotes.  To review a log of the conversation with GPT-4 or the Python code it produced, send me a note.

[2] The Ask the Professor project is a work in progress that hopefully will be released on my Blog soon so all readers can test it.  Meanwhile, you can get a flavor from the earlier MVP edition, here: https://eghapp.blogspot.com/2024/10/the-happgpt-professor-chatbot-test.html

[3] Reid Hoffman, “Impromptu: Amplifying Our Humanity Through AI,” Kindle Edition, 2023, https://www.amazon.com/Impromptu-Amplifying-Our-Humanity-Through-ebook/dp/B0BYG9V1RN/ 

and Ethan Mollick, “Co-Intelligence: Living and Working with AI,” Kindle Edition, 2024, https://www.amazon.com/Co-Intelligence-Living-Working-Ethan-Mollick-ebook/dp/B0CM8TRWK3/

Also see Ethan Mollick,  “On Jagged AGI: o3, Gemini 2.5, and everything after,” Apr 20, 2025, https://www.oneusefulthing.org/p/on-jagged-agi-o3-gemini-25-and-everything  .  Watch the video!

[4] For a simple overview of Agentic AI, see IBM’s recent blog post by Cole Stryker, “What is agentic AI?”, https://www.ibm.com/think/topics/agentic-ai .  For a more in-depth discussion see Edwin Lisowski, “AI Agents vs Agentic AI: What’s the Difference and Why Does It Matter?” Medium, Dec 18, 2024,  https://medium.com/@elisowski/ai-agents-vs-agentic-ai-whats-the-difference-and-why-does-it-matter-03159ee8c2b4


Wednesday, November 20, 2024

DR Tech Group Beta Test

The "Ask the DR Tech" chatbot is based on a collection of disaster response technology leaders work. To use the chatbot, click on the Plus sign in the lower right of your screen. Here are some starter questions to ask: 
  1. What are the top 5 questions a responder should ask before responding to a disaster?
  2. What are some ways to restore communications in post-disaster areas? 
  3. What is a go-kit? 
  4. What are some considerations with satellite communications post-disaster? 
If you use this feature, please comment on the results.  What's working well and not so well?  Thanks for your feedback.


"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."

Friday, October 18, 2024

The Personas Project

The Personas Project grew out of a brief vision paper I wrote in November 2023.  I was thinking about how I might use AI as an alternative way to share things I have learned over the years. I had been working on various books and outlines.  I had outlined eight different book projects and amassed a trove of supporting documents, lectures, and presentations I had written. And I thought, I'm going to need another lifetime to complete eight books. But I have all of this content and material that I wanted to include in the books. Well, what if I was able to put that content into a personal large language model, and then have a ChatGPT-like front end to it, and allow people to pose questions? I could also include a list of recommended questions to get started, but then people could pose their own questions and have a conversation with my books, so to speak.


I started thinking about the two classes I taught during the pandemic, when everything was recorded on Zoom. I took all of those recordings each year, fed them into Otter.AI, and generated transcripts, as I've been doing for my weekly student meetings since then. I fed the transcripts into a book draft, and the rough draft for each class ran over 450 pages, generated basically from the lectures, the guest speakers, and the student discussions during the class. I had to go through and anonymize everything, of course, so that I'm protecting everybody's name. So I thought, wouldn't it be interesting to query those conversations? 

And what about all of the work from when I was a CIO? What about those presentations and documents on IT strategy, NetHope, and things like that? And then further back to prior jobs. I have 45 years of experience in IT, and I'm a packrat; I saved everything, and I've got hundreds of documents. So I thought, well, why not have something where you can feed in all of that information and then chat with it?

And it actually solves two important problems that we have with some of the language models now. One is it reduces hallucinations, because it's just your information that's in there as the basis. It doesn't eliminate hallucinations. And you can see that in the transcripts that I can share on the testing of this.  And it also solves the copyright problem because they're my documents, my information. And I thought, well, isn't this really a special case of what corporations want to do? They want to take their documents and allow internal and external conversations to flourish.

For example, take the help desk and the knowledge base that's produced from all of the help desk calls. Or in my former student's case, he's looking at all of the documentation that's available to patients and clients in a healthcare system. Can you put that into a model and have a chatbot to talk with that? 

The former student sent me a note catching up at one point. I think it was a best wishes for Halloween note, if I remember right. He asked, was there anything I needed some help on? And I said, well, listen, I've been toying with this idea. And I sent him a copy of the vision paper and said, what do you think? Is this something interesting? And he said, yeah, he could see some value in this, even personally, because his wife's working on a PhD and she's gathering research in supply chain management. And wouldn't it be useful to help with summarizing research and providing query opportunities into that? 

So I said to him, well, how about if we could do a minimum viable product to show for this?  So he developed an MVP to show how this could work. We then created a team of student volunteers from my Data4Good group, demoed it to them, and began a "what if" discussion.  And the timing was great because my former student, in his investigation and discovery, found some products, like AnythingLLM and Ollama, that already provided the pieces for what we wanted to do and that we would just need to assemble and integrate, all without any coding required!

He also came up with the concept of Personas.  My concern was that some of my files would be appropriate for one audience, personal files for another, and classroom files for another.  So he suggested creating three datasets and “HappGPT” chatbots for my different audiences, which I’ve since called “Ask the CIO,” “Ask the Professor,” and “Ask Grandpa.”  

For two of these datasets, we’re releasing a test version on this Blog:

  1. “Ask the Professor” is HappGPT-Professor, here.
  2. “Ask the CIO” is HappGPT-Professional, here.
  3. “Ask the DR Tech” group beta is here.

The test sites are limited to 100 queries per day, so don’t get carried away 🙂.  Please leave some comments on what works well and not so well.  I appreciate your feedback.  Let the conversation begin!



"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."

Friday, October 11, 2024

HappGPT Professional Chatbot Test

The "Ask the CIO" chatbot is called HappGPT-Professional. To use the chatbot, click on the Plus sign in the lower right of your screen. Here are some starter questions to ask: 
  1. What are the top 10 things a project manager needs to know? 
  2. How should a CIO manage the IT project portfolio? 
  3. Write a blog post about the value of conversations in IT leadership. 
If you use this feature, please comment on the results.  What's working well and not so well?  Thanks for your feedback.


"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."

The HappGPT Professor Chatbot Test

The "Ask the Professor" chatbot is called HappGPT-Professor. To use the chatbot, click on the Plus sign in the lower right of your screen. Here are some starter questions to ask: 
  1. What are the main objectives for crisis informatics?
  2. What are the top five take-aways from the course on IT Leadership & Management?
  3. Generate a syllabus for a new IT management course about conversations as a way of knowing.
If you use this feature, please comment on the results.  What's working well and not so well?  Thanks for your feedback.


"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."

Wednesday, October 9, 2024

ABC Resilience

As we watch with growing concern the hurricane season unfold in Florida and the Southeast US, we often ask, isn't there anything we can do to prevent all this destruction and loss of life? The warning system at the National Hurricane Center continues to improve. And the response programs at FEMA and the Red Cross are also growing to meet the challenge. But what can we, in our local communities, do to be more resilient in the face of such disasters? How can we bounce back?  That's the core meaning of resiliency.

I asked the new AI Chatbot my students are helping develop, based on the information provided in my Crisis Informatics course. Following some iteration through multiple drafts, here's what HappGPT had to say[1]:

Hurricane Ready: A, B, and C Plans for Navigating Uncertainty

Hurricanes pose a significant threat to coastal communities, bringing devastating winds, torrential rains, and storm surges that can wreak havoc. While we can’t control the weather, we can prepare for it. A critical component of community resilience is ABC planning, which outlines a three-tiered approach for responding to hurricane events:

A-Plan: The Ideal Scenario

The A-Plan outlines the ideal response to a hurricane threat, assuming ample time for preparation and readily available resources. This might involve:

Early Warning System: A robust system for receiving and disseminating hurricane warnings, including local broadcasts, community alerts, and social media communication. 
Evacuation Plan: A clear and well-rehearsed evacuation plan that identifies safe routes, designated shelters, and transportation options. 
Resource Mobilization: Preparation and coordination of resources, including emergency supplies, first responders, and community volunteers.

B-Plan: Adapting to Challenges

The B-Plan is activated when the A-Plan becomes impractical due to changing circumstances, limited time, or resource constraints. This might involve:

Adaptive Evacuation: Adapting the evacuation plan to accommodate the changing situation, perhaps with shorter evacuation times or different routes. 
Resource Prioritization: Focusing on essential resources and allocating them based on immediate needs and urgency. 
Alternative Sheltering: Identifying additional shelter options for residents who are unable to evacuate or have limited access to official shelters.

C-Plan: Navigating Chaos

The C-Plan is not just a backup plan; it’s a framework for navigating decision-making in the face of uncertainty and chaos, after the A-Plan and B-Plan have failed. It’s about establishing a process for making critical decisions in a fluid and unpredictable environment. This might involve:

Decision-Making Processes: Pre-determining a decision-making hierarchy or structure for critical situations, including roles and responsibilities. 
Information Gathering: Identifying key information sources and methods for gathering critical data in the aftermath of a hurricane. 
Community Collaboration: Fostering a culture of collaborative decision-making and leveraging the collective knowledge and resources of community members.
Adaptive Planning: Constantly reassessing the situation, adjusting plans based on new information, and remaining agile in the face of evolving challenges.

Personal Go-Kit: Your Individual Resilience

While community-level planning is essential, individual preparedness is equally crucial. Building a personal “go-kit” can significantly enhance your ability to survive and thrive during a hurricane:

Back-up Communications: Include a weather radio, a hand-crank radio, and extra batteries. Consider a satellite phone or a two-way radio for communication when cell service is down. 
Escape Plan: Develop a plan for your family, including designated meeting points and alternate escape routes in case your primary route is blocked. 
Food and Water: Enough non-perishable food and water to last for at least three days. Consider a water filtration system for additional safety. 
Emergency Supplies: Include a first-aid kit, essential medications, flashlights, duct tape, a multi-tool, a waterproof bag, and cash.

The Power of ABC Planning and Individual Action

By developing comprehensive A, B, and C plans, communities can strengthen their resilience, minimize damage, and ensure a quicker and more effective recovery process. Additionally, by building a personal go-kit, individuals can empower themselves to face a hurricane with greater confidence and preparedness.

ABC planning, combined with individual preparedness, fosters a culture of readiness and empowers communities to weather the storm and emerge stronger than ever before.
______________________________

[1] A beta test version of HappGPT has been posted to this Blog, here: https://eghapp.blogspot.com/2024/10/happgpt-chatbot-test.html .  Please pose your follow-up questions there and leave comments on what's working well and not so well.

"The postings on this site are my own and don't necessarily represent positions, strategies or opinions of any of the organizations with which I am associated."