A 30-year-old decision tool showed up in last week’s team meeting, and it’s more relevant now than ever
Last Monday morning, one of our
graduate team members at the Data4Good Center was walking us through the
hosting options for two AI projects we’re building. He had done his homework.
He’d evaluated four different cloud platforms, dug into their free tiers,
compared their GPU access, tested their startup times, and mapped out the cost
implications of each. He was articulate and thorough.
And he was stuck.
The problem wasn’t a lack of
information. It was too much information pulling in too many directions. One
platform had generous free resources that were nearly impossible to actually
get allocated. Another had faster provisioning and better GPU integration but a stingier
free tier. A third was simple and cheap but possibly not powerful enough. A
fourth had the right capabilities but startup times that would drive users
away.
Cost. Performance. Speed.
Complexity. User experience. Every option was strong on some factors and weak
on others. And we’re a volunteer team of students and graduates with a budget
measured in double digits per month.
I listened to all of this and
thought: I wrote an article about exactly this problem. In 1996.
A Bulletin from Another Century
In the mid-1990s, my brother
and I ran a consulting company called HPMD, and we produced a series of short
management articles for our clients called HPMD Bullets. Number six was titled
“Values Clarification,” and the premise was simple: when you have a complex
decision with competing factors, don’t wing it. Build a matrix.
The technique was borrowed from
the social sciences [1]. You list your
decision factors as rows and your options as columns. You weight each factor by
importance. Then you rate each option against each factor, multiply, and add up
the columns. The option with the highest weighted score isn’t necessarily the
answer, but it clarifies the decision in a way that gut instinct alone cannot.
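The arithmetic is simple enough to fit in a few lines. Here is a minimal Python sketch of the technique; the factor names, weights, and ratings below are placeholders for illustration, not a real evaluation.

```python
# A minimal sketch of the weighted decision matrix described above.
# Factors are weighted by importance; each option is rated against
# each factor; weighted scores are summed per option.

def score_options(weights, ratings):
    """Return each option's weighted score: sum of weight * rating."""
    return {
        option: sum(weights[f] * r for f, r in factor_ratings.items())
        for option, factor_ratings in ratings.items()
    }

# Decision factors, weighted by importance (the rows of the matrix).
weights = {"cost": 5, "speed": 3, "simplicity": 2}

# Each option rated 1-10 against each factor (the columns).
ratings = {
    "Option A": {"cost": 8, "speed": 4, "simplicity": 9},
    "Option B": {"cost": 5, "speed": 9, "simplicity": 6},
}

scores = score_options(weights, ratings)
# Option A: 5*8 + 3*4 + 2*9 = 70; Option B: 5*5 + 3*9 + 2*6 = 64
best = max(scores, key=scores.get)
```

A spreadsheet does the same job, of course; the point is that the whole method is one multiplication and one sum per column.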
The original article walked
through three examples: choosing between job offers, hiring candidates for a
product manager role, and selecting a vendor. Different decisions, same
technique. That was the point.
Back to Monday’s Meeting
So, I suggested to our team
member that he build a decision matrix. List the factors that matter—cost, GPU
availability, startup time, system complexity, ease of integration—as rows.
List the platform options as columns. Weight the factors. Rate each platform.
Do the math.
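To make that suggestion concrete, here is what the platform matrix might look like once the factors and ratings are filled in. Every weight and rating below is invented for illustration; the real numbers have to come from the team's own judgment, which is the whole point of the exercise.

```python
# A hypothetical version of the platform matrix from Monday's meeting.
# The weights and the 1-10 ratings are invented, not real evaluations.

factors = ["cost", "GPU availability", "startup time",
           "system complexity", "ease of integration"]
weights = [5, 4, 3, 2, 3]  # importance of each factor, in order

# One rating per factor for each platform, in the order above.
platforms = {
    "Platform A": [9, 3, 5, 4, 6],
    "Platform B": [4, 8, 8, 5, 7],
    "Platform C": [8, 4, 7, 9, 5],
    "Platform D": [5, 9, 2, 6, 8],
}

# Weighted total per platform, highest first.
ranked = sorted(
    ((sum(w * r for w, r in zip(weights, ratings)), name)
     for name, ratings in platforms.items()),
    reverse=True,
)
for total, name in ranked:
    print(f"{name}: {total}")
```

The top score isn't the automatic winner, but a ranked list like this turns "which platform feels right" into "do we agree with these weights and ratings," which is a much better argument to have.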
The conversation that followed
was more interesting than the matrix itself. Romanus, who co-leads our team,
said that when he presented technology decisions to senior management, arriving
with a weighted matrix was almost more valuable than arriving at the right
answer, because the matrix showed how you worked through the decision. It gave
leadership a way to follow your reasoning and challenge your assumptions. “We
should use Platform X because it’s better” is not a conversation. It’s an
assertion.
He also made a point about
failure: when a decision turns out to be wrong, the matrix lets you go back and
ask what happened. Was the premise flawed? Did you weight something
incorrectly? Miss a factor entirely? You can’t do that retrospectively without
documentation. As Romanus put it, it’s just good management hygiene.
What I’d Add Thirty Years Later
First, make it a team exercise.
When a team argues about whether GPU access should be weighted higher than
system simplicity, that argument is itself the point. The matrix gives
structure to a discussion that otherwise devolves into everyone advocating for
their preferred option.
Second, use it for project
triage. We’re preparing for a presentation to the NetHope Impact Data Working
Group in May, and that deadline means we need to decide what’s in the demo and
what gets pushed to phase two. List the candidate features as rows. Weight them
by impact, feasibility, and readiness. The features with the highest scores are
your phase one. In crisis response, we call this triage.
Third, and this is the 2026
addition, use AI to help build the matrix. Describe your decision to Claude or
ChatGPT. Ask it to suggest factors you might be missing. Ask it to challenge
your weightings. The AI won’t make the decision for you, but it can help you
think more completely about the problem. This is another instance of what I
call the conversational approach to AI: not asking for an answer but using the
back-and-forth to sharpen your own thinking.
A Skill Worth Practicing
I told the team something I
believe strongly: every technology person should have practice with this
technique. Whether you’re choosing cloud platforms, evaluating AI tools,
deciding which features to build first, or sorting out your own career options,
the discipline of listing your factors, weighting them honestly, and rating
your options is one of the most practical management skills I know.
It requires some brainstorming,
honest assessment, a spreadsheet, and the willingness to let the math challenge
your assumptions. That’s it.
The original 1996 Values
Clarification article and a blank Excel template are available for the asking.
If you’d like a copy, send me a note at ehapp@data4good.center.
Have you used a decision matrix
in your own work? What factors do you find hardest to weight? I’d love to hear
from you in the comments.
[1]
The values clarification method was developed by Louis Raths, Merrill Harmin,
and Sidney Simon, and published in their 1966 book Values and Teaching: Working with Values in the
Classroom. I first encountered it during a student internship at Planned
Parenthood in the 1970s, where it was used to help clients work through
personal decisions. The weighted matrix I describe here is my adaptation of
their framework for technology and management decisions.
Full disclosure: I
used Claude to help draft this post, drawing from a D4G team meeting
transcript, the original 1996 Values Clarification bulletin, and my own notes.
I provided the outline and edited the final copy you are reading. Another
collaborative use of AI.
