CASE STUDY
Design Strategy for a Federal Product
Timeline: 11 months
Problem Statement:
A federal product that serves millions of users needed to be reimagined to meet today's needs. Maintenance and manual-intervention costs had ballooned and needed to be reduced. In addition, user sentiment had declined because of the system's complexity and the level of effort required to maintain it.
Artifacts Delivered:
Plain-language, end-to-end product documentation
User Journey Mapping
Service Blueprint
Data field analysis & normalization for code writing
Eco-System Maps
Outcomes:
Streamlined, re-designed business processes and flows
Cost savings for internal business and end-users
Long-term product definition
Eliminated silos - cross-product collaboration enabled
Team Structure:
Large program, on a team of about 20 working cross-collaboratively within an expansive product eco-system
Team of 6 designers
2 Product Owners
30+ internal stakeholders
Challenges:
Limited existing documentation
Designing the path forward while keeping the lights on
Limited resources and time
Heavy policy and regulations to account for
Large user base with existing processes
Kickoff
Joining this project, I had never worked in the federal space before. It was both intimidating and exciting to know I had the opportunity to shape a product that affects millions of people a year and impacts their quality of life. With that in mind, I knew I needed to understand exactly what the product did, how users interacted with it, and how it sat within a larger federal eco-system of products.
As I talked with stakeholders and teammates who had joined before me, I quickly realized there was a lot of knowing and little understanding. What I mean by that is almost everyone I spoke to could rattle off a list of elements within or related to the product, but when it came to connecting the dots, the stories were either very different or folks didn't know how or why things were set up the way they were. This realization led me down the path of finding clarity around the product as it existed, so that as we imagined the future, both upstream and downstream dependencies were accounted for.
END TO END FLOW
To orient myself, I began by mapping an end-to-end customer journey, from registering to use the product all the way through to the user's goal being accomplished. This exercise helped separate out a few things. First and foremost, it helped us clearly define what was in our purview and where we would need to cross-collaborate if there was a major blocker. Second, it helped us get out of analysis paralysis with such a large problem to solve. With our segment of the user journey defined, we could then zoom in even closer on where the largest known pain points were.
For each segment I defined:
Who is involved (internal and external users)?
What are these users doing?
Why are they doing it - what is their end goal?
How are they doing it - what tools and processes are involved?
Differentiating between
Complicated vs. Complex
This product had A LOT going on. Some things, like the end-user type and goal, were stable, while other elements, like federal policy and the surrounding eco-system, could change fairly frequently.
This product was both complex AND complicated. A huge part of my work as I continued discovery was categorizing elements into one of these two buckets. As I socialized what I came to call "The Clarity Kit", I emphasized what we knew to be true and stable - complicated, but 100% possible for someone to learn and retain.
When it came to the complex elements, I documented them as well, but made sure these topics were discussed at a higher level in my documentation and referred readers to where the source of truth lived (there were entire departments that managed these items).
COMPLICATED
A lot to learn, but elements are static and will not change.
COMPLEX
A lot to learn, and elements change often.
Defining the Vision
With a better understanding of our user journey, I talked with our product owners, users, and stakeholders about what they would like to see change in the product. These conversations were generally workshops that walked through each segment of the user journey as it applied to them. It became clear fairly quickly that there was an overarching theme regardless of who you spoke with: the system was over-engineered and expensive to maintain because of its complexity. With this general theme identified, we could workshop through the specifics of what should happen in the future.
Workshop Theme Identification
What do we have the time and resources for now?
What addresses our most urgent pain points?
What do we need from technology, and do we need to source something new?
Zooming in a Level Deeper
Identifying areas of ambiguity
With our themes and journeys identified, I felt it was important to create an eco-system map and service blueprint to visualize the relationships between the people, processes, and technologies that affect both our front- and back-end users. This exercise surfaced a number of blind spots, including:
States and Statuses were wildly inconsistent across user groups
Different user groups used the product very differently
User configuration had a significant amount of tech debt
Our current architecture did not support the needs of the future state
Our team was very much still in a place of knowing and not understanding
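The first blind spot - wildly inconsistent states and statuses across user groups - is the kind of problem that is easiest to see once the labels are laid side by side. A minimal sketch of the idea (all status names here are hypothetical, not the product's actual values):

```python
# Hypothetical illustration: the same underlying state was labeled
# differently by different user groups. Folding every group-specific
# label into one canonical vocabulary makes the inconsistency visible
# and fixable in a single place.
CANONICAL_STATUS = {
    "submitted": "received",
    "sent": "received",
    "in review": "processing",
    "pending": "processing",
    "closed": "complete",
    "done": "complete",
}

def canonical(status: str) -> str:
    """Map a group-specific status label to the shared vocabulary."""
    return CANONICAL_STATUS.get(status.strip().lower(), "unknown")
```

A table like this doubles as documentation: anything that maps to "unknown" is a status no one has accounted for yet.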
Aligning the Team
Plain Language Architecture Workshop
As I worked through the service blueprint and talked with developers about our initial proposed solution for the architecture, it became clear very quickly that, to put it bluntly... no one wanted to look dumb in a meeting. The product was complex, and it was hard for folks to get their bearings while simultaneously trying to keep the lights on, understand the product as it existed, and understand the future state.
So to get everyone aligned, I ran a workshop with our product leads to clarify what all of the architectural pieces meant. In effect, rather than just saying we are moving to PostgreSQL and REST and using a command processor and generator (insert more technical jargon here)... we said 'we need to define the steps a user takes to reach their goal, and those steps are commands.' Obviously this retelling is an oversimplification of what we were doing, but the conversation was a key moment in the project where a major unlock happened. It was no longer just our Solution Architect socializing the solution; it was our entire team. This clear communication sped up development time because people understood the work to be done, and our senior stakeholders trusted the process because they could now understand the value we were delivering.
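The plain-language framing - "the steps a user takes toward their goal are commands" - maps naturally onto a command-processor structure. A minimal Python sketch of that idea, assuming nothing about the product's real implementation (all names and the example command are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A "command" is simply a named step a user takes toward their goal,
# carried as data so it can be validated, logged, and replayed.
@dataclass
class Command:
    name: str
    payload: dict

class CommandProcessor:
    """Routes each command to the handler registered for its name."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self._handlers[name] = handler

    def process(self, command: Command) -> dict:
        if command.name not in self._handlers:
            raise ValueError(f"No handler for command: {command.name}")
        return self._handlers[command.name](command.payload)

# Hypothetical example: registering to use the product is one step
# (one command) on the user's path to their goal.
processor = CommandProcessor()
processor.register(
    "register_user",
    lambda p: {"status": "registered", "user": p["email"]},
)
result = processor.process(Command("register_user", {"email": "user@example.gov"}))
```

The value of the framing is that the list of registered command names reads like the user journey itself, which is exactly what made it teachable to non-engineers.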
Defining the Elements and Vocabulary
The second major unlock was getting really crisp about the vocabulary used to identify elements related to the product. After more than one meeting of spiraling over semantics - talking about the exact same thing but using different words - I set up a number of workshops to get team alignment on what we would call different things. An example is Response vs. Resolution vs. Result. All three were being used interchangeably, and it was causing major confusion. Ultimately we determined we only needed Response and Resolution; there were different subsets of each, but this one designation saved a lot of headaches moving forward. I continued this exercise anywhere the problem recurred until we had complete alignment. Additionally, because this product sat between other federal products, the newly clarified vocabulary was carried over to multiple other products, creating a ripple effect that began a movement towards clarity.
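One way this kind of vocabulary decision can outlive the workshop is to encode it directly in the codebase, so a Response can never silently be treated as a Resolution. A small sketch under that assumption (the member values are illustrative, not the product's actual subsets):

```python
from enum import Enum

# The agreed vocabulary encoded as distinct types, so the
# Response / Resolution distinction is enforced by the code
# rather than remembered by the team. Values are hypothetical.
class Response(Enum):
    ACKNOWLEDGED = "acknowledged"
    NEEDS_INFO = "needs_info"

class Resolution(Enum):
    APPROVED = "approved"
    DENIED = "denied"

def describe(outcome) -> str:
    """Make the distinction explicit wherever an outcome is reported."""
    kind = "response" if isinstance(outcome, Response) else "resolution"
    return f"{kind}: {outcome.value}"
```

With the terms as separate types, mixing them up becomes a visible error at the point of use instead of a semantic argument in a meeting.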
Enabling the Solution through Data
Defining the data fields available and needed
With so much defined and understood at this point in the design process, I was able to really dive into the details of implementation. We knew what data we had and what data we were missing, so my first step was to find out whether the data we needed was actually available and we simply weren't pulling it in. The next step was normalizing the data we had so it could be ingested by our new architecture; the way it was organized in the existing architecture required a significant amount of manual labor. To normalize the data, I took the business logic and defined a set of equations, each of which resolved to a specific outcome. After defining the equations, I was able to map them to thousands of lines of data that could then easily be converted to code. Moving forward this will save a significant amount of time: if a business rule changes, there can now be one update rather than piecemeal updates.
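The "equations that resolve to an outcome" approach amounts to expressing business logic as data rather than scattered code. A minimal sketch of that pattern (field names, conditions, and outcomes are all hypothetical stand-ins for the real business rules):

```python
# Business logic expressed as data: each rule is a set of field
# conditions plus an outcome. Changing a rule means editing one
# row in this table instead of hunting through the codebase.
# (All field names and outcomes are hypothetical.)
RULES = [
    ({"status": "active", "docs_complete": True}, "auto_approve"),
    ({"status": "active", "docs_complete": False}, "request_documents"),
    ({"status": "inactive"}, "manual_review"),
]

def resolve(record: dict) -> str:
    """Return the outcome for the first rule whose conditions all match."""
    for conditions, outcome in RULES:
        if all(record.get(field) == value for field, value in conditions.items()):
            return outcome
    return "manual_review"  # safe default when no rule matches
```

Because the rules live in one table, they can be reviewed with business stakeholders line by line, which is what makes the single-update maintenance story possible.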
Gathering User Feedback
Understanding ALL users needs
It’s important to call out that throughout this process we were in constant communication with the agencies that use this product daily, and I set up a few avenues for gathering feedback. Something to consider when gathering feedback in the federal space is the Paperwork Reduction Act, which limits the mechanisms you can employ. Below are a few options that we landed on and that were very successful.
Recurring calls with large volume users
Presentation sessions that were open to any user; at the end of each presentation, users could ask questions or give feedback
Email
With feedback coming in from a few different channels, I gathered everything into a digital whiteboard tool and sorted it into common themes. With all of the feedback sorted, I was able to guide the team through a prioritization workshop covering what needed to change, where clarification was needed, and what we should consider for post-MVP.
Exploring Opportunity for AI/ML to Optimize
AI is intelligent, not psychic, and should be designed to perform accordingly
Parallel to the work described above, a separate team came to me looking for support on how to think about AI/ML in supporting the product I was directly working on, as well as the surrounding products. This was incredibly interesting to me because I was already very familiar with the surrounding eco-system and where there was room for improvement. For context, this was not my first time designing for AI, so I had a good idea of where to start. Ultimately, AI is there to enhance your solution, not solve everything. Successful implementation of AI/ML starts with clear documentation of the desired behavior and the problem to be solved.
Knowing that AI is effectively another tool in our “toolbox” I ran a similar workshop to what I would with any new product or initiative. The activities are as follows:
Align on the current understanding of the problem and the high-level desired outcome
Group activity - all participants add stickies with their ideas for:
what success looks like
what the anticipated blockers are to reaching success
how we will measure success
what the key business drivers are
With everyone's thoughts on the board, I was able to define common major and minor themes. The next activity was to take each of these major and minor themes and define:
What we know
What we understand
How we will measure success
By the end of the workshop we had a clear direction on next steps and team alignment on the desired goal. Running these workshops is one of my favorite things to do because it gives everyone involved a voice and gets to clear next steps quickly.
Zooming Out
Finding similarities & discrepancies to repeat the process
With implementation well underway, there is now appetite to repeat this process with more products within the eco-system. The Clarity Kit became a team and program favorite, as it described every element of the product clearly, in plain language. With this documentation in place, future work can be approached from a place of understanding, accounting for upstream and downstream dependencies. This knowledge helps reduce risk and potential rework, which is always a win.
Tools used:
Mural
Confluence
Tableau
Adobe XD