Product teams, do you know your product?

Findings on how tech teams manage product knowledge and why none of it works. About browsing Jira tickets, Figma boards, app code, and Confluence.


A thing that still amazes me about working in tech is the speed at which everything keeps evolving. And that is definitely also true for how we build applications as product teams.

In the different software projects I have worked on over the past 15 years - in startups and bigger companies - we have used plenty of up-and-coming methodologies, tried new tools, and seen workflows and collaboration evolve.

At the same time, I'd go as far as to say that the main steps of how we build things have stayed basically the same. It feels like a totally different game today, but if we look at what we essentially do - that's still what we do.

One way to put what we do is the following:

We describe what we want it to do.
We build it so it does that, and
We verify what it actually does.

While this sounds a bit silly, it's of course plausible that these are separate things. If you build a house, someone plans it, other people build it, and in the end, it's advisable that someone also checks what was done.

But unlike a house, our digital products keep on changing over time... a lot. In fact, what we mostly do is change things instead of building something from scratch. So

We describe how we want to change what's already there.
We change what we built so it will do something else now, and
We verify what it actually does.

Looking at it from this angle, it becomes obvious how essential it is in every step of the process to really understand the current state and behavior of our applications.

And this affects everyone on the product team: product managers, product owners, developers, QA, and designers. And even many roles beyond the product team in a product-focused company, like product marketers, customer support, or executives.

Product knowledge is important across the organization.

So how well do we do that? How accessible are these product insights? How much effort is involved with understanding features and verifying functionality?

I talked to dozens of teams of different sizes and from different industries to learn more about it. The findings eventually led me on my current journey with appviewer.io to build a better alternative.

But let's look more closely at the main pillars of how teams do this today.

Browsing tickets (& emojis)

[Image: ticket system]

Projects live in Jira, so product teams work in Jira. People tend not to like it... well, they hate it... a lot. Some are grasping at straws just to see it improve. But they are mostly stuck with Jira.

Product managers create new tickets for the features that seem most likely to move the needle for user and business metrics. They fill the backlog. They group tickets into sprints. They assign them to people. Tickets move to 'Done'.
This is one way to look at it: the workflow perspective.

Another way to look at it is that product managers write little snippets of requirements for new features or changes. Everything being built is described in a ticket.
That's the requirements perspective.

Great. We're looking for places to figure out what the application is doing. Let's look at the place that spells out what it's supposed to be doing in the first place. Everything being built is described in a ticket.

Of course, a ticket system is a great way of navigating the current set of requirements only in theory. It's the feature set - specified across hundreds or thousands of text snippets.

For starters, it's often incomplete. It just seems way easier sometimes to shout things into the next room or discuss last-minute changes via Zoom or Slack.

Leaving that aside, reading a ticket and working out the consensus it landed on can involve skimming over 30 comments (and emoji reactions 👍) as well. And you usually have to make sense of multiple past tickets, because it's not like one ticket replaces another 1:1. Part of the old one is likely to still be in effect.

[Image: Jira ticket comments]

And how do you even find these tickets? Sure, there is full-text search, and you can label tickets. But ehm...
I have talked to companies that do look up tickets like that, trying to connect the dots and make sense of what's been built. Results may vary.
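For illustration, here is roughly what that kind of lookup can look like against Jira Cloud's REST search endpoint. The project key, label, and site are made up; treat it as a sketch, not a recipe:

```typescript
// Node 18+ (built-in fetch), run as an ES module for top-level await.
// Search Jira for done tickets touching a (hypothetical) "checkout" feature.
const site = "https://your-company.atlassian.net"; // hypothetical site
const jql = encodeURIComponent(
  'project = APP AND labels = "checkout" AND status = Done ORDER BY updated DESC'
);

const res = await fetch(`${site}/rest/api/2/search?jql=${jql}&fields=summary,updated`, {
  headers: {
    // Jira Cloud uses basic auth with account email + API token
    Authorization: "Basic " + Buffer.from("me@example.com:API_TOKEN").toString("base64"),
    Accept: "application/json",
  },
});

const { issues } = await res.json();
for (const issue of issues) {
  console.log(issue.key, issue.fields.summary);
}
```

Even with a well-crafted query, what comes back is a pile of text snippets that still has to be reconciled by hand.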

It's not hard to agree that these task management systems are primarily built for the workflow perspective.

Who else knows what's going on? Sure, the developers.

If you want to dig into the current functionality, you care about details, the edge cases, special scenarios. The mental model of a software engineer works on this level of detail.

And they are usually involved in the specification of new requirements anyway. Is it feasible? Are there limitations? But mostly, what they need to verify is: does this fit into what's already there? Is there anything we forgot to consider?

Developers seem to remember these things.
Most of the time, they don't remember any of it. But they can dig into the actual code. That's mostly what it is.

Code, a.k.a. working with text files like it's 1964

Sometimes I find it surprising that we still code programs - in the sense that people fill text files with words that mean something to a certain type of machine, and we feed those files to one or many of those machines.
It's not quite like walking in the shoes of Ada Lovelace. But it feels like we got stuck at what came right after punch cards.

Then again, there are really good reasons why that's still what we do. Sure, the whole notion of no-code and low-code tools is challenging that. But the key features of text-based systems are hard to beat.

It's very flexible. You can often pick from various toolchains, which have of course evolved a lot. It's very extensible when it comes to the code itself or custom tooling around it. Tools can also work very universally, independent of language - like git for versioning and diffing. Most importantly, it inherently supports the most valuable tool on an engineer's belt: copy & paste.

[Image: VS Code editor]

Having said this, code may be great for building applications. It might not be so great for understanding them.

To grasp the story of a book, you have to read the whole thing. Now say you want to understand the character Amy. She might have been introduced on pages 86-88, 92, and 112-116 with several appearances after that until she leaves the plot on page 725. Not everything in there is relevant, so you filter and condense it in your head.
How was her relationship with her brother? Only some of the previous snippets might be relevant here. Plus you might need to read up on the brother's perspective as well to answer that question.

Reading code is kind of like that.

It's also nothing like that.
It has abstractions. Multiple plot lines. It uses chapters from other books. The protagonists may vary. And the dialogs are incomplete.
And these are big books.

You get a good glossary.
You get good cross-references.
You work with it every day.
But it can be a hassle.

Someone else wrote it? Yikes.
You wrote it yourself? Yikes.

And to the non-developers on the team, it's an inaccessible black box. Sure, you could print it out. Or grant access.
But digging into it would be a questionable allocation of their time.

Even if they took coding courses, as I have argued before, I doubt this would actually enable them to know substantially more about their application, or improve their work with developers for that matter.

But as it stands, the codebase is a primary source for teams checking the current functionality. With developers being the de facto gatekeepers.

Design boards _v2_final

Everything mentioned so far was about functionality. Teams also need to know what their app looks like in various scenarios.
That's what design boards are for, created in something like Figma (the anti-Jira), sometimes with a dedicated handoff to developers through tools like Zeplin. No one wants to go back to the times when design specifications came in the form of jpeg files. We got past this. Everything being built is specified on a design board.

Great. We're looking for places to figure out what the application looks like. Let's look at the place that spells out what it's supposed to look like in the first place. Everything being built is specified on a design board.

[Image: Figma design board]

Of course, design boards are a great way of navigating the current set of visuals only in theory. There are simply a lot of them. It adds up. They span different scenarios, various screen sizes, and even multiple revisions of the same thing (it's v2_revised_final.pdf all over again). It's hard to keep it all organized.

Nevertheless, it is visual. And visual is always great. Teams even draw user flows. Which is great, since we aim to work in a user-centered way.

And indeed, I talked to teams that saw their design boards as one main part of their product documentation. And they did put in the additional effort to tame the chaos as much as possible.

What remains hard to fix, though, is completeness. Sure, you don't need to see your screens on 50 different screen sizes to know what's going on, but maybe more than two or three?!
How about different scenarios and edge cases? Some are unrealistically mushed together in a single design. Others are simply missing.

What is apparently also hard to fix is accuracy.
Everything being built is specified on a design board. But what also gets built are those small deviations and last-minute changes discussed in a ticket comment or a Slack message, mentioned on Zoom, or shouted across the hallway.

These design tools are great, and they keep on getting better. But they are purpose-built for a different use case - not for understanding your product.

Manual testing (& guessing)

You can always also check things manually. Simply using the actual application is the quickest way to look at a particular screen or feature.
For a web app, open your browser and go to your testing or production environment.
If your product is a mobile app, grab that device and install that test version.

Unless you WFH and you don't have that device.

"I sometimes had to wait a couple of days until I was back in the office.
But it also happened that someone had taken the device. I then had to get her on a video call and asked her to test what I wanted to test and share it with me."

That's an interview response that earned some extra highlighter marks in my notes.

Now, there are some tools to improve this. And also plenty of situations where this is actually not a big issue. You look at your app, navigate to the desired screen and play with the feature you're interested in to see what's happening. Easy as that.

[Photo: mobile app on a smartphone (CoinView App, Unsplash)]

There are also situations where you deal with another issue. Some scenarios and use cases are cumbersome or even impossible to replicate.

Imagine you have a fintech application that behaves differently depending on the state of the current user. Were the terms accepted? Has the profile picture been uploaded? Was the ID verification completed? Was it interrupted? Was there an issue with the ID's expiration date? Is this a first-time user?
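Sketched in code, that state-dependence might look something like this. A hypothetical TypeScript sketch - none of these names come from a real codebase:

```typescript
// Hypothetical user state for the fintech example above.
type UserState = {
  termsAccepted: boolean;
  profilePictureUploaded: boolean;
  idVerification: "not_started" | "interrupted" | "failed_expired_id" | "completed";
  isFirstTimeUser: boolean;
};

// Which screen the user lands on depends on the combination of states.
function nextScreen(user: UserState): string {
  if (!user.termsAccepted) return "TermsScreen";
  if (user.idVerification === "interrupted") return "ResumeVerificationScreen";
  if (user.idVerification === "failed_expired_id") return "IdExpiredScreen";
  if (user.idVerification !== "completed") return "StartVerificationScreen";
  if (user.isFirstTimeUser) return "OnboardingTourScreen";
  return "DashboardScreen";
}

// Four properties already yield 2 x 2 x 4 x 2 = 32 combinations --
// and real applications track far more.
```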
But teams are smart:

"We have a list of about 100 test accounts, where each has a different state in the database. People can recognize the state by the email address itself. We name them something like .
We reset those users every night."

That's, uh...
It seems to work for them. I mean, there are some obvious problems that come with it. But it is a very pragmatic approach. And the alternatives are possibly all worse.
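For illustration, a minimal sketch of what such a setup might look like, assuming some database layer exists - every name here is hypothetical:

```typescript
// State is encoded in the email address; a nightly job (e.g. cron-triggered)
// resets each account back to its defined state.
const testAccounts = [
  { email: "terms-not-accepted@test.example", state: { termsAccepted: false } },
  { email: "id-verification-interrupted@test.example", state: { idVerification: "interrupted" } },
  { email: "first-time-user@test.example", state: { isFirstTimeUser: true } },
  // ...and roughly a hundred more of these
];

// The `db` layer is assumed here, not a real API.
async function resetTestAccounts(db: { upsertUser: (email: string, state: object) => Promise<void> }) {
  for (const account of testAccounts) {
    await db.upsertUser(account.email, account.state);
  }
}
```

The obvious problems presumably start with shared, mutable accounts whose state drifts during the day until the next reset.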

Engineering and QA teams also have automated tests in place. Those do ensure a certain behavior. So we could just read the specified tests and understand how our app behaves in different scenarios, right?
In theory, yes. In practice, absolutely not. First of all, it's code again. Often a lot of it.
And these testing frameworks are built to raise warnings in the DevOps flow when regressions occur. They are not designed to provide product insights.
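To illustrate, here is a typical end-to-end test, written with Playwright; the selectors, copy, and account are made up, not from any team I spoke with:

```typescript
import { test, expect } from "@playwright/test";

// This pins down one specific behavior, which is exactly what it's for.
// But as product documentation it's hard to read: the "why" and the
// surrounding user flow are nowhere in sight.
test("shows a banner for users with pending ID verification", async ({ page }) => {
  await page.goto("/login");
  await page.getByLabel("Email").fill("id-verification-pending@test.example");
  await page.getByLabel("Password").fill("secret");
  await page.getByRole("button", { name: "Log in" }).click();

  await expect(page.getByText("Please finish verifying your ID")).toBeVisible();
});
```

Multiply that by a few hundred test files, and "just read the tests" stops being practical advice.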

So, manual testing.
If you can recreate a use case and see what your app does, you mostly get a pretty accurate picture. Unless you can't see what your app does. Then you are pretty much left in the dark.

Are events being fired?
What kind of data did it send?
Did it locally store that value?

Usually, developers need to verify that.
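Often that verification is just a developer poking at the running app, for instance from the browser devtools console. A hedged sketch - the storage key is made up, and the tracking global assumes a Segment-style setup:

```typescript
// Did it locally store that value? (key is hypothetical)
console.log(window.localStorage.getItem("onboarding_completed"));

// Are events being fired? One common trick: wrap the tracking call
// so every outgoing event gets logged first.
const analytics = (window as any).analytics; // assumes a Segment-style global
if (analytics) {
  const originalTrack = analytics.track.bind(analytics);
  analytics.track = (event: string, props?: object) => {
    console.debug("[debug] event fired:", event, props);
    return originalTrack(event, props);
  };
}
```

What data was actually sent is typically checked the same way, via the devtools Network tab.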

On top of that, manual checks might work for specific scenarios. And sometimes that's good enough. Other times, you need an overall overview - like when onboarding new colleagues. Should they just log in, or rather check out the 20-screen registration flow?
Random exploration will only get people so far.

Finally, being able to manually look at or demonstrate things can be merely a matter of access rights. Founders, executives, salespeople, customer support, and maybe even customers themselves might simply not have access to the right devices, the latest test versions, or test environments.
So how do teams share product knowledge across the organization, and beyond?

Product wikis for everyone

Confluence, Dropbox Paper, Google Docs. Product documentation comes in many forms. It's often part of an overall knowledge management system, sitting next to organizational charts, guidelines, and how-tos.

What sets it apart is that, unlike those procedures or org charts, our digital products keep on changing over time... a lot.

So you could just throw in the towel and resort to everything above.
Most companies try anyway.

If the application is vital to the company, maybe even at the core of its business, it's necessary to ensure that different departments and teams understand its features and are aware of recent or upcoming releases.

[Image: Confluence feature documentation]

And product wikis tick a lot of boxes.

The documentation's sole purpose is to provide product insights.
It's not built for task management or design handoffs.
And it's not code (usually, sometimes the lines get blurry between product and technical docs).
Developers are not bothered with questions. It's self-serve learning material.
And people don't need to have a device or manually explore the application.
Everything is just nicely described in a human-readable way. User flows, edge cases, business logic.
Accessible across the company.

And people find ways to check for accuracy:

"I always check the date of the latest edit. If it's within the last two or three weeks, I'll assume it's up to date."

Ok, uhm. That's not quite the source of truth we were looking for, is it?

If people can't reliably trust it, they are likely to abandon it entirely - simply to reduce the risk of wasting time specifying or discussing something on a wrong basis.
So if the wiki can't be maintained to a degree that ensures a very high level of reliability, all that effort goes into something that's effectively close to 0% useful for most people.

If the matter is important, they just throw in the towel and resort to everything above.
Or they call someone. Usually, they just do that.

Wrapping up

The same approaches were mentioned again and again, with slight variations, in the interviews I conducted. And so were their shortcomings.
Teams have to settle for fragmented and scattered product knowledge. That causes friction and additional effort in day-to-day workflows, often involving valuable engineering time. Time spent explaining features is time not spent building features.

Given the importance of up-to-date product insights for many roles and processes, shouldn't we try harder to find better ways?

I'm convinced there are better ways. Of course, I am. If you are curious, follow our journey at appviewer.io.

And DM me for thoughts, feedback, and suggestions.


Photos by Alvaro Reyes, CoinView App on Unsplash, cookie_studio on Freepik, Otrade Figma Community file by Sobakhul Munir Siroj, Atlassian Jira/Confluence, Microsoft VS Code
