Leading alignment on product development process and cross-functional collaboration

LaunchGood, a crowdfunding startup I worked at, had 2 designers and 3 engineers supporting the entire platform. As the company expanded its product portfolio, leadership realized the product team needed to grow significantly to keep up with its rapid growth.

The product team gained more designers and engineers, along with more products to work on. The team was fully remote, spread across the globe. With so many new members joining so quickly, however, communication and keeping a clear focus on what the product team should be working on became a challenge.

Suggesting a process

I had a Scrum master certification and was generally interested in helping the team work better together; in college, I had helped scale a small student club to 25 officers across 4 teams.

Before we started this initiative, there was no process for product development whatsoever, not even code reviews. Big yikes.

I suggested implementing an agile process: we needed regular touchpoints with the team while still letting them do their thing, and we needed the ability to quickly change direction on our work depending on business needs and goals across multiple products.

Establishing consistency for a remote team

During the first part of our annual team retreat, we took the opportunity to really set the groundwork for getting everyone on the same page. We spent the time aligning on clear expectations for how the process would work - things such as when standups would be, who would lead them, how stories would be written and organized in Jira, when we would have retros and sprint planning meetings, etc.

Note: this was focused primarily on development.

We also had to consider how we would do things on GitHub, for example feature branches and code reviews. A senior software engineer from the BBC helped us understand, for reference, how things were done there as a mature team.

Another thing to consider was how to handle communication. For example: if you can't make standup, how should your update be shared? Or, when having conversations in Slack, please use threads to make messages easier for everyone to read.

Dry run with the process

Now that we had some basic structure defined to help the team communicate and do work, we spent the next part of our team retreat doing a test run of everything we had discussed in person, thinking through any adjustments that made more sense.

Using the process for a bit

After the team retreat was over, the process started for real with the team going back to being remote. While we definitely felt like it resulted in better teamwork and quality of work, we realized we had some other problems to solve.

It became clear that we needed to consider a more cross-functional approach to address what we should work on and how much.

A more cross-functional approach

A few things we had identified:

  • We had trouble balancing multiple products, each with its own timeline and priorities, with 1 product team.
  • There was no upfront technical feasibility assessment before committing to major initiatives.
  • We needed a better way for engineers to keep up to date with our designs and catch any potential issues before the sprint started.
  • We had no clear way to prioritize urgent bug fixes.

As a team (design, development, leadership, operations, marketing), we ideated and also drew inspiration from how Basecamp structured their work. We started doing the following to help with the above problems:

  • We started doing feature definition meetings where engineers, designers, and business stakeholders worked together to think through the general features and their technical feasibility. Things that could be worked on immediately went to design to flesh out, then into the sprint backlog for the developers to work on.
  • We also had design review meetings with the engineers so they could understand our design and point out any technical issues before it was already in the sprint.
  • Focusing on 1 product per 6-week period, with 80% of stories for that product and 20% allocated to extras like bugs or general fixes. This helped us take 10 steps in 1 direction instead of 1 step in 10 directions, and was also crucial for important products like Ramadan Challenge.
  • Assigning product managers to each product, to help inform design and development of the priorities per product per sprint.
  • Better calculating the "true" availability of the product team and using that to inform a cap of work per sprint (i.e., hours planned to work per week minus team meetings). This cap was also adjusted for certain situations, such as past velocity and any foreseen events/product launches.
  • Hardcapping the product team's story capacity per sprint. This forced business to pick and choose what was important over the next sprint or so instead of just saying do what you can.
  • Creating a channel called #dev-requests, where anything urgent had to be submitted for consideration. If we took something on, we swapped it out for something similar in size so the overall work for the sprint didn't increase.
  • Expanding sprint prioritization to include the different PMs across products to help inform priority of various stories.
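The "true availability" cap described above boils down to a small calculation. Here's a minimal sketch of that math; all the team sizes, hours, and adjustment factors below are hypothetical, not LaunchGood's actual figures:

```python
# Hypothetical planned working hours per week, per team member.
TEAM_HOURS = {
    "designer_a": 40,
    "designer_b": 32,
    "engineer_a": 40,
    "engineer_b": 40,
}
MEETING_HOURS_PER_WEEK = 6   # standups, retros, sprint planning, etc.
SPRINT_WEEKS = 2
VELOCITY_ADJUSTMENT = 0.9    # trimmed based on past velocity or upcoming launches

def sprint_capacity(team_hours, meeting_hours, weeks, adjustment):
    """'True' availability: planned hours minus meetings, scaled by history."""
    available = sum(max(h - meeting_hours, 0) for h in team_hours.values()) * weeks
    return available * adjustment

cap = sprint_capacity(TEAM_HOURS, MEETING_HOURS_PER_WEEK, SPRINT_WEEKS, VELOCITY_ADJUSTMENT)
product_focus = cap * 0.8  # 80% of the cap goes to the focus product
slack = cap * 0.2          # 20% reserved for bugs and general fixes
```

The hard cap (`cap`) is what forced the pick-and-choose conversation with business: once the sprint reached it, adding a story meant removing one of similar size.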

These changes helped the product team (now inclusive of product management) to be more focused and have more realistic expectations for product work. It also helped improve the percentage of work completed per sprint, and made the team more comfortable with the work assigned. The team also had its first 100% completed sprint!

We now needed to think about how to measure the work we were doing and ensure we were going in the right direction.

KPIs, Hypotheses, and User Research

I took a Udemy course on product management, led by a Sr. PM at SoundCloud and a startup founder, in order to get a more solid education.

I realized that we needed to better understand our users' needs, as well as validate the assumptions about users we were baking into our product decisions. Additionally, we weren't monitoring metrics that would give us a more real-time feel for the product (apart from funds raised).

At the time we didn't really have the time, or a good method figured out, for remote user research, but I worked with another business lead on creating surveys. For campaign management, where the team had a closer relationship with those users (campaign creators), I was able to do some quick user interviews. This was the first time we had really started doing proactive user research.

Additionally, my fellow designers stumbled across FullStory, a really awesome tool for monitoring and observing user behavior on the site, which also allowed us to track certain funnels such as onboarding. It helped us improve the product right away as we spotted issues, whether usability problems or bugs. It also helped us start tracking metrics such as onboarding conversions, donation conversions, etc.

I also suggested tracking assumptions and hypotheses, and for Ramadan Challenge we piloted this with very small things, like changing a button. This got us into the mindset of being aware of the assumption that led us to want to make a change, making the change, measuring the result, and then acting on that information.
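That assumption → change → measure → act loop can be captured with very little structure. As a sketch (the record shape and the example entry are mine, not what we actually used):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    assumption: str     # what we believe about users
    change: str         # what we changed to test it
    metric: str         # what we measured to find out
    result: str = ""    # filled in after measuring
    decision: str = ""  # act on the result: keep, revert, or iterate

# Hypothetical example of the kind of "super small" change we piloted.
button_test = Hypothesis(
    assumption="A more prominent donate button will increase donation conversions",
    change="Enlarged the donate button on campaign pages",
    metric="Donation conversion rate",
)
```

Even a shared spreadsheet with these five columns gets the team into the habit of naming the assumption before shipping the change.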

Many products, many PMs

At this point, leadership had a roadmap of products to work on, and with business leads/product experts/PMs now assigned to each product, we wanted to make it easy for each PM to work with these tools and methods.

Across products, we created what we called PODs (Product Overview Documents). These were meant as a way for everyone to align on the goal of the product, the key metrics we wanted to measure, release dates, assumptions, data points, etc. Each was a template Word document that the PM and the team could fill out while working on the product.

Where to next?

There are many directions in which the way we worked at LaunchGood could evolve. I personally would have liked to focus on better ways to do more user research upfront, as well as on getting user feedback before a feature was committed to a sprint.

Another interesting area is asynchronous work. While the team was already remote, working synchronously when possible did help align the team early on; but with so many timezones, and a now more mature team, it would have been interesting to try out asynchronous work.

Looking back

This was certainly one of the most interesting things I've done in my career, as I really enjoyed trying to solve the problem of how our team should work and what the best way to get there was. It's a great feeling when you see the work coming together and the team rolling.

I learned a few things as I reflected on the experience:

  • You need to get team buy-in before changing the way something is done, because it affects more than just you, and people will probably come up with something better or bring up something you hadn't considered.
  • Don't blindly copy everything you see when looking for inspiration for a process. Take what's good and relevant for your team and situation. For example, we didn't copy Basecamp's 100% focus on 1 chunk with a 2-week cooldown; we did 80/20 because we couldn't necessarily afford the break and didn't have a separate dev team for maintenance.
  • Try visually showing something if you're explaining a new concept.
  • Always keep in mind the bigger picture - helping the team work better so the impact from the product they make is better.
  • Take advantage of customer support - you can learn plenty from there. I actually took a couple of customer support shifts to see what that was like, and it was really interesting.
  • Looking back, the process evolved iteratively. Try to solve one problem at a time and keep building upon and improving what you have. Fixing process problems takes a long time and you can't fix everything at once.