Let's assume that you know what you want to achieve. The risks have been analyzed, and the idea has been judged to be a good one. What's next? Time to get down to business! Where to start? What not to forget? You will find the answers to these and several other questions in this article. A story based on facts.
A few months ago I wrote about strategy and about building and developing an IT product - how to make decisions, analyze market opportunities, decide what to do first, and assess whether an idea is good and worth implementing. I wouldn't be myself if I stopped at dry theory and didn't serve JustJoinIt readers some real product meat: that process in practice :)
- There is no single straight path - the product roller coaster.
- Experimentation is an ongoing process - why can't you rest on your laurels?
- The role of the Discovery process - why is it so important?
- Validation of an idea - why does more sweat in training mean less blood in the ring?
- Implementation (experimental phased release) - why is it not always worth going all the way?
- An iterative approach to difficult challenges - or how not to go crazy?
There is no single straight path - the product roller coaster
Late last year, we at Tidio had some ideas about revamping the billing model for chatbots. We wanted to move from a usage-based to a performance-based model (spoiler: it didn't work out, but it's not all bad - an even better idea is currently being implemented). However, we didn't have accurate data on how visitors to our users' sites were interacting with the bots. If we didn't know that, what were the customers themselves supposed to say? We started with the basics, namely Discovery. After a few iterations, not only were we able to gather a ton of valuable quantitative data on how the automations work, but more importantly, we were able to satisfy users who wanted that data so they could automate customer service (and drive sales) even better.
That's how chatbot analytics came about, which I want to discuss in this case study. A real roller coaster. Quite a challenge, but most of all a space to conduct an interesting process, thanks to which we pushed the whole product in a new direction.
Experimentation is an ongoing process - why you cannot rest on your laurels
Everyone is probably familiar with the sci-fi movie trope of time travel, which implies that if time is not one-dimensional like a line but two-dimensional like a piece of paper, there is suddenly enough space to bend the time loop back into the past. The same is true of the process of making a product. Granted, we don't go back in time, but we are able to take a step (or many steps) back at each stage. Product management, as opposed to project management, has no time limit (unless the product fails), so the process must be flexible enough to let us move around freely.
Previously, I presented a framework based on Hypothesis Driven Development, built in turn on a Kanban board, that facilitates agile product management. It was a simplified visualization of how a Product Manager can manage the process of creating new functionalities. Inspired by the Design Thinking method of creative problem solving, I developed this flow with the Double Diamond model, which focuses on a deep understanding of the problem on the one hand and on finding the best solution on the other.
Such a flow allows you to work wisely - properly working through the user's problem not only makes the designed functionality actually useful, but also enables proper planning of the work and more accurate estimates, and lowers the total cost of the initiative.
I mentioned at the outset that the initial goal was not achieved, even though the functionality itself was supposed to be only part of bigger changes. Does that mean we failed? On the contrary! Moving nimbly between the realm of problems and the realm of solutions is the key to success. We come up with an idea, dig into the details, start doing something, and revise. It's not uncommon to take a turn. This is where the Discovery process comes in handy.
The role of the Discovery process - why is it so important?
The Product Discovery process is an iterative (repeatable) process of reducing uncertainty around a problem or idea to make sure the right product is created for the right audience. It is not a cure-all or a 100% guarantee of success, but you can use it to minimize uncertainty and focus on what's important.
The most important element of the above definition of the Discovery process is "repeatability." It is a continuous process of improvement, not a one-time study. Ideas need to be verified continuously, and market feedback on product elements that are already live should be analyzed, validated, and fed back into new implementations. Such feedback should inspire us to discover new concepts within our original idea. I can confidently say that this is the essence of agile product development, where one of the most important principles is inspecting what we have already done and adapting to the current situation.
Constant contact with customers definitely facilitates and accelerates gathering insights, which lets the Discovery process run continuously - 24/7. As a result, every new product initiative has a chance to be created in line with users' expectations. Below is an example that was used to work out the problem at the initial stage of work on chatbot analytics.
Implementing new functionality for users and leaving them to their own devices can be quite risky. Even before introducing it to a larger audience, it's a good idea to talk to selected customers about their impressions and further expectations. The Discovery process is ultimately about continuous improvement, not a one-time survey, and it gives us a chance to find out how the new functionality is received. In the context of chatbot analytics, it helped us create and redesign solutions better suited to the needs and expectations of the target group. It also prevented us from incurring the cost of developing a solution that customers would not find attractive - as evidenced by the positive feedback, with a few suggestions, that we gathered during interviews before presenting the functionality to a wider audience.
Validation of an idea - why does more sweat in training mean less blood in the ring?
I recently presented an interesting tool - the Validation Field - which allows you to structure the process of validating an idea: define success metrics and, already at this stage, decide on next steps under positive, negative, and neutral scenarios.
I'm a big fan of conceptual work because it forces you to look at an issue from a broader perspective. They say that if you can't fit an idea on one page, it's a bad idea. Simplicity is definitely key here (the problem itself can be complex, but at an abstract level we should be able to present it in the proverbial 30-second elevator pitch).
That's why, even before the development team does any work, it's worth spending some time on assumptions, expected results, and next steps. A perfect example of the rule that the more you sweat in training, the less you bleed in the ring.😇
This and similar templates are great tools to quickly connect all the dots.
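To make the idea concrete, here is a minimal sketch in Python of how the three scenarios from such a template could be encoded. The class, field names, hypothesis, and thresholds are all invented for illustration - the actual Validation Field is a document template, not code:

```python
from dataclasses import dataclass

@dataclass
class ValidationField:
    """Hypothetical encoding of one Validation Field entry: a success
    metric plus thresholds that map an observed result to a next step."""
    hypothesis: str
    metric: str
    positive_threshold: float  # at or above: the bet paid off
    negative_threshold: float  # at or below: the bet failed

    def next_step(self, observed: float) -> str:
        if observed >= self.positive_threshold:
            return "positive: scale the solution"
        if observed <= self.negative_threshold:
            return "negative: pivot or drop the idea"
        return "neutral: run another iteration and gather more data"

field = ValidationField(
    hypothesis="Users will open the new chatbot analytics view weekly",
    metric="weekly active share of eligible accounts",
    positive_threshold=0.30,
    negative_threshold=0.10,
)
print(field.next_step(0.18))  # falls between thresholds, so the neutral scenario
```

The point of deciding the thresholds before shipping is exactly the "sweat in training" rule above: the hard conversation about what counts as success happens on paper, not in the ring.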
Implementation (experimental phased release) - why is it not always worth going all the way?
When developing large, global products, it is good practice to roll new features out to users gradually (e.g., behind a feature flag). This way you can catch mistakes before they reach everyone in production, but most importantly you can correct the set course early.
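As an illustration, a gradual rollout behind a feature flag is often implemented by deterministically bucketing users, so the same user keeps the same experience as the percentage grows. A minimal sketch, assuming hypothetical function and flag names (real products typically use a feature-flag service rather than hand-rolled code):

```python
import hashlib

def is_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministic percentage rollout: hash the feature+user pair into
    one of 100 buckets. A user is enabled while their bucket number is
    below the rollout percentage, so widening the rollout never switches
    an already-enabled user off."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Start with 5% of users, then widen to 50% once the feature looks stable.
users = ["u1", "u2", "u3", "u4", "u5"]
early = {u for u in users if is_enabled("chatbot-analytics", u, 5)}
wider = {u for u in users if is_enabled("chatbot-analytics", u, 50)}
assert early <= wider  # everyone enabled early stays enabled later
```

Keying the hash on both the feature name and the user ID means each flag slices the user base differently, so the same small group of customers is not the guinea pig for every experiment.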
Practice shows that even the best, most detailed test scenarios do not always give a 100% guarantee that we have not missed something. After all, we are only human and mistakes happen. What's more, if the product allows users to interfere with the logic of the tool's operation (e.g. through settings - in our case, the flow of the bot's operation), there will most likely be some customers who do something that perplexes us. Users often unknowingly use functionality differently than intended (perhaps poor education is to blame?). There is no other option than to identify such edge cases empirically.
However, the greatest value is the space for interviews with users (as I wrote above). After all, it is for them that products are built and new functionalities implemented. They say it's easy to build a product; what is difficult is to build a product that people will love, that satisfies their needs and solves their problems. Until the solution is delivered to end customers, we are operating on hypotheses and assumptions. Usability testing is the primary source of information on whether users will easily find the right information or complete a particular process. It is also a chance to see how they read the content provided to them and how they navigate the new version of the website. The conclusions, in turn, allow you to implement improvements that will ultimately satisfy everyone.
Is it worth implementing new functionalities at a rapid pace? Definitely yes. Is it worth implementing them at any cost? Definitely not. As long as there is a clear increment and uncertainties are reduced, we are moving at a good pace.
An iterative approach to difficult challenges - or how not to go crazy?
In product initiatives, where the goal often consists of vague assumptions and there are many ways to reach it, there is only one way to keep from going crazy - an iterative approach. It assumes that you create a detailed plan only for the nearest period of time, the so-called iteration. In each iteration you aim to achieve specific goals and build certain parts of the product, and depending on the results of the work, you plan the next iteration. There are no rigid rules about duration - it can be, for example, a month - and there can be many iterations. It is difficult to estimate with a high degree of certainty how long it will take to create a complex solution that many programmers are working on.
Such an approach to product management makes the action plan more and more detailed step by step during implementation.
Each subsequent iteration is also a chance to draw conclusions and implement improvements. Thanks to interesting techniques, this whole process can also be a lot of fun and help integrate the team. It is worth experimenting with different approaches, drawing conclusions, and then building only the best products - the ones users most expect.