Building Quality Software, Part 2
Users love it when software engineers adopt DevOps
Have you read the previous blog yet? If not, start with Building Quality Software, Part 1.
Alex’s calendar is chock-full of meetings, one-on-one calls, and deadlines. Working in three different teams at once is proving quite a challenge, and Alex would prefer some focus over the current scattered chaos. But it is also plain to see that all three teams could use assistance, especially the Public Website team that Alex joined only two weeks ago. As the Product Owner has made clear multiple times, the “WOTY” (Website of the Year) award is a big deal, and there are only three months left until the judges decide on the winner. The development team is not sure what work needs to be done to increase their chances of winning.
“It’s quite simple,” the most senior developer says during Sprint Planning. “We’re being scored on only one thing: user satisfaction. That’s what we need to focus on.”
The Scrum Master laughs. “So, I guess the Product Owner just has to add a story to the Backlog, name it ‘Satisfaction’, and we’ll be done by the end of this sprint.”
“I know you’re joking,” one of the front-end developers sighs, “but that is the best solution we’ve got. We’ll just have to wing it.”
“I’m sorry,” a third developer says, with some confusion in his voice. “Why are we panicking? I think we’ve done a lot to make sure we have the best possible website. We generate almost no errors, and we can solve the few errors we see in record time. And we have virtually no downtime. We use queue-based solutions for when downtime might cause data loss. Our website is, frankly, awesome.”
“And we’re making lots of profits,” the Product Owner adds with a sly smile.
“I agree, I guess,” Alex says. “But none of what you just mentioned is what the judges will be looking at. And, let’s be fair, it’s not what our actual users are judging us on, either.”
Making users love you
In the last entry in the series, I claimed that quality software requires goals and metrics to achieve those goals, and I discussed three areas where goals and metrics were paramount to determining, establishing, and maintaining quality. Those three areas were: correctness, measured in the number of defects; reliability, measured in the amount of downtime; and costs, measured using the ratio between profits and costs.
That brings me to one of today’s hot takes: none of that matters to your users. Nobody loves software for its correctness, reliability, or profitability. The base assumption of your users will be that your application is one hundred percent correct and reliable. And they do not care one whit whether the application is making its parent company money or running in the red. High correctness, reliability, and profitability will never make your users love your application. (Yet, quite unfairly, an application with a high defect density or terrible uptime will aggravate users to such a degree that they’ll grow to hate it).
So, what makes a user love your application? User satisfaction is often (though not always) the most important predictor of application success. In this second part of my discussion of quality software, we’ll be discussing three concerns, as we did in the first part. This time, however, they will be concerns that often correlate positively with user satisfaction: performance, usability, and integrability. I will describe each of these concerns and, of course, provide some metrics to track them.
But before we can do that, I need to retrace my steps a bit. Last time, while discussing reliability, I briefly mentioned DevOps. This time, we need to examine it in greater detail. DevOps has a lot of benefits, but in the context of this blog, one particular benefit stands out: the use of DevOps is linked to higher user satisfaction. This is why I feel I need to explain DevOps in greater depth than in part 1, as far too many developers are not fully certain of the whats and whys of DevOps.
The crucial feedback loop
DevOps is a combination of two trades: development and operations. This combination is possible for two reasons. First, a focus on developmental and operational efficiency and automation: the use of tooling, continuous integration, continuous deployment, and informed planning is central to making DevOps work. Second, an emphasis on collaboration and communication between (or even the merger of) development teams and IT operations teams. In the end, the goal of adopting DevOps is to deliver software faster and more reliably.
The DevOps cycle involves several stages, each with its own set of tools and processes. The most important part of DevOps – at least to me – is the feedback loop. In DevOps, the development cycle should have a constantly recurring, self-strengthening nature. Each stage feeds into the next, creating more understanding and quicker, more fitting responses, which feed into the next stage again. The only way this cycle can keep feeding itself at such speed is through automation of the software development process, another of the DevOps flagships.
The first stage is planning, which involves gathering feedback and planning future improvements. This is done using tools like Jira, Trello, or Asana, which help track progress and prioritize tasks. The problem for most teams is that planning is done based on the personal preferences of product owners, on vague business goals, or, worse, on playing catch-up to already missed deadlines.
Next up is building the planned changes for your software. This involves writing code and reviewing code changes. DevOps requires developers to use version control systems like Git to manage changes. Automation in this stage mainly takes the form of keeping complexity as low as possible for the developers.
The next stage is continuous integration, which involves compiling (a part of) the code base into executable, releasable files. In the context of DevOps, CI indicates that this process runs automatically whenever changes are introduced. Build tools like Maven, Gradle, or Ant automate this process, making it faster and more reliable. Build automation also helps ensure that all dependencies are included and that the resulting build is consistent across different environments. This stage also involves verifying that the software works as intended. This can be done using a variety of testing tools, including unit tests, integration tests, and end-to-end tests. In DevOps, test automation is a key part of this stage, as it makes it easier to test the software quickly and consistently. Manual testing is not fast enough for a feedback loop.
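To make test automation concrete, here is a minimal sketch of the kind of check a CI pipeline could run automatically on every change, using Node’s built-in test runner. The `calculateTotal` function is a hypothetical stand-in for real application code; the point is that the verification happens in seconds, without human intervention.

```typescript
// cart.test.ts: a minimal automated check a CI pipeline could run on every push.
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical unit under test: sums line items and applies a discount.
function calculateTotal(items: { price: number; qty: number }[], discount = 0): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return subtotal * (1 - discount);
}

test("calculates the total of a cart", () => {
  assert.equal(calculateTotal([{ price: 10, qty: 2 }, { price: 5, qty: 1 }]), 25);
});

test("applies a discount", () => {
  assert.equal(calculateTotal([{ price: 100, qty: 1 }], 0.1), 90);
});
```

A CI server runs checks like these on every commit; if one fails, the build never becomes a release candidate.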
The fourth stage is deploy, which involves releasing the software to production environments. Again, to keep the feedback loop fast-paced, deployments should be automated. One could use Jenkins or CircleCI, for example. The automation also ensures that the software is installed consistently and correctly.
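As a sketch of what such deployment automation can include, consider a hypothetical post-deployment smoke check. The base URL and the `/health` endpoint below are assumptions; the idea is that the pipeline itself verifies the release and fails loudly if something is wrong.

```typescript
// smoke-check.ts: a post-deployment check a pipeline could run automatically.
// The base URL and the /health endpoint are hypothetical placeholders.
const BASE_URL = process.env.DEPLOY_URL ?? "https://staging.example.com";

async function smokeCheck(): Promise<void> {
  const response = await fetch(`${BASE_URL}/health`);
  if (!response.ok) {
    // A non-zero exit code fails the pipeline and blocks a broken release.
    console.error(`Health check failed: HTTP ${response.status}`);
    process.exit(1);
  }
  console.log("Deployment looks healthy.");
}

smokeCheck().catch((error) => {
  console.error("Health check unreachable:", error);
  process.exit(1);
});
```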
After deployment, we reach the stage called operate: people using the software (and people maintaining it). For a true feedback loop, we need to gather as much of this real-world usage as possible and turn it into insights that can help us with our planning. This creates continuous feedback, the final stage. Well, as much as there can be a final stage in a cycle. This feedback consists of technical data, like logging, system health, application downtime, and errors (typically called monitoring), but also functional data, like user behaviour (typically called analytics). I often work with the Azure cloud, so I have a lot of experience with Azure Monitor and the Log Analytics Workspace for monitoring. For analytics, I would advise looking at Matomo and Microsoft Clarity.
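To illustrate how little code it takes to start feeding this loop, here is a minimal sketch using the Application Insights web SDK, which reports into Azure Monitor and Log Analytics. The connection string, event name, and properties are placeholders for your own setup.

```typescript
// telemetry.ts: a minimal sketch of feeding the feedback loop with
// Azure Application Insights. Connection string and event names are placeholders.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: { connectionString: "InstrumentationKey=<your-key>" },
});
appInsights.loadAppInsights();

// Technical data (monitoring): page views, errors, and dependencies are
// collected largely automatically once the SDK is loaded.
appInsights.trackPageView();

// Functional data (analytics): a hypothetical custom event describing user
// behaviour, which later feeds the planning stage.
appInsights.trackEvent(
  { name: "checkout_completed" },
  { paymentMethod: "ideal", durationMs: 1840 }
);
```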
DevOps states that one cannot create value when operations are too separated from development, as they are part of the same cycle.
When not to DevOps
DevOps is, in my opinion, almost a necessary practice to create quality software. But necessary or not, it won’t always work. I’d like to illustrate this point by comparing DevOps to Agile. Strictly speaking, DevOps is not a part of the Agile supergroup. But DevOps is especially effective in conjunction with (certain) Agile methodologies, such as Lean, SCRUM, and Kanban. Agile is not a miracle fix, and it’s not always fit for your software project. Pictured below is one variant of the Stacey Complexity Matrix, detailing when Agile, or specifically SCRUM or Kanban, will be a fit for your development. In a very predictable context, Agile methodologies will create more overhead, leading to its costs outweighing its benefits.
Something similar is going on with DevOps. In certain contexts, it works. In others, it doesn’t. I’ve identified five contexts in which DevOps will not be a good fit for a software project. Unfortunately, there’s no spiffy matrix I can use to represent this visually. Nonetheless, it’s important information.
First, if your company's software development process relies on outdated systems or cumbersome manual procedures, the prospect of implementing DevOps can seem overwhelming. Switching overnight to a DevOps approach might not be feasible for organizations entrenched in monolithic software for years. Instead, a phased approach is recommended, gradually introducing DevOps principles and practices to pave the way for transformation.
Second, in sectors like finance, healthcare, and government, compliance and security regulations are of utmost importance. Disrupting established processes can jeopardize compliance and create security vulnerabilities, leading to disastrous outcomes. Companies operating in regulated industries must strictly adhere to procedures and protocols, conducting thorough assessments before implementing any changes.
Third, amid mergers, acquisitions, or divestitures, the impact on software development processes can be substantial. Organizations must reassess their software delivery methods to align with new business goals and objectives. In such scenarios, undertaking a DevOps transformation may not be the best allocation of resources. Thus, companies need to evaluate the return on investment before embarking on a DevOps journey.
Fourth, organizations facing software crises may be tempted to rush into a DevOps transformation. However, it might not be the most effective path forward. In these situations, identifying the root cause of the software development issues is crucial. Once the problem is pinpointed, adopting specific DevOps processes or tools to address the issue can be more efficient than a complete transformation.
And finally, for companies with infrequent software releases, a comprehensive DevOps approach might not yield substantial benefits. DevOps transformations aim to enhance the efficiency, agility, and automation of software release and delivery. Yet, if a company only releases software a few times per year or quarter, the investment of effort and resources into a DevOps transformation may not deliver significant returns.
Now, you might think it’s weird that I claimed DevOps is necessary for quality software and also claim that not all projects are a good fit for DevOps. Does this mean that if DevOps is not a fit, you cannot deliver quality software? To which I’d say: correct. If you find yourself in a situation where you cannot adopt DevOps, there is a high chance that you either cannot deliver quality or that you are not expected to deliver quality. Another hot take. I’m on a roll!
Why users are happier with DevOps
Last time, we discussed how correctness, reliability, and costs are concerns for the software engineer. As I mentioned earlier, this time I want to discuss performance, usability, and integrability: three concerns that directly affect the software user. Traditional, non-DevOps software development often manages to take the first three concerns into account but fails to consider the second set. That’s because the first set consists of traditional development concerns, while the second set tends to become (visibly) important only to the operational teams.
By combining Dev and Ops, we not only increase the probability that teams take all concerns into account, but also increase their ability to accurately measure those concerns and use real-world operational data to create insights. It is easier to determine beforehand what an application should provide as output for a given user input. It is harder to determine beforehand how long an application should take to generate that output, or the preferred way for the user to provide the input. DevOps helps teams with these matters.
As such, I feel software engineers must always look to adopt DevOps and focus on the automation of their software development process. You need not be part of the operations team, but you are responsible for providing the tools and setting up the process to monitor and analyze the application during the operations stage. If the feedback loop is not closed, the software engineer needs to be the one to notice this and find a way to close it. If the software development process takes too long, the software engineer is the logical person to speed it up. And, most importantly: if user satisfaction is low, the software engineer is doing something wrong.
Now, given all that, let’s look at the three concerns I introduced earlier, starting with performance.
The need for speed
Performance is a measure of how well a software application meets user requirements in terms of the time and resources it uses to provide its service. Users always want faster software, but optimizing performance beyond a certain point may not even be perceivable by humans. The key to achieving optimal performance is to set realistic performance goals that take into account both customer demands and technical feasibility.
One common method of measuring performance is load testing, which simulates user traffic on a system to see how well it performs under heavy load. Another important metric is response time, which measures the amount of time it takes for a system to respond to a user request. Other important metrics include throughput, which measures the number of transactions the software can handle in a given period of time, and resource utilization, which measures how much CPU, memory, and other resources the software is using.
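As a small illustration, here is a sketch of how response times could be collected in code and summarized as a 95th-percentile figure, the kind of number you would set a performance target against. The `timed` helper and the percentile choice are my own assumptions, not a standard API.

```typescript
// response-time.ts: a sketch of recording response times and reporting a p95.
const samples: number[] = [];

// Wrap any async operation and record how long it takes.
async function timed<T>(operation: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await operation();
  } finally {
    samples.push(performance.now() - start);
  }
}

// The p-th percentile: the response time that p% of requests stay under.
function percentile(p: number): number {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

// Usage sketch with a hypothetical request handler:
// await timed(() => handleRequest(request));
// console.log(`p95 response time: ${percentile(95).toFixed(1)} ms`);
```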
To optimize performance, software engineers can use a variety of techniques, such as caching (but do mind privacy and security concerns), parallelism, and more efficient algorithms. However, it is important to know when to use these techniques and when to focus on optimizations that add more value. Performance optimizations should target specific parts of the application where real-world data indicates they are necessary.
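As an example of the first technique, here is a minimal caching sketch with a time-to-live, assuming a hypothetical loader function. Note the caveat above: be careful not to cache privacy-sensitive data.

```typescript
// ttl-cache.ts: a minimal sketch of time-limited caching.
interface Entry<T> {
  value: T;
  expiresAt: number;
}

class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();
  constructor(private ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
    const value = await load(); // miss or stale: load and store
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

// Usage sketch: serve product data from memory for one minute instead of
// hitting the database on every request (loadProduct is hypothetical).
// const cache = new TtlCache<Product>(60_000);
// const product = await cache.getOrLoad(id, () => loadProduct(id));
```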
Another performance optimization in the context of user satisfaction is, ironically, not optimizing it at all. The illusion of performance can be even more important than actual performance. Techniques like progress bars, async processes, and queueing can convince users an application is fast, even if it technically isn’t.
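A sketch of that illusion, under my own assumptions: acknowledge the user immediately, queue the slow work, and let a background worker process it, instead of blocking the interface until everything finishes.

```typescript
// perceived-performance.ts: a sketch of the illusion of performance, using
// instant acknowledgment plus background processing via a simple queue.
type Job = () => Promise<void>;
const queue: Job[] = [];

function enqueue(job: Job): void {
  queue.push(job);
  // The user gets feedback right away; the real work happens asynchronously.
  showStatus(`Request received (${queue.length} in queue)…`);
}

async function worker(): Promise<void> {
  // Process jobs one by one; idle briefly when the queue is empty.
  for (;;) {
    const job = queue.shift();
    if (job) await job();
    else await new Promise((resolve) => setTimeout(resolve, 100));
  }
}

function showStatus(message: string): void {
  console.log(message); // stand-in for a real progress indicator in the UI
}

// Usage sketch: the order feels instant even if the confirmation e-mail
// takes seconds to send (sendConfirmationEmail is hypothetical).
// enqueue(() => sendConfirmationEmail(order));
// void worker();
```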
Rage clicks, dead clicks, and quick backs
Usability is the measure of whether the end-user finds using the software intuitive and easy. User interface (UI) design plays a critical role in ensuring that software is user-friendly and intuitive. Customers expect software to be easy to use and navigate, and failure to meet these expectations can result in frustration and reduced productivity.
Another important aspect of usability is accessibility, which refers to the ability of the software to be used by people with disabilities or other special needs. This is an important enough topic that I’ll return to it in more detail in a later blog.
Usability is a more subjective measure than performance, but there are still ways to quantify it. One common method of measuring usability is through manual user testing and questionnaires, which involves observing how users interact with the software to identify areas of difficulty or confusion. I find that these are – at best – a complementary way of measuring usability, as the process takes a lot of time and is hard to automate, and thus has no place within the feedback loop. A more DevOps-y metric is the number of clicks it takes to complete a task, with fewer clicks generally indicating better usability.
Some more metrics can be found in the field of analytics. These metrics assume a point-and-click-based interface. Rage clicks are when users repeatedly click or tap on an element in a website or app, indicating frustration. Rage clicks can be caused by many things, like lacking performance, broken functionality, or unclear design. Whatever the cause, rage clicks always indicate frustration and decreased user satisfaction. Similarly, dead clicks, meaning users clicking on non-interactive elements, indicate some sort of poor or unexpected user experience, usually misleading interface design. Finally, quick backs occur when a click leads users away from the current page to another (part of the) website, and the user returns to the original page within a certain threshold of time because the new content was not what they were looking for. This is indicative of a bad user journey, unclear naming or representation, or missing expected content.
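For a feel of how such signals are captured, here is a browser-side sketch that flags rage clicks: several clicks on the same element within a short window. The thresholds (three clicks in 700 ms) are arbitrary assumptions, and `reportRageClick` is a hypothetical hook into whatever analytics tool you use.

```typescript
// rage-clicks.ts: a browser-side sketch for detecting rage clicks.
const WINDOW_MS = 700; // sliding time window (assumption)
const THRESHOLD = 3;   // clicks within the window that count as "rage" (assumption)

let lastTarget: EventTarget | null = null;
let clickTimes: number[] = [];

document.addEventListener("click", (event) => {
  const now = Date.now();
  if (event.target !== lastTarget) {
    // A different element was clicked: start counting afresh.
    lastTarget = event.target;
    clickTimes = [];
  }
  // Keep only the clicks that fall inside the sliding window.
  clickTimes = clickTimes.filter((t) => now - t < WINDOW_MS);
  clickTimes.push(now);

  if (clickTimes.length >= THRESHOLD) {
    reportRageClick(event.target as HTMLElement);
    clickTimes = []; // avoid reporting the same burst twice
  }
});

function reportRageClick(element: HTMLElement): void {
  // Stand-in for a custom analytics event (e.g., in Matomo or Clarity).
  console.warn("Rage click detected on", element.tagName, element.id);
}
```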
To optimize usability, software engineers can use a variety of techniques, such as user-centered design, prototyping, and heuristic evaluation. Tools like Figma can greatly help in visualizing a design, which in turn increases the amount of thought put into it.
No integrability is grating
Finally, integrability is a measure of how easily the software can be integrated with other systems, to increase functionality and/or the amount of control over that integration. As more and more systems become API-based, integrability is becoming a critical factor in software development. Users expect software to integrate seamlessly with other systems, and failure to meet these expectations can result in reduced functionality, reduced productivity, and – yes – reduced user satisfaction. Great integrability, however, can significantly increase user engagement.
It is important to note that integration, while it seems a two-way affair, often takes the form of a provider-consumer relationship. Either you want others to provide you with something, or you want others to consume what you provide. It is of the utmost importance that software engineers are very aware of which of these two roles the application (or application feature) should take. Generally, providers need to allow for very generic integration, while consumers can build specifically for a certain provider.
So, how do you know how integrable your application is with other systems? An important metric is the integration lead time for new integrations: the amount of time that passes between an official declaration of the wish to integrate and the moment the integration is realized. Another metric to consider is the number of APIs or other integration points that the software offers, as well as the ease of use and documentation of those APIs. A final metric is the ease of implementation, with simpler and more standardized integrations generally indicating better integrability.
To optimize integrability, software engineers can use a variety of techniques such as open standards, API-first design, and component testing. Finally, in the case of integration, it is often wise to focus on extendibility and generic, customizable implementations over specific, unique interfaces. The feature toggle pattern can be especially helpful in this case. These optimizations are especially important to consider in the planning stage before the integration is first made.
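To close this section, here is a minimal sketch of the feature toggle pattern in the integration context: new integration points can be switched on per consumer, without redeploying. The toggle names and consumer identifiers are hypothetical.

```typescript
// feature-toggles.ts: a minimal sketch of the feature toggle pattern.
type ToggleName = "csv-export" | "webhook-notifications";

// Each toggle decides, per consumer, whether the feature is available.
const toggles: Record<ToggleName, (consumerId: string) => boolean> = {
  // Generally available to every consumer.
  "csv-export": () => true,
  // Gradual rollout: only enabled for selected integration partners.
  "webhook-notifications": (consumerId) =>
    ["partner-a", "partner-b"].includes(consumerId),
};

function isEnabled(name: ToggleName, consumerId: string): boolean {
  return toggles[name](consumerId);
}

// Usage sketch inside a hypothetical API handler:
// if (isEnabled("webhook-notifications", request.consumerId)) {
//   await sendWebhook(request);
// }
```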
Quality Software
So, what does this all mean? How do you, as a software engineer, create and maintain quality software? The short answer: with lots and lots of effort. The slightly longer answer: it requires a delicate balance of correctness, reliability, cost optimization, performance, usability, and integrability, as well as many other user wishes, customer demands, and technical requirements. This balance is only maintainable when a software engineer can check the following boxes:
- They are aware of the different technical and functional concerns for their software;
- They discuss and determine metrics, and constantly measure whether the targets are being met;
- They help frame all the activities of the team as ways to increase the scores on the various metrics to increase both accountability and value;
- They understand DevOps and adopt it, if possible;
- They can explain all the above to others if needed.
Or, in other words: quality software requires knowing about quality and software.
Website of the Year
A week later, Alex and the Public Website team are once again in a meeting. Alex has spent a lot of time this week helping the Product Owner make sense of the available analytics for the website, and together with the rest of the team, they created targets for several of the metrics they deemed important. Today, all that effort should pay off. The senior developer turns her laptop to the rest of the team. On the screen is a dashboard filled with bars, diagrams, and other visualizations. “Well, we knew that nothing particularly frustrated our users. But now, we know which parts they love as well.” She points to a list of the most visited pages.
“Even though these pages are already quite fast, we should focus on improving performance, and especially the perceived performance, on these pages. Users visit these pages often, and any gains we make here will provide outsized benefits. Secondly, people prefer our buttons over simple links. Almost all of the most visited pages are reached by navigation or by buttons, while some of the least visited pages are behind links.”
The Product Owner nods and then points to the diagram visualizing dead clicks. “These dead clicks indicate that people expect this text here to be a link. So why not make that a reality?”
Alex agrees and then adds: “Even though it won’t help us win the WOTY award, some of our most popular functionality is not present in the mobile app. That seems like a missed chance. We should update our APIs to allow the App team to use these features as well.”
Alex and the team have only two-and-a-half months left, but they do what they can. After the deadline is over and the judging is done, all they can do is wait for the results. The team is ecstatic to hear that they have placed second and were rated highest in many of the categories. Proudly, the Product Owner requests that the team add a banner to the website, to show the world their success, and the team plans a small get-together to celebrate.
It is during those celebrations that the project manager for one of Alex’s other teams, the Public API team, walks in unexpectedly. “Hey!” Alex says with a smile. “Came to join our festivities?”
“No, I’m sorry. I have some bad news,” the project manager says.
“What is it?” Alex asks.
The project manager pinches the bridge of his nose, then sighs deeply, before dropping the bombshell. “We’re being audited.”
An overview
Here is a full overview of what has already been posted in this series and what is still to come:
Contact me
As always, I ask you to contact me. Send me your corrections, provide suggestions, ask your questions, deliver some hot takes of your own, or share a personal story. I promise to read it, and if possible, will include it in the series.