
Software Engineering: A Modern Approach

Marco Tulio Valente

3 Requirements

The hardest single part of building a software system is deciding precisely what to build. – Frederick Brooks

This chapter begins with a presentation on the importance of software requirements and their different types (Section 3.1). Next, we characterize and present the activities that comprise what we call Requirements Engineering (Section 3.2). The next four sections (Sections 3.3 to 3.6) present a variety of techniques and documents used in the specification and validation of requirements. Section 3.3 focuses on user stories, which are the principal instrument for defining requirements in agile methods. Following that, Section 3.4 elaborates on use cases, which are more detailed documents for expressing requirements. In Section 3.5, we explore the concept of Minimum Viable Product (MVP), a popular technique for rapidly validating requirements. To wrap up, Section 3.6 provides insights on A/B testing, a common practice for selecting the requirements of software products.

3.1 Introduction

Requirements define what a system should do and the constraints under which it should operate. What a system should do falls under Functional Requirements, while the constraints under which it operates are described by Non-Functional Requirements.

To more clearly illustrate the differences between these two types of requirements, let’s revisit the home-banking system example from Chapter 1. For such a system, the functional requirements include features like reporting the balance and statement of an account, processing transfers between accounts, executing bank slip payments, canceling debit cards, among others. In contrast, the non-functional requirements are tied to the quality attributes of the system, including performance, availability, security, portability, privacy, memory and disk usage, and more. Essentially, non-functional requirements refer to operational constraints. For example, it is not enough for our home-banking system to implement all the functionalities required by the bank. It also needs to have 99.9% availability—which thus acts as a constraint on its operation.

As Frederick Brooks emphasizes in the opening quote of this chapter, requirements specification is a critical stage in software development processes. For example, it is pointless to have a system with the best design, implemented in a modern programming language, using the best development process, with high test coverage, if it does not meet the needs of the users. Problems in the specification of requirements can also have high costs. The reason is that major rework might be required when we discover—after the system is implemented and deployed—that some requirements were specified incorrectly or that important requirements were not implemented. At worst, there is a risk of delivering a system that will be rejected by users because it does not solve their problems.

Functional requirements are frequently specified in natural language (e.g., in English). Conversely, non-functional requirements are specified using metrics, as illustrated in the following table.

Non-Functional Requirement | Metric
Performance | Transactions per second, response time, latency, throughput
Space | Disk usage, RAM, cache usage
Reliability | % of availability, Mean Time Between Failures (MTBF)
Robustness | Time to recover after a failure (MTTR); probability of data loss after a failure
Usability | User training time
Portability | % of portable lines of code

Using metrics for defining non-functional requirements avoids nebulous specifications like "the system should be fast and have high availability." Instead, it is recommended to define, for example, that the system should ensure 99.99% availability and that 99% of the transactions conducted in any 5-minute window should have a maximum response time of 1 second.
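To make this kind of constraint concrete, here is a minimal sketch (in JavaScript, with illustrative data; meetsResponseTimeSLO is a hypothetical helper) that checks whether the response times observed in a given 5-minute window satisfy the second requirement:

function meetsResponseTimeSLO(responseTimesMs, maxMs = 1000, quantile = 0.99) {
  // Sort the response times and inspect the 99th percentile observation
  const sorted = [...responseTimesMs].sort((a, b) => a - b);
  const idx = Math.ceil(quantile * sorted.length) - 1;
  return sorted[idx] <= maxMs;
}

// 995 fast transactions (200 ms) and 5 slow ones (1.5 s) in a window
const times = Array(995).fill(200).concat(Array(5).fill(1500));
console.log(meetsResponseTimeSLO(times)); // true: 99.5% finished within 1 s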

Some authors, such as Ian Sommerville (link), also divide requirements into user requirements and system requirements. User requirements are high-level, non-technical, and usually written by users in natural language. Conversely, system requirements are more technical, precise and defined by developers. Often, a single user requirement expands into a set of system requirements. As an example, in our banking system, a user requirement like the system should allow funds transfer to another bank’s checking account via wire transfers would result in system requirements specifying the protocol that should be used for such transactions. Essentially, user requirements are closer to the problem while system requirements lean towards the solution.

3.2 Requirements Engineering

Requirements Engineering refers to activities such as the identification, analysis, specification, and maintenance of a system’s requirements. The term engineering is used to emphasize that these activities should be performed systematically throughout the system’s lifecycle, using well-defined techniques whenever possible.

The process of identifying, discovering, and understanding the system’s requirements is termed Requirements Elicitation. Elicitation, in this context, implies drawing out the main requirements of the system from discussions and interactions between developers and the system’s stakeholders.

We can use various techniques for requirements elicitation, including conducting interviews with stakeholders, issuing questionnaires, reviewing organizational documents, organizing user workshops, creating prototypes, and analyzing usage scenarios. Other techniques rely on ethnographic studies. Ethnography, a term whose roots trace back to Anthropology, refers to studying a culture in its natural environment (ethnos, in Greek, means people or culture). For instance, to study a newly discovered indigenous tribe in the Amazon, an anthropologist might move to the tribe’s location and spend months living amongst them, understanding their habits, customs, language, etc. Similarly, in the context of Requirements Engineering, ethnography is a technique for requirements elicitation that recommends that developers integrate into the work environment of the stakeholders and observe—typically for several days—how they perform their tasks. Note that this observation is silent, meaning that the developer should not interfere with or express personal views about the observed tasks and events.

Once requirements are elicited, they should be (1) documented, (2) validated, and (3) prioritized.

In Agile development, requirements are documented using user stories, as previously discussed in Chapter 2. However, in some projects, a Requirements Specification Document might be necessary. This document describes the requirements of the software to be built—including functional and non-functional requirements—normally in natural language. In the 90s, the IEEE 830 Standard was proposed for writing such documents. This standard was suggested within the context of Waterfall-based models, which, as we studied in Chapter 2, have a separate phase for requirements specification. The main sections of the IEEE 830 standard are presented in the next figure.

Template of a requirement specification document following the IEEE 830 standard

After specification, requirements should be inspected to ensure they are correct, precise, complete, consistent, and verifiable, as described below:

  • Requirements should be correct. For example, an incorrect computation of savings account returns in a banking system could result in losses for either the bank or its clients.

  • Requirements should be precise to avoid ambiguity. In fact, ambiguity occurs more frequently than we’d like when using natural language. For example, consider the following condition: to be approved, a student needs to score 60 points during the semester or score 60 points in the Special Exam and attend the classes regularly. Observe that it admits two different interpretations. The first is: (60 points during the semester or 60 points in the Special Exam) and attend classes regularly. But it can also be interpreted as: 60 points during the semester or (60 points in the Special Exam and regular attendance). As shown, parentheses were used to remove the ambiguity in the combination of the and and or operators (the sketch after this list makes the difference concrete).

  • Requirements should be complete to ensure all necessary features, especially the most relevant ones, are considered and are not forgotten.

  • Requirements must be consistent. Inconsistency arises when different stakeholders have distinct expectations—for example, one stakeholder expects an availability of 99.9%, but another believes 90% suffices.

  • Requirements should be verifiable, implying we can check their implementations. For example, just stating that a system should be user-friendly is vague; how can developers verify if they’ve met the customers’ expectations in this case?
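Returning to the ambiguity example, a quick way to see that the two readings diverge is to evaluate them as boolean expressions. In the sketch below, a student scored 60 points during the semester but did not attend classes regularly:

// Student: 60 points in the semester, less than 60 in the Special Exam,
// irregular attendance
const semester60 = true, specialExam60 = false, attendance = false;

// Reading 1: (60 in semester OR 60 in Special Exam) AND regular attendance
const reading1 = (semester60 || specialExam60) && attendance;

// Reading 2: 60 in semester OR (60 in Special Exam AND regular attendance)
const reading2 = semester60 || (specialExam60 && attendance);

console.log(reading1, reading2); // false true: the student fails under one
                                 // reading and passes under the other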

Lastly, requirements must be prioritized. At times, the term requirements is taken literally, i.e., as a list of mandatory features and constraints in software systems. However, not everything specified by the customers will be implemented in the initial releases. For instance, budget and time constraints might cause the delay of certain requirements.

Furthermore, requirements can change, as the world changes. For example, in the banking system mentioned earlier, the rules for savings account returns should be updated every time they are changed by the responsible federal agency. Thus, if a requirements specification document exists, it should be updated, just like the source code. The ability to identify the requirements implemented by a given piece of code and vice versa (i.e., to map a particular requirement to the code implementing it) is called traceability.
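Traceability has no single standard implementation, but one lightweight convention is to tag code with the identifiers of the requirements it implements, so that the mapping can be recovered with a simple search. A sketch, assuming hypothetical requirement IDs:

// Implements REQ-017 (hypothetical ID kept in the requirements document
// or issue tracker): compute the monthly returns of a savings account
function savingsAccountReturns(balance, monthlyRate) {
  return balance * monthlyRate;
}

Finding the code that implements a requirement is then a search away (e.g., grep -rn "REQ-017" src/), and the comment points back from the code to the requirement.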

Before concluding, it’s important to mention that Requirements Engineering is a multi-disciplinary activity. For instance, political factors might motivate certain stakeholders not to cooperate with requirements elicitation, particularly when this might threaten their status and power within the organization. Other stakeholders may simply not have time to meet with developers to explain the system’s requirements. Moreover, a cognitive barrier between stakeholders and developers might also impact the elicitation of requirements. For example, since stakeholders are typically seasoned experts, they might use a specialized language, unfamiliar to developers.

Real World: To understand the challenges faced in Requirements Engineering, in 2016, about two dozen researchers organized a survey with 228 software-developing companies spread across 10 countries (link). When asked about the main problems faced in requirements specification, the ten most common answers were as follows (including the percentage of companies that cited each problem):

  • Incomplete or undocumented requirements (48%)
  • Communication flaws between developers and customers (41%)
  • Constantly changing requirements (33%)
  • Abstractly specified requirements (33%)
  • Time constraints (32%)
  • Communication flaws among team members (27%)
  • Difficulty to distinguish requirements from solutions (25%)
  • Insufficient support by customers (20%)
  • Inconsistent requirements (19%)
  • Weak access to customers’ or business information (18%)

3.2.1 Topics of Study

The following figure summarizes our studies on requirements so far, showing how requirements act as a bridge that links a real-world problem with the software that solves it. We will use this figure to motivate and introduce the topics we will study for the rest of this chapter.

Requirements are the bridge between real-world problems and their software solutions

First, the figure is useful for illustrating a common situation in Requirements Engineering: systems whose requirements change frequently or whose users cannot accurately specify what they want from the system. In fact, we’ve already mentioned this situation in Chapter 2, when we discussed Agile Methods. As the reader may recall, when requirements change frequently and the system is not mission-critical, it is not recommended to invest years drafting a detailed requirements document. There’s a risk the requirements will become outdated before the system is finalized—or a competitor may move faster, build an equivalent system, and dominate the market. In such cases, as we recommended in Chapter 2, we should use lightweight requirement specification documents—such as user stories—and incorporate a representative of the customers into the development team, to clarify and explain the requirements to the developers. Given the importance of such scenarios—systems with evolving, but non-critical, requirements—we will start by studying user stories in Section 3.3.

On the other hand, some systems have relatively stable requirements. In these cases, it might be worth investing in detailed requirement specifications. Certain companies, for instance, prefer to document all the system’s requirements before starting the development. Detailed specifications can also be demanded by certification organizations, especially for systems that deal with human lives, such as systems in the medical, transportation, or military domains. In Section 3.4, we will study use cases, which are comprehensive documents for specifying requirements.

A third scenario arises when we do not know if the proposed problem truly warrants a solution. That is, we might collect all the requirements of this problem and implement a system that solves it. However, the uncertainty remains whether the system will succeed and attract users. In these scenarios, an interesting approach is to take a step back and first test the relevance of the problem we intend to solve by software. A possible test involves building a Minimum Viable Product (MVP). An MVP is a functional system that can be used by real customers. However, it only includes the features necessary to prove its market feasibility, i.e., its ability to solve a problem faced by some customers. Given the contemporary importance of such scenarios—software for solving problems in unknown or uncertain markets—we will study more about MVPs in Section 3.5.

3.3 User Stories

Requirements documents produced by waterfall development processes can amount to hundreds of pages that sometimes require more than a year to complete. These documents often run into the following problems: (1) they may become obsolete as requirements change during development; (2) descriptions in natural language tend to be ambiguous and incomplete; thus developers have to go back and talk to the customers again during the implementation phase to clarify doubts; (3) when these conversations do not happen, the risks are even higher: at the end of the implementation, customers may conclude they do not want the system anymore, as their priorities changed, their vision of the business changed, the internal processes of their company changed, and so on. Therefore, a long initial phase of requirements specification is increasingly rare, at least in the case of commercial systems, like those being discussed in this book.

The professionals who proposed agile methods recognized—or suffered from—such problems and proposed a pragmatic technique to solve them, known as User Stories. As described by Ron Jeffries in a book on Agile Development (link), a user story has three parts, termed the three Cs:

User Story = Card + Conversations + Confirmation

Next, we explore each of these parts of a story:

  • Card, which is used by customers to write, in their language and in a few sentences, a feature they hope to see implemented in the system.

  • Conversations between customers and developers, which allow the latter to gain a better understanding of what is summarized on each card. As stated before, agile methods take a pragmatic view of requirements: since textual requirements specifications are subject to the problems discussed above, they are replaced by verbal communication between developers and customers. Moreover, agile methods—as we studied in Chapter 2—define that a representative of the customers, also known as Product Owner or Product Manager, should be part of the team.

  • Confirmation, which is a high-level test—specified by the customer—to verify whether the story was implemented as expected. Therefore, it is not an automated test, like a unit test; rather, it is a textual description of the scenarios, examples, and test cases that the customer will use to confirm the implementation of the story. These tests are also called acceptance tests. They should be written as soon as possible, preferably at the beginning of a sprint. Some authors recommend writing them on the back of the user story cards.

For this reason, requirement specifications using stories do not consist of just two or three sentences, as some critics of agile methods may claim. The correct way to interpret a user story is as follows: The story written on the card is a reminder from the customer’s representative to the developers. By creating this reminder, the representative declares they would like to see a certain feature implemented in the next sprints. In addition, they promise to be available during the sprints to explain the feature to the developers. Lastly, they will consider the story implemented as long as it meets the confirmation tests they specified.

From a developer’s standpoint, the process works like this: the customer’s representative is asking us to implement the story summarized on this card. Therefore, we will have to implement it in an upcoming sprint. However, we can count on the support of the customer representative to discuss and clarify any doubts about the story. Additionally, the representative has defined the tests they will use at the sprint review meeting to consider the story implemented. Finally, we further agree that the representative cannot change their mind at the end of the sprint and use an entirely different test to assess our implementation.

In essence, when employing user stories, requirements engineering becomes a continuous activity occurring every day throughout the sprints. The traditional requirements document with hundreds of pages is replaced by regular conversations between developers and the customer representative. User stories emphasize verbal engagement over written communication, thus aligning with the following principles of the Agile Manifesto: (1) individuals and interactions over processes and tools; (2) working software over comprehensive documentation; (3) customer collaboration over contract negotiation; (4) responding to change over following a plan.

In more specific terms, user stories should have the following properties (whose initials form the acronym INVEST):

  • Stories must be independent: given two stories, it should be possible to implement them in any order. Ideally, there should be no dependencies between stories.

  • Stories must be negotiable. As we mentioned before, stories (the card) are invitations for conversations between customers and developers during a sprint. Therefore, both parties should be open to changing and adapting their opinions as a result of these discussions. Developers should be open to implementing details that are not expressed on the story cards or that do not fit on them. But customers should also embrace technical arguments from developers, for example, about the complexity of implementing some aspect of the story as initially planned.

  • Stories must add value to the customers’ business. Indeed, stories are proposed, written, and ranked by the customers according to the value they add to their business. For this reason, the idea of a technical story—such as the system must be implemented in JavaScript, using React and Node.js—does not make sense.

  • It should be possible to estimate the size of a story, i.e., to define the effort needed to implement it. Normally, this requires the story to be small, as we will discuss next. Estimation also becomes easier when the developers have experience in the system’s domain.

  • Stories must be small. In fact, complex and large stories—also known as epics—can exist but they should be placed at the bottom of the backlog, meaning they will not be implemented soon. On the other hand, stories at the top of the backlog should be short and small to facilitate understanding and estimating them. Assuming that a one-month sprint is used, it should be possible to implement any story in less than one week, for example.

  • Stories must be testable, that is, they should have clear acceptance tests. For example, the customer can pay with credit cards is testable, assuming we know the credit cards that will be used. On the other hand, the following story is a counter-example: a customer should not have to wait too long to have their purchase confirmed. Because this is a vague story, it does not have a clear acceptance test.

Before starting to write stories, it is also recommended to define the key users who will interact with the system, in order to avoid stories that only serve certain users. Once you have defined these user roles (or personas), stories are commonly written in the following format:

As a [user role], I want to [do something with the system]

We will show examples of stories in this format in the next section. But first, we would like to mention that a story writing workshop is usually carried out at the inception of a software project. This workshop gathers the system’s main users in a room, who then discuss the system’s objectives, its main features, and so on. At the end of the workshop, which can last a week depending on the size and relevance of the project, we should have a list of user stories for implementation over multiple sprints.

3.3.1 Example: Library Management System

In this section, we show examples of user stories for a library management system. They are associated with three types of users: students, instructors, and library staff.

First, we show stories suggested by students (see below). Any library user fits this role and can perform the operations described in these stories. Note that the stories are just a sentence and do not elaborate on how each operation should be implemented. For example, one of the stories defines that students should be able to search for books. However, many details are omitted, including the search criteria, available filters, limits on the number of search results, the layout of search and results pages, etc. But we should remember that a story is essentially a commitment: the customer representative assures they will be available to clarify these details with the developers during the sprint in which the story is implemented. When working with user stories, this verbal interaction between developers and the customer representative is key for successful requirements specification and implementation.

As a student, I want to borrow books

As a student, I want to return a book I borrowed

As a student, I want to renew my book loans

As a student, I want to search for books

As a student, I want to reserve books that are currently borrowed

As a student, I want to receive emails about new acquisitions

Next, we show the stories suggested by the instructors:

As an instructor, I want to borrow books for an extended period of time

As an instructor, I want to recommend books for acquisition

As an instructor, I want to donate books to the library

As an instructor, I want to return books in other libraries

Even though these stories originate from instructors, this doesn’t mean they are exclusive to this user group. For example, during the sprint, the customer representative (or Product Owner) may consider making the donation feature available to all users. The last story proposed by instructors—allowing books to be returned at any university library—can be classified as an epic, i.e., a complex story. This story refers to a scenario where an instructor borrows a book from the central library but wants to return it to the library of a given department, or vice versa. Implementing this story is more complex because it requires integrating different library systems and having staff members transport the books back to their original location.

Lastly, we share the stories proposed by the library staff members, typically concerning library organization and ensuring the library’s seamless operation:

As a staff member, I want to register new users

As a staff member, I want to add new books to the system

As a staff member, I want to discard damaged books

As a staff member, I want to access statistics about the collection

As a staff member, I want the system to send reminder emails to users with overdue books

As a staff member, I want the system to apply fines in the case of late book returns

To confirm the implementation of the search story, the customer representative specified that they intend to run the following searches:

Search for books using the ISBN

Search for books using the author’s name

Search for books using the title

Search for books added to the library from a specific date onwards

The correct implementation of these types of searches will be demonstrated during the Sprint Review meeting, assuming the team is using Scrum.
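Although these acceptance tests are demonstrated manually at the review meeting, nothing prevents the team from also automating them. Below is a minimal sketch using Node's built-in assert module; searchBooks is a hypothetical function of the library system, and the search values are dummy data:

const assert = require('node:assert');
const { searchBooks } = require('./library'); // hypothetical module

// Acceptance tests for: "As a student, I want to search for books"
assert.ok(searchBooks({ isbn: '9780000000000' }).length > 0);
assert.ok(searchBooks({ author: 'Frederick Brooks' }).length > 0);
assert.ok(searchBooks({ title: 'The Mythical Man-Month' }).length > 0);
assert.ok(searchBooks({ addedSince: '2024-01-01' }).length > 0);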

As we mentioned, acceptance tests are specified by the customer representative (or Product Owner). This practice prevents a scenario known as gold plating. In Requirements Engineering, the expression describes the situation where developers decide on their own to elaborate some stories—or requirements, more generally—without the customer’s input. Literally, developers are covering stories with layers of gold, even though this will not generate value for users.

3.3.2 Frequently Asked Questions

Before we wrap up, and as usual in this book, let’s answer some questions about user stories:

How do we specify non-functional requirements using stories? This is a challenging issue when using agile methods. Indeed, the customer representative (or Product Owner) may write a story stating that the system’s maximum response time is one second. However, it doesn’t make sense to allocate this story to a given sprint, as it should be a concern in every sprint of the project. Therefore, the best solution is to allow (and ask) the PO to write stories about non-functional requirements, but use them primarily to reinforce the done criteria for stories. For example, for the implementation of a story to be considered complete, it should pass a code review aimed at detecting performance problems. Before the code moves to production, a performance test can also be executed to ensure that the non-functional requirements are being met. In short, one can—and should—write stories about non-functional requirements, but they do not go into the product backlog. Instead, they are used to refine the done criteria for stories.
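As an illustration, such a performance criterion can be turned into an automated check that runs before deployment. The sketch below assumes a hypothetical handleRequest entry point and a 1-second limit:

const assert = require('node:assert');
const { handleRequest } = require('./app'); // hypothetical entry point

// Done criterion derived from a non-functional story:
// a transfer request must be answered in at most 1000 ms
const start = Date.now();
handleRequest({ route: '/transfer', amount: 100 }); // sample request
const elapsed = Date.now() - start;

assert.ok(elapsed <= 1000, `response took ${elapsed} ms (limit: 1000 ms)`);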

Is it possible to create stories for studying a new technology? Conceptually, the answer is that one should not create stories exclusively for knowledge acquisition, as stories should always be written and prioritized by customers. And they should provide business value. Therefore, we should not break this principle and allow developers to create a story just to study the use of framework X in the web interface implementation. On the other hand, this study could be a task associated with the implementation of a certain story. In agile methods, tasks for knowledge acquisition or for creating a proof-of-concept implementation are called spikes.

3.4 Use Cases

Use Cases are textual documents used to specify requirements. As will be explored in this section, they offer more detailed descriptions than user stories and are typically used in Waterfall-based methods. Developers, also referred to as Requirements Engineers during this phase of development, write the use cases. They can rely on methods such as interviews with users for this purpose. Despite being written by developers, users should be able to read, understand, and validate use cases.

Use Cases are written from the perspective of an actor interacting with the system to achieve specific objectives. Usually, the actor is a human user, although it can also be another software or hardware component. In any case, the actor is an entity external to the system.

A use case enumerates the actions that an actor should perform to realize a specific operation. In fact, a use case defines two lists of steps. The first list represents the main flow, i.e., the steps required to successfully complete an operation. That is, the main flow describes a scenario in which everything goes well, sometimes called the happy path. The second list defines extensions to the main flow, which represent alternatives for executing particular steps of the main flow or for handling errors. Both flows should be implemented by the system later. Next, we show a use case that specifies a transfer between accounts in a banking system.

Transfer Values between Accounts

Actor: Bank Customer

Main Flow:

1 - Authenticate Customer

2 - Customer sets destination account and branch

3 - Customer sets the amount for transfer

4 - Customer sets the transfer date

5 - System executes the transfer

6 - System asks if the customer wants to make another transfer

Extensions:

2a - If incorrect account and branch, request new account and branch

3a - If transfer amount exceeds current balance, request new amount

4a - Date must be the current one or no more than one year in the future

5a - If the date is the current one, process transfer immediately

5b - If the request lies in the future, schedule the transfer

We will now use this example to detail other relevant points about use cases. Firstly, every use case must have a name that starts with a verb in the infinitive. Then, it should identify the main actor of the use case. A use case can also include another use case. In our example, step 1 of the main flow includes the Authenticate Customer use case. The syntax to handle inclusions is simple: just underline the included use case’s name. The semantics are also clear: all steps of the included use case must be executed before progressing. In other words, the semantics are similar to macros in programming languages.

Lastly, we will comment on the extensions, which serve two objectives:

  • To break down a step in the main flow. In our example, we used extensions to specify that the transfer must immediately be carried out if the informed date is the current one (extension 5a). Otherwise, we should schedule the transfer for the informed date (extension 5b).

  • To handle errors, exceptions, cancellations, etc. In our example, we used an extension to specify that a new amount should be requested if there isn’t enough balance for the transfer (extension 3a).

Due to the existence of extensions, we recommend avoiding decision commands (if) in the main section of use cases. When a decision between two normal behaviors is necessary, consider defining it as an extension. This is one of the reasons why extensions in real-world use cases frequently have more steps than the main sections. Our simple example almost illustrates this, with five extensions compared to six main steps.

Occasionally, descriptions of use cases have additional sections, such as: (1) the purpose of the use case; (2) pre-conditions, i.e., what must be true before the use case is executed; (3) post-conditions, i.e., what must be true after the use case is executed; and (4) a list of related use cases.

To conclude, some good practices for writing use cases include:

  • Use cases should be composed in clear and accessible language. A frequent suggestion is to write use cases in language simple enough for early elementary school readers. Ideally, each step should describe the main actor performing a task: a subject followed by a verb. For example, the customer inserts the card into the ATM. Conversely, when the system performs the action, write something like: the system validates the inserted card.

  • Use cases should be small, with few steps, especially in the main flow, to facilitate understanding. Alistair Cockburn, the author of a well-known book on use cases (link, page 91), recommends having a maximum of nine steps in the main flow. He states the following: I rarely encounter a well-written use case with more than nine steps in the main success scenario. Therefore, if you are writing a use case and it starts to become complex, try breaking it down into smaller ones. Another alternative is to group some steps. For example, the steps user informs login and user informs password can be grouped into user informs login and password.

  • Use cases are not algorithms written in a pseudo-code language. Usually, they have a higher abstraction level than algorithms. They should be comprehensible to end-users, who should be able to read, understand, and discover problems in use cases. Thus, avoid commands like if, repeat until, etc. For example, instead of a repetition command, you might use a sentence like: the customer browses the catalog until finding the desired product.

  • Use cases should not deal with technological or design aspects. Moreover, they should not depend on the user interface that the main actor will use to interact with the system. For example, we should not write something like: the customer presses the green button to confirm the transfer. Remember that we are specifying requirements, and decisions about technology, design, architecture, and user interface are still not on our radar. The objective is to document what the system should do, not how it’s going to implement the specified steps.

  • Avoid trivial use cases, such as those with only CRUD (Create, Retrieve, Update, and Delete) operations. For instance, in an academic system, it doesn’t make sense to have use cases like Create Student, Retrieve Student, Update Student, and Delete Student. Instead, consider creating a use case like Manage Student and briefly explain that it includes the four operations. Since the semantics is clear, this can be accomplished in one or two sentences. Furthermore, the main flow does not need to be a list of actions. In certain situations, like the ones we are mentioning, it is more practical to use free text.

  • Use a consistent vocabulary across use cases. For example, avoid using the name Customer in one use case and User in another. In the book The Pragmatic Programmer (link, page 251), David Thomas and Andrew Hunt recommend creating a glossary, i.e., a document that lists the terms and vocabulary used in a project. According to the authors, it’s hard to succeed on a project if users and developers call the same thing by different names or, even worse, refer to different things by the same name.

3.4.1 Use Case Diagrams

In Chapter 4, we will study the UML graphical modeling language. However, we would like to anticipate and comment on one of the UML diagrams, known as the Use Case Diagram. This diagram serves as a visual catalog of use cases, depicting the actors of a system (illustrated as stick figures) and the use cases (depicted as ellipses). Additionally, it shows two types of relationships: (1) linking an actor with a use case indicates the actor’s participation in a given scenario; (2) linking two use cases indicates that one use case includes or extends the other.

A simple use case diagram for our banking system is shown in the next figure. It features two actors: Customer and Manager. The Customer is involved in two use cases (Withdraw Money and Transfer Funds), while the Manager is the principal actor in the Open Account use case. The diagram also indicates that the Transfer Funds use case includes Authenticate Customer. Lastly, we can observe that the use cases are depicted within a rectangle, emphasizing the system boundaries. The two actors are situated outside this boundary.

Example of a UML Use Case Diagram

In-depth: In this book, we distinguish between use cases (textual documents for specifying requirements) and use case diagrams (visual catalogs of use cases, as proposed in UML). This same distinction is made by Craig Larman in his book about UML and design patterns (link, page 48). Larman asserts that use cases are text documents, not diagrams, and use case modeling is primarily an act of writing, not drawing. Martin Fowler expresses a similar view, recommending that we concentrate our energy on the text rather than on the diagram. Despite the fact that the UML has nothing to say about the use case text, it is the text that contains all the value in the technique (link, page 104). In fact, some authors, in an effort to eliminate confusion, opt for the term use scenarios instead of use cases.

3.4.2 Frequently Asked Questions

Let’s now answer two questions about use cases.

What is the difference between use cases and user stories? The simple answer is that use cases are more detailed and comprehensive requirement specifications than stories. A more elaborate explanation is provided by Mike Cohn in his book about stories (link, page 140). According to him, use cases are written in a format acceptable to both customers and developers so that each may read and agree to them. Their purpose is to document an agreement between the customer and the development team. Stories, on the other hand, are written to facilitate release and iteration planning, and to serve as placeholders for conversations about the users’ detailed needs.

What is the origin of the use case technique? Use cases were proposed in the late ’80s by Ivar Jacobson, one of the pioneers of UML and of the Unified Process (UP) (link). Specifically, use cases are one of the primary outputs of UP’s Elaboration phase. As mentioned in Chapter 2, UP emphasizes written communication between users and developers, using documents like use cases.

3.5 Minimum Viable Product (MVP)

The concept of MVP was popularized by Eric Ries in his book Lean Startup (link). The idea of Lean Startup was in turn inspired by the principles of the Lean Manufacturing movement, developed by Japanese automobile manufacturers, such as Toyota, since the 1950s. Kanban, as we studied in Chapter 2, is another software engineering technique based on this movement. One of the principles of Lean Manufacturing recommends eliminating waste in an assembly line or supply chain. For software companies, potential waste includes devoting years to gathering requirements and implementing a system that will not be used, because it solves a problem that is no longer relevant to users. Therefore, if a system is going to fail—by not being able to attract users or to find a market—it’s better to fail quickly, as the waste of resources will be smaller.

Software systems that do not attract interest can be produced by any company. However, they are more common in startups because, by definition, startups operate in environments of high uncertainty. That said, the definition of a startup is not restricted to a company formed by two university students developing a new product in a garage. According to Ries (page 27 of his book), anyone who is creating a new product or business under conditions of extreme uncertainty is an entrepreneur whether he or she knows it or not and whether working in a government agency, a venture-backed company, a nonprofit, or a decidedly for-profit company with financial investors.

To clarify our scenario, suppose we intend to create a new system, but we are not sure it will attract users and be successful. As noted above, in such cases, it is not recommended to spend years defining the requirements and implementing this system, only to then conclude it will be a failure. On the other hand, it also doesn’t make sense to conduct market research to infer the system’s reception. As our requirements are different from any existing system, the results of this research may not be reliable.

Thus, one solution is to implement a system with the minimum set of requirements that is sufficient to test the viability of its development. In the Lean Startup terminology, this initial system is referred to as a Minimum Viable Product (MVP). It is also often said that an MVP’s goal is to test a business hypothesis.

Moreover, the Lean Startup movement proposes a systematic and scientific method for building and verifying MVPs. This method consists of a cycle with three steps: build, measure, and learn (see the next figure). In the first step (build), one has a product idea and implements an MVP to test it. In the second step (measure), the MVP is made available to real customers to collect data on its usage. In the third step (learn), the collected data is analyzed, resulting in what is called validated learning.

MVP Validation

The knowledge derived from an MVP test can lead to the following decisions:

  • We may conclude that further tests with the MVP are needed, possibly changing its requirements, user interface, or target market. Therefore, the cycle is repeated, returning to the build step.

  • We may conclude that the test was successful, and therefore, a market for the system (a market fit) was found. Thus, it’s time to invest more resources to implement a feature-complete and robust system.

  • Lastly, the MVP might have failed after several attempts. This leaves two alternatives: (1) abandon the venture, particularly if there are no more financial resources to keep it alive; or (2) perform a pivot, that is, abandon the original vision and attempt a new MVP with major changes, such as a completely new set of features or a new target market.

One key risk when making these decisions is relying on vanity metrics. These are superficial metrics that serve to inflate the egos of developers and product managers while offering limited insight to enhance market strategy. A typical example is the number of page views on an e-commerce site. While attracting millions of monthly visitors may be satisfying, it won’t necessarily translate to sales or profit. On the other hand, actionable metrics are the ones that can inform decisions about the MVP’s future. In our example, they include the conversion rate of visitors to buyers, the average order value, the number of items sold per transaction, and customer acquisition costs, among others. By monitoring these metrics, we might discover that customers typically purchase only one item per transaction. As an actionable step, this could prompt the adoption of a recommendation system, for example. These systems suggest additional items during a transaction, potentially increasing the sales per order.

When assessing MVPs that involve product or service sales, funnel metrics are often used. These metrics measure the successive levels at which users interact with a system, as illustrated in the sketch after the list. A typical funnel might be broken down as follows:

  • Acquisition: number of customers who visited our system.
  • Activation: number of customers who created an account.
  • Retention: number of customers who returned after creating an account.
  • Revenue: number of customers who made a purchase.
  • Referral: number of customers who recommended the system to others.
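Given the counts at each level, the conversion rate between consecutive levels shows where users drop out of the funnel. A small sketch with made-up numbers:

// Hypothetical weekly funnel counts
const funnel = [
  ['Acquisition', 50000],
  ['Activation', 8000],
  ['Retention', 3000],
  ['Revenue', 600],
  ['Referral', 90],
];

// Conversion rate from each level to the next
for (let i = 1; i < funnel.length; i++) {
  const [name, count] = funnel[i];
  const [prevName, prevCount] = funnel[i - 1];
  console.log(`${prevName} -> ${name}: ${(100 * count / prevCount).toFixed(1)}%`);
}
// e.g., Acquisition -> Activation: 16.0%, Activation -> Retention: 37.5%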

3.5.1 MVP Examples

An MVP doesn’t need to be software implemented in a programming language, with databases, integrations with external systems, etc. Two examples of MVPs that are not systems are frequently mentioned in the Lean Startup literature.

The first case is that of Zappos, one of the first companies that attempted to sell shoes on the Internet in the United States. In 1999, to check the viability of an online shoe store, the company’s founder conceived a simple and original MVP. He visited a few stores in his city, photographed several pairs of shoes, and created a simple web page where customers could select the shoes they wanted to buy. However, all backend processing was done manually, including payment processing, purchasing the shoes in city stores, and delivering them to customers. There was no system to automate these tasks. Despite that, with this manually-based MVP, the company’s founder quickly and cheaply validated his initial hypothesis, i.e., that there was indeed a demand for online shoe retail. Years later, Amazon acquired Zappos for over a billion dollars.

Dropbox, the cloud storage and file sharing service, provides another example of an MVP that did not involve making actual software available to users. To gather feedback on the product, one of the company’s founders recorded a simple 3-minute video demonstrating the features and advantages of the system they were building. The video went viral and helped increase the list of users interested in beta-testing the system. Another interesting fact is that the files used in this video had funny names of comic book characters. The goal was to attract early adopters, those people enthusiastic about new technologies and who are the first to test and buy new products. The MVP’s success confirmed their hypothesis that users were interested in installing a file synchronization and backup system.

However, MVPs can also be implemented as actual, albeit minimal, software apps. For example, in early 2018, our research group at UFMG started the implementation of an index of Brazilian papers in Computer Science. The first decision was to build an MVP, covering only papers published in about 15 Software Engineering conferences. In this initial version, the Python-implemented code had fewer than 200 lines. The charts displayed by the MVP, for instance, were Google Spreadsheets embedded in HTML pages. The index—originally called CoreBR—was announced and promoted on a mailing list that includes Brazilian Software Engineering professors. As it attracted significant interest, measured using metrics such as session duration, we decided to invest more time in its implementation. First, we changed the name to CSIndexbr (link). Then, we gradually expanded coverage to include another 20 research areas (in addition to Software Engineering) and nearly two hundred conferences. We also broadened our scope to include papers published in more than 170 journals. The number of Brazilian professors with articles increased from less than 100 to over 900. Lastly, the user interface moved from a set of Google spreadsheets to JavaScript-implemented charts.

3.5.2 Frequently Asked Questions

To conclude, let’s answer some questions about MVPs.

Should only startups use MVPs? Definitely not. As we’ve discussed in this section, MVPs are a mechanism for dealing with uncertainty. That is, when we don’t know if users will like and use a particular product. In the context of Software Engineering, this product is software. Of course, startups, by definition, are companies that operate in markets of extreme uncertainty. However, uncertainty and risk can also be important factors in software developed by various types of organizations, private or public; small, medium, or large; and from the most diverse sectors.

When is it not worthwhile to use MVPs? In a way, this question was answered in the previous one. When the market for a software product is stable and known, there is no need to validate business hypotheses and, therefore, to build MVPs. In mission-critical domains, it’s also less common to construct MVPs. For example, the idea of building an MVP to monitor ICU patients is out of the question.

What’s the difference between MVPs and prototypes? Prototyping is a well-known technique in Software Engineering for validating requirements. The difference between prototypes and MVPs lies in all three letters of the acronym: the M, the V, and the P. Firstly, prototypes are not necessarily minimal systems. For example, they may include the entire interface of a system, with hundreds of screens. Secondly, prototypes are not necessarily used to check a system’s viability in terms of market fit. For instance, they may be built to demonstrate the system only to the executives of a contracting company. For such reasons, they are also not products made available for use by any customer.

Is an MVP a low-quality product? This question is trickier to answer. On the one hand, an MVP should have only the minimal quality needed to evaluate a business hypothesis. For instance, the code doesn’t need to be easily maintainable or to use the most modern design and architectural patterns. In fact, any level of quality above the one necessary to start the build-measure-learn feedback loop is a waste. On the other hand, the quality shouldn’t be so low that it negatively impacts the user experience. For example, if an MVP is hosted on a server with significant availability issues, it might lead to false negatives. In other words, the business hypothesis may be falsely invalidated—not because of the hypothesis itself, but because users were unable to access the system.

3.5.3 Building the First MVP

Lean Startup doesn’t specify how to construct the first MVP of a system. In most cases, this isn’t a problem, as the developers and business people have a clear idea of the features and requirements that should be present in the MVP. Thus, they can quickly implement the first MVP and start the build-measure-learn cycle. On the other hand, in some cases, the definition of this first MVP might not be clear. In these cases, it’s recommended to build a prototype before implementing the first MVP.

Design Sprint is a method proposed by Jake Knapp, John Zeratsky, and Braden Kowitz for testing and validating new products using prototypes (link). The main characteristics of a design sprint—not to be confused with a Scrum sprint—are as follows:

  • Time-box: A design sprint lasts five days, beginning on Monday and ending on Friday. The aim is to quickly discover an initial solution to a problem.

  • Small and multidisciplinary teams: A design sprint brings together a multidisciplinary team of seven people. This number was chosen to encourage discussions—therefore, the team can’t be too small. However, it also aims to prevent endless debates—thus, the team can’t be too large. The team should include representatives from all areas involved with the problem under investigation, including marketing, sales, logistics, technology, etc. Finally, and equally important, a decision-maker should be part of the team—for example, the company owner.

  • Clear objectives and rules: The first three days of the design sprint aim to converge, then diverge, and finally, converge again. That is, on the first day, the team discusses and defines the problem to be solved. The goal is to ensure that, in the following days, the team will focus on solving the same problem (convergence). On the second day, potential solutions are proposed freely (divergence). On the third day, a winning solution is selected among the possible alternatives (convergence). The decision-maker has the final word in this choice, meaning that a design sprint is not a purely democratic process. On the fourth day, a prototype is implemented, which can be just a set of static HTML pages, without code or functionality. On the last day, the prototype is tested with five real customers, who will interact with it in individual sessions.

Before concluding, it’s important to mention that design sprints are not only used to create MVPs. The technique can be used to propose a solution for any problem. For example, a design sprint can be organized to redesign the interface of an existing system or to improve the services in a hotel.

3.6 A/B Testing

Given two versions of a system, A/B Testing (or split testing) is used to choose the one that receives the most interest from users. The two versions are identical except that one implements requirements A and the other implements requirements B, where A and B are mutually exclusive. In this context, we want to decide which requirements we will actually support in the system. To make this decision, versions A and B are released to distinct user groups. After that, we decide which version has attracted greater interest from users. Therefore, A/B Testing can be viewed as a data-driven approach for selecting the requirements (or features) that will be supported in a system. The winning requirements remain in the system, and the other version is discarded.

For example, A/B Testing can be used when we have an MVP with requirements A and after a build-measure-learn cycle, we decide to test a new MVP with requirements B. Another common scenario is to test user interface components. For example, given two layouts of a website, an A/B test can be used to decide which one produces the best user engagement. We can also test the color or position of a button in the page, the messages used, the order of the items in a list, etc.

To perform an A/B test, we need two versions of a system, which we will call the control version (original system, requirements A) and the treatment version (requirements B). To give an example, consider an e-commerce system whose control version uses a traditional recommendation algorithm, while the treatment version uses a novel, optimized algorithm. In this case, we can use an A/B test to decide whether the new recommendation algorithm is in fact better and, therefore, should be adopted by the system.

To run a test, we also need a metric to express the gains achieved with the treatment version. In our example, this metric can be the percentage of purchases originated from recommended links. The expectation is that the new recommendation algorithm will increase this percentage.

Finally, we need to instrument the system so that half of the customers use the control version and the other half use the treatment version. Equally important is the random assignment of these versions to users. Each time a user logs into the system, we randomly determine which version they will use. This can be achieved by the following code:

const version = Math.random(); // random number in [0, 1)
if (version < 0.5) {
  // execute the control version (requirements A)
} else {
  // execute the treatment version (requirements B)
}

After a sufficient number of accesses, we should finish the test and assess whether the treatment version has indeed increased the conversion rate. If so, we should release it to all users. Otherwise, we should continue using the control version.
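This assessment is a statistical comparison of the two conversion rates (see also the In-Depth box at the end of this section). One standard way to perform it—sketched below with made-up numbers, and not necessarily the procedure used by commercial tools—is a one-sided two-proportion z-test:

// Conversions observed in each group (made-up numbers)
const nA = 200000, convA = 2000; // control: 1.00%
const nB = 200000, convB = 2260; // treatment: 1.13%

const pA = convA / nA, pB = convB / nB;
const pPool = (convA + convB) / (nA + nB); // pooled conversion rate

// z statistic for the difference between the two proportions
const z = (pB - pA) / Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));

// 1.645 is the critical value for a one-sided test at 95% confidence
console.log(z > 1.645 ? 'B wins' : 'A remains'); // here, z ≈ 4.0: B wins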

The number of customers tested with each version, or sample size, is a vital aspect of A/B Testing. While the detailed statistical procedures for computing this sample size are out of the scope of this book, there are various A/B test sample size calculators accessible online. It must be noted, however, that such tests require a significantly large sample, usually attainable only by popular platforms such as e-commerce sites, search engines, social networks, or news portals.

As an example, suppose the customer conversion rate is 1% (for the control version), and that we want to test if the treatment adds a minimum gain of 10% in this rate. In this scenario, to have statistically relevant results with a 95% confidence level, the control and treatment groups must have at least 200,000 customers each. To explain further:

  • If after 200K accesses, version B increases the conversion rate by at least 10%, we have statistical confidence that this gain was caused by treatment B (in fact, we can be 95% confident). In this case, we say the test was successful and version B is the winner.

  • Otherwise, because version B did not achieve the intended conversion rate, the test has failed and version A is the winner.

The sample size required by an A/B test considerably decreases when we test higher conversion rates. In the previous example, if the conversion rate is 10% and the intended improvement is 25%, the sample size drops to 1,800 customers for each group. These values were estimated using the A/B test calculator from Optimizely (link).
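For the curious reader, these calculators typically implement a standard formula for comparing two proportions. The sketch below reproduces it for the first example; note that the statistical power (80% here) is an assumption, since the text fixes only the 95% confidence level, which is why the result lands near—but not exactly at—the 200,000 quoted above:

const p1 = 0.01;      // baseline conversion rate (control)
const p2 = p1 * 1.10; // rate with the minimum detectable lift of 10%
const zAlpha = 1.96;  // 95% confidence level (two-sided)
const zBeta = 0.84;   // 80% statistical power (assumed)

const pBar = (p1 + p2) / 2;
const n = Math.pow(
  (zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
   zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1), 2);

console.log(Math.ceil(n)); // ≈ 163,000 customers per group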

In-Depth: In statistical terms, an A/B Test is modeled as a Hypothesis Test. In such tests, we start with a Null Hypothesis that represents the status quo. That is, the Null Hypothesis assumes that everything will remain the same and, therefore, in the context of A/B tests, that version B is not better than the current version of the system. On the other hand, the hypothesis that challenges the status quo is called the Alternative Hypothesis. Conventionally, we represent the Null Hypothesis as H0 and the Alternative Hypothesis as H1.

A Hypothesis Test is a decision-making procedure that starts with the assumption that H0 is true and then attempts to invalidate it. However, the statistical test used for this purpose is subject to a margin of error. For instance, it might invalidate H0 even when H0 is correct. In such cases, we say a Type I error or false positive has occurred, because we incorrectly concluded that version B is better than version A.

Though we cannot avoid Type I errors, we can estimate the probability of their occurrence. More specifically, in A/B tests, a parameter called the Significance Level, represented by the Greek letter α (alpha), defines the probability of committing a Type I error.

For example, suppose we set α at 5%. That implies a 5% chance of incorrectly rejecting H0. In this book, rather than α, we use the parameter (1 - α), which is the probability of not committing a Type I error, that is, of correctly maintaining H0 when it is true. Typically, this parameter is called the Confidence Level. We made this decision because (1 - α) is the most common input parameter of the A/B test sample size calculators available online.
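
To make this procedure concrete, here is a minimal sketch of a one-sided two-proportion z-test, a classical way of deciding whether to reject H0 at the end of an A/B test (production experimentation platforms typically use more sophisticated methods):

// Minimal sketch: one-sided two-proportion z-test (normal approximation).
// We start by assuming H0 (the treatment is not better than the control)
// and reject it only when the z statistic exceeds the critical value
// associated with the chosen significance level (1.645 for alpha = 5%).
static boolean treatmentWins(long conversionsA, long usersA,
                             long conversionsB, long usersB) {
   double pA = (double) conversionsA / usersA;
   double pB = (double) conversionsB / usersB;
   double pooled = (double) (conversionsA + conversionsB) / (usersA + usersB);
   double stdError = Math.sqrt(pooled * (1 - pooled) * (1.0 / usersA + 1.0 / usersB));
   return (pB - pA) / stdError > 1.645; // one-sided test at alpha = 5%
}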

3.6.1 Frequently Asked Questions

Here are some questions and clarifications on A/B testing.

Can I test more than two variations? Yes, the methodology we explained adapts to more than two versions. For example, to test three versions of a system, just divide the accesses into three random groups, as sketched below. Tests with more than one treatment are called A/B/n tests.
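
For instance, here is a sketch of the random assignment for three versions, where the executeVersion methods are hypothetical placeholders for running each version:

// Sketch: splitting the accesses into three random groups (A/B/n test).
double r = Math.random(); // random number between 0 and 1
if (r < 1.0 / 3)
   executeVersionA();     // control
else if (r < 2.0 / 3)
   executeVersionB();     // first treatment
else
   executeVersionC();     // second treatment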

Can I conclude the A/B test early if it shows the expected gain? No, this is a common and serious mistake. If the computed sample size is 200,000 users per group, the test can only be concluded when each group reaches this number of users; stopping earlier invalidates the statistical guarantees. A common mistake among developers who are new to A/B testing is to conclude the test on the first day the expected gain is reached, without testing the rest of the sample.

What is an A/A test? It is a test in which both the control and treatment groups execute the same version of the system. Therefore, assuming a 95% confidence level, such tests should almost always fail, as version A cannot be better than itself. A/A tests are recommended for testing and validating the procedures and methodological decisions followed in an A/B test. Some authors even recommend not starting A/B tests before performing A/A tests (link). If the A/A tests do not fail, we should debug the test setup until we discover the root cause that is leading us to conclude that version A is better than itself.

What is the origin of the terms control and treatment groups? The terms originate in the medical field, more specifically in randomized controlled experiments. For example, to introduce a new drug to the market, pharmaceutical companies must conduct this type of experiment. They choose two samples, called control and treatment. Participants in the control sample receive a placebo, while participants in the treatment sample receive the drug. After the test, the results are compared to assess the drug's effectiveness. Randomized controlled experiments are a scientifically accepted method to prove causality.

Real World: A/B tests are widely used by all major Internet companies. Below, we reproduce testimonials from developers and scientists at three companies about these tests:

  • At Facebook (now Meta), A/B testing is an experimental approach to finding what users want, rather than trying to elicit requirements in advance and writing specifications. Moreover, it allows for situations where users use new features in unexpected ways. Among other things, this enables engineers to learn about the diversity of users, and appreciate their different approaches and views of Facebook. (link)

  • At Netflix, if not enough people hover over a new element, a new experiment might move the element to a new location on the screen. If all experiments show a lack of interest, the new feature is deleted. (link)

  • At Microsoft, specifically on the Bing search service, the use of controlled experiments has grown exponentially over time, with over 200 concurrent experiments now running on any given day. The Bing Experimentation System is credited with having accelerated innovation and increased annual revenues by hundreds of millions of dollars, by allowing us to find and focus on key ideas evaluated through thousands of controlled experiments. (link)

Bibliography

Mike Cohn. User Stories Applied: For Agile Software Development. Addison-Wesley, 2004.

Alistair Cockburn. Writing Effective Use Cases. Addison-Wesley, 2000.

Eric Ries. The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business, 2011.

Jake Knapp, John Zeratsky, Braden Kowitz. Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days. Simon & Schuster, 2016.

Ian Sommerville. Software Engineering. Pearson, 10th edition, 2019.

Hans van Vliet. Software Engineering: Principles and Practice. Wiley, 2008.

Exercises

1. Mark True (T) or False (F).

( ) Requirements Engineering, like other Software Engineering activities, needs to be adapted to the needs of the project, product, and teams.

( ) When gathering and analyzing requirements, developers collaborate with customers to gain knowledge about the application domain, system requirements, performance standards, hardware constraints, and more.

( ) As the collected information comes from various perspectives, the emerging requirements are always consistent.

( ) Requirements validation involves confirming whether the requirements in fact define the intended system. This process is critical because errors in a requirements document can lead to significant rework costs.

2. List at least five methods for eliciting requirements.

3. What are the three parts of a user story? Describe your answer using the 3C’s acronym.

4. Assuming a social network like Instagram: (1) Write five user stories for this network from the standpoint of a typical user; (2) Now, think of another user role and write at least two stories related to it.

5. In Software Engineering, anti-patterns are non-recommended solutions to recurring problems. Describe at least five anti-patterns for user stories. In other words, describe story patterns that are not recommended or that lack desirable properties.

6. Specify an epic user story for a system of your choice.

7. In the context of requirements, define the term gold plating.

8. Write a use case for a Library Management System (similar to the one we used in Section 3.3.1).

9. The following use case describes only the main flow. Write some extensions for it.

Buy Book

Actor: Online store user

Main Flow

  1. User browses the book catalogue

  2. User selects books and adds them to the shopping cart

  3. User decides to checkout

  4. User informs delivery address

  5. User informs type of delivery

  6. User selects payment mode

  7. User confirms order

10. For each of the following requirements specification and/or validation techniques, describe a system where its use is recommended: (1) user stories; (2) use cases; (3) MVPs.

11. How does a Minimum Viable Product (MVP) differ from the first version of a product developed using an agile method, such as XP or Scrum?

12. The paper Failures to be celebrated: an analysis of major pivots of software startups (link) covers nearly 50 software startup pivots. In Section 2.3, the paper categorizes common types of pivots. Read this section, identify at least five pivot types, and provide a brief explanation of each.

13. Suppose we’re in 2008, before Spotify existed. You decided to create a startup to offer a music streaming service on the Internet. Thus, as a first step, you implemented an MVP.

  1. What are the features of this MVP?

  2. What hardware and operating system should the MVP be developed for?

  3. Draw a simple sketch of the MVP’s user interface.

  4. What metrics would you use to assess the success or failure of your MVP?

14. Assume you are managing an e-commerce system. In the current system (version A), the shopping cart message reads Add to Cart. You plan to conduct an A/B test with a new message, Buy Now (version B).

  1. What metric would you use for the conversion rate in this test?

  2. If the original system has a conversion rate of 5% and you want to test a 1% increase with the new message (version B), what should the sample size be for each version? To answer this, use an A/B test sample size calculator, like the one suggested in Section 3.6.