How to Make Software at Rosie

The Development projects in JIRA use a lightweight style of Agile Scrum management to allow us to plan and release software efficiently and effectively on a regular basis. To support this, it is important that the following procedures and conventions be followed to avoid miscommunication.

This document currently applies to the ROS, RA, and SSO projects.

First, read this: https://en.wikipedia.org/wiki/Scrum_(software_development)

Ground Rules

  • Developers and designers are only expected to take action on issues that are "Open" or "Reopened" and assigned to them. Mentioning someone on a ticket does not mean they need to take action; mentions are informational only.
  • Tickets should be short and atomic. If a ticket takes more than a short paragraph and/or a few bullet points to describe, it's probably multiple tickets.
  • The Description field completely describes what the ticket is supposed to do. If a ticket changes, update the Description and note what changed and why in a Comment; developers will not implement functionality that is described only in comments.
  • Use issue priorities responsibly (see below).

Creating a Ticket

When creating a ticket, the important decisions are:

  • What type of ticket is this?
  • How high priority is this ticket?
  • Who should I assign it to?

Once you've figured these out, all that remains is to write a concise, descriptive, and comprehensive ticket.
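
As a concrete illustration, here is roughly what filing a ticket looks like through the JIRA REST API. This is a sketch only: the host, credentials, and assignee below are placeholders, and field names can vary with JIRA version and project configuration.

    # Sketch: filing a Bug in the ROS project via JIRA's REST API.
    # Host, credentials, and usernames are placeholders.
    import requests

    JIRA_URL = "https://jira.example.com"  # placeholder host

    payload = {
        "fields": {
            "project": {"key": "ROS"},
            "issuetype": {"name": "Bug"},
            "summary": "Saved credit card is not offered at checkout",
            # The Description is the source of truth: what's broken,
            # how to reproduce it, and the expected behavior.
            "description": "Steps to reproduce, observed and expected behavior here.",
            "priority": {"name": "Major"},
            "assignee": {"name": "some.developer"},  # placeholder
        }
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload,
                         auth=("reporter", "api-token"))  # placeholder auth
    resp.raise_for_status()
    print("Created", resp.json()["key"])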

Ticket Types

Bug

Something's not working the way it is supposed to. A Bug should describe what is not working, how to reproduce it, and what the behavior is expected to be (not what you want it to be).

Required Fields

  • Affects Version: The version of the software affected by the bug. For bugs affecting production, this is the version released to production. For regressions during testing, this is the version being tested.
  • Fix Version: This is the version where the bug will be fixed. Set this to 'BUGS' for newly reported issues.
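
In the REST sketch above, these map to JIRA's standard "versions" (Affects Version) and "fixVersions" (Fix Version) fields; the affected version number here is made up:

    # Continuing the ticket-creation sketch: version fields for a Bug.
    payload["fields"]["versions"] = [{"name": "2.4.1"}]    # made-up affected version
    payload["fields"]["fixVersions"] = [{"name": "BUGS"}]  # triage bucket per this doc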

User Story

A User Story is a business-language description of a small functionality requirement within the application. The Wikipedia article contains some examples of formatting and some of the philosophy behind the use of User Stories.

During the Sprint Planning process, the development team will estimate the difficulty of User Stories in Story Points; anything that is too complex (i.e. more than 13 points) will need to be broken down into multiple User Stories, which can be grouped using an Epic. A 13-point story represents an estimated one day of work, which is the absolute upper limit of the complexity that should be covered by a single Story.

Examples

  • "As a customer, I want to be able to save my credit card during the checkout process so I can checkout faster next time".
  • "As a retailer, I want to receive a notification of which orders are paid when I am sent an ACH so I can reconcile my accounts."

Requirement

Because user stories are intentionally high-level and intended to capture core business functionality, we will use the "Requirement" ticket type to capture functional and non-functional requirements around a User Story. For a story to be completed, all of its requirements must pass testing.

Design Element

Design elements belong to User Stories, and describe a visual element, as created by the Creative team to fulfill the User Story and its requirements. Design elements should be atomic and easily testable, such as a button or a widget. For larger-scale design constructs, the layout and the sub-elements should be broken down into separate design elements.

Examples

  • The design of a saved address.
  • The design of the "Add Address" form.
  • The layout of the "Address" widget, describing how the various components work together.

Behavior

A behavior describes the results of a given action taken by a user.

Technical Task

Technical tasks are used by the development team to track non-user-facing elements of completing user stories, such as database and code changes.

Feature Ticket Priority Levels

These apply to User Stories about new features; they are intended to roughly capture the business value of a given User Story to the company. These do not strictly correlate to the Rank of a ticket within the project, but will be used to influence Sprint planning.

  • Blocker: Don't use this for new features.
  • Critical: Don't use this for new features.
  • Major: This feature is going to rock the socks off of our users, or measurably reduce our support burden.
  • Minor: This is definitely neat.
  • Trivial: This would be nice to have, but is not expected to be earth-shaking for us or our users.

Bug Ticket Priority Levels

  • Blocker: Production is down or cannot complete basic functionality. Fix timeline: drop everything.
  • Critical: Production performance or behavior is substantially impaired, but workarounds are possible to continue business. Fix timeline: hotfix, same day.
  • Major: This bug results in a loss of function. Fix timeline: hotfix, 1-2 days.
  • Minor: This bug can be worked around but results in operational friction. Fix timeline: 1-2 releases.
  • Trivial: Removing this bug would be nice to have. Fix timeline: who knows.

Story Points

Task effort is estimated using "points". These are an arbitrary representation of the relative perceived value or difficulty of a ticket. Points are used because, in practice, humans are terrible at estimating things. The point scale is based on a rounded Fibonacci sequence, representing the increased uncertainty that goes along with estimating larger chunks of work. In general, User Stories worth more than 13 points are considered too complex to be accepted for a Sprint. Story point assignments are at the sole discretion of the Development Team (this includes Creative).

  • 1: Trivial change requiring minimal effort to implement.
  • 2: Minor change.
  • 3: Need a bathroom break.
  • 5: Caffeine required.
  • 8: Roughly equivalent to a half day of work.
  • 13: A full-day project for a pair of developers.
  • 21: Too big, but close to being doable.
  • 100: I can't even.
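
For the curious, a minimal sketch of where the scale comes from (the 100 is just a "way too big" sentinel, not a Fibonacci number):

    # Generate the story point scale from the Fibonacci sequence.
    def fibonacci_scale(limit=21):
        a, b = 1, 2
        while a <= limit:
            yield a
            a, b = b, a + b

    print(list(fibonacci_scale()) + [100])  # [1, 2, 3, 5, 8, 13, 21, 100]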


How Stories Become Software

Documenting new Stories, Epics, Bugs, and Requirements in JIRA is an ongoing process. Over time, these tickets build up what is known as the Backlog, which is essentially a to-do list for software development. The Backlog is maintained by the Product Owner, which may initially be a shared responsibility (implementation pending). The Product Owner is responsible for prioritizing the Backlog based upon input from various stakeholders, including Customers, Creative, Biz Dev, and Development.

Sprint Planning

Every two weeks, a Sprint Planning Meeting will be held to determine what we will be working on during the next Sprint. This meeting will last no longer than two hours, and will accomplish the following:

  • Prioritize the Backlog - the priority of the Stories in the Backlog will be set based on stakeholder input. Business Value Points should be assigned prior to this meeting for Stories near the top of the Backlog.
  • Estimate the Backlog - the development team will estimate the difficulty of each item considered for the Sprint (in Story Points).
  • Establish the Sprint Backlog - the development team will add the appropriate amount of work from the top of the Backlog to the Sprint Backlog, the body of work they will aim to accomplish during the next Sprint.

User stories that are "too big" will be sent back to the Reporter to be broken down, usually into several stories as part of an Epic.
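
To make "the appropriate amount of work" concrete, Sprint capacity is typically bounded by the team's recent velocity. A sketch of the selection logic (the velocity figure and ticket numbers are invented):

    # Greedily fill the Sprint Backlog from the top of the ranked Backlog.
    VELOCITY = 40  # points per two-week Sprint; an invented figure

    backlog = [  # (ticket, estimated points), already ranked by priority
        ("ROS-101", 8), ("ROS-102", 13), ("ROS-103", 21),
        ("ROS-104", 5), ("ROS-105", 3),
    ]

    sprint_backlog, remaining = [], VELOCITY
    for ticket, points in backlog:
        if points > 13:  # too complex for a single Story
            print(f"{ticket}: send back to the Reporter to split into an Epic")
            continue
        if points <= remaining:
            sprint_backlog.append(ticket)
            remaining -= points

    print(sprint_backlog)  # ['ROS-101', 'ROS-102', 'ROS-104', 'ROS-105']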

Sprint

Each Sprint will last for two weeks, during which the Development Team (which includes software developers and designers) will work to burn down the Sprint Backlog. This is tracked in JIRA via the "Work" pane of an Agile Board.
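
The burndown is just the committed points remaining over time, which is what the Agile Board's chart plots. A sketch with invented numbers:

    # Remaining Sprint points at the end of each working day.
    committed = 40
    completed_per_day = [0, 5, 3, 8, 0, 2, 13, 5, 0, 4]  # invented data

    remaining = committed
    for day, done in enumerate(completed_per_day, start=1):
        remaining -= done
        print(f"Day {day:2d}: {remaining} points remaining")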

During the course of a Sprint, ideally no new Stories are added; in practice, bugs and urgent requirements will arise. When new items are added to an in-progress Sprint, a mini Sprint Planning meeting will be held to review the scope of the Sprint. As a result, other Stories may be removed from the Sprint to make room.

Reporters of User Stories ("Customers" in Agile parlance) are encouraged to review Stories marked as complete by Development during the course of the Sprint. In general, the development team will deploy nightly builds during the Sprint process (as per usual, availability and uptime guarantees are not made for the development server).

Sprint Review

At the Sprint Review, the Development Team will:

  • Review what was accomplished
  • Review what was not accomplished
  • Demo any completed work from the Sprint to the rest of the company/stakeholders.

Sprint Retrospective

The Sprint Retrospective allows the team to make continuous process improvements, and focuses on:

  • What went well during the Sprint?
  • What could be improved in the next Sprint?

Testing and Release

Each Sprint results in a Potentially Shippable Increment (PSI). Our current policy is to ship every PSI that we produce after appropriate testing. Because the testing process can vary in length, the time between releases may not match the Sprint period.

One flaw with the current Rosie testing procedure is that it is very disruptive to the ongoing development process, as questions and regressions tend to distract developers from concentrating on new work. To avoid this going forward, the testing process will now take a more measured approach:

  1. The testing team will review the tickets in the release and ensure that they are appropriately implemented. If all the requirements for a User Story are met, it is said to be "Accepted".
  2. The testing team will identify any Regressions (non-specified changes in behavior between the current software and the new release) and document these.
  3. The development team will review rejected User Stories and Regressions on a daily or other appropriate basis, not in real time.
  4. Once all User Stories are accepted, the final testing script will be run. Assuming this passes, the Release Candidate will be shipped to production.

As we continue to strengthen our automated testing procedures during the development process, the goal is to greatly reduce regressions and make most testing about validating that User Stories are properly implemented. Initially, it is expected that this testing procedure will take between 2 and 5 days to fully complete.