Mosaic Ways of Working

1. Roles

In terms of Scrum, we cover the roles as follows:

Product Owner

A team of people:

  • Salman = focus on Project business requirements; runs demos for users
  • Ruth = focus on Service business requirements
  • Salman and Ruth together manage the backlog and produce sprint plans
  • Matt = focus on Platform technical requirements; Platform Owner
  • ? = focus on Project management and reporting, e.g. Sprint Reviews

Developers

In Scrum, a Developer is anyone who contributes to the delivery of backlog items.

  • Joe = UI/UX Designer
  • John = Software Developer
  • Sam = Software Developer
  • Martin = Software Developer
  • Janos = Software Developer
  • Andrew = Software Developer
  • Chris = Tester

Scrum Master

Process oversight and troubleshooting (removing obstacles to delivery) =

   Ruth and Matt for technical delivery

   Rachel and Gemini for Project Management issues

Effective stand-ups = PMs will lead start-of-day stand-ups at 9.45am

  • Ensure Sprint burndown chart and backlog is showing on the TV screen
  • Call the Start of Day to order and gather people together
  • Review Sprint burndown as a daily review of project progress
  • Referring to the backlog, make sure items from each person move along to cover:
    • What was done yesterday
    • What is to be done today
    • Any obstacles that need addressing – these should be taken offline and discussed after the stand-up to keep the session brief.
      • Attendees to make a conscious effort to flag issues/impediments
  • Ensure that impediments identified for resolution after the Stand-Up are taken forward

2. Definitions

Statuses

New – requirement approved by the business for adding to the backlog

Approved – item ready to work on, see Definition of Approved below

Committed – item added to sprint

In progress – item being worked on

Code review – build done and ready for review by a different developer

Review – code review passed and item ready to demo to PO and Tester

Test – Ready for testing, see Definition of Ready to Test below

Failed test – passed back to Devs for correction

Done – item complete and ready to ship, see Definition of Done below

We may add a 'Blocked' flag in some form in future
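
For illustration, the intended flow between these statuses can be sketched as a simple transition map. This is a hypothetical Python sketch, not team tooling; the status names come from the list above, and the structure is our reading of it.

    # Sketch of the status workflow described above (illustrative only).
    # "Failed test" loops an item back to the Devs for correction.
    TRANSITIONS = {
        "New": ["Approved"],
        "Approved": ["Committed"],
        "Committed": ["In progress"],
        "In progress": ["Code review"],
        "Code review": ["Review"],
        "Review": ["Test"],
        "Test": ["Done", "Failed test"],
        "Failed test": ["In progress"],
    }

    def can_move(current: str, target: str) -> bool:
        """Check whether a status change follows the workflow above."""
        return target in TRANSITIONS.get(current, [])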

Definition of Approved

What are the criteria for approving a backlog item?

  • It has an estimate against it - reviewed and revised if necessary
  • Customer priority has been assessed
  • It has enough information for Developers to complete the work
  • Devs/PO have decided whether to deploy (feature flag) or release the feature
  • Tester has confirmed in refinement meetings that the Acceptance Criteria are suitable for testing
  • Tester has determined if an automated test should be added for the new feature

BA + Designer supply information, but it is a shared responsibility for the team to agree to approve (or not) an item.

Changes to Item Styles have proved particularly problematic: refinements of Item Style items must include representatives of each of the project role areas before approval can be given.

Rachel/Gemini, Emma and Joe are responsible for identifying, two sprints ahead, what needs UI input.

Ready to Test

  • Item build is complete and Devs have unit tested and reviewed code
  • Accessibility tested by Devs where relevant
  • Tester and BA have received a demo of the feature (following stand-ups)
  • Product Owner has signed it off

Does NOT require a Test Case to be written, but we may review this in future if items linger too long in Test.

Testers deploy to the Devaccept environment. Testers raise bugs in whatever is the 'current' sprint, and the Devs relocate them if needed.

Definition of Done

Item

  • An item cannot be marked as done if it has any outstanding S1, S2, P1, or P2 bugs. In non-live code, these are defined in relation to what the impact would have been if live.
  • Where an item has such bugs outstanding at the end of the sprint, it is not classified as done, but is carried forward to the next sprint.
  • If the item has S3 or P3 bugs open at the end of a sprint, it can be marked Done and just the bugs carried forward. (We may choose to deploy, but not generally release such items.)
  • If the item has S4 or P4 bugs open at the end of a sprint, it can be marked Done and the bugs either carried forward or moved to the triage area of the general backlog (where they might be deemed won't-fix items), depending on how useful/easy it is deemed to fix them.

S = Severity, i.e. the impact of the issue; P = Priority, in terms of how much the organisation cares about it. By default, Severity and Priority ratings are set the same, and Severity is used to review state. Where Priority has a different value this must be taken into account.
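
As a rough illustration of these end-of-sprint rules, here is a minimal Python sketch (not team tooling). It assumes that the more urgent of Severity and Priority governs, with 1 as the most urgent, which is our reading of the paragraph above.

    def end_of_sprint_outcome(open_bugs) -> str:
        """open_bugs: iterable of (severity, priority) pairs, 1 = most urgent."""
        levels = [min(sev, pri) for sev, pri in open_bugs]
        if any(level <= 2 for level in levels):
            return "Not Done - carry the item forward to the next sprint"
        if any(level == 3 for level in levels):
            return "Done - carry bugs forward (may deploy, not generally release)"
        if levels:  # only S4/P4 bugs remain
            return "Done - carry bugs forward or move them to the Triage area"
        return "Done"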

Sprint level

  • All backlog items are built
  • Tested successfully against acceptance criteria
  • Technically documented where needed
  • Automated test updated where relevant
  • New automated test has been added to backlog if necessary (i.e. for future sprint regression testing)

Deployment to Production

  • Readiness to deploy assessed (on Tuesdays at 2pm)
    • All code in the build has a backlog item in the Sprint backlog with state of Done
  • Passed regression testing
  • Release notification email sent to users
  • Release notes produced
    • Include backlog ID no. [is this needed?]
    • To cover items being released, but not those only being deployed at this point
    • Link to user documentation
  • Support handover to 2nd line, covering the updates given:
    • Website updates/documentation done or in-hand to agreed schedule, noting items that may be feature-flagged off for release later
    • ITLC update done or planned
  • Support handover to 3rd line Support Developers

Does NOT include Acceptance by users – UAT is not a formal stage in Scrum.

Regression testing is carried out by the Tester during Projects. Outside of projects this is done by the 3rd Line Support Developers - they may also assist when needed or to cover absence.

3. Estimation approach

Estimates cover development work only, i.e. they exclude testing.

T-shirt sizes (days):

Size   Development work   UI/UX work
XS     1-2                0.5
S      3-5                0.5-2
M      5-8                2-4
L      8-13               4-8
XL     13-26              8+
XXL    >26                -

At the same time that T-shirt estimates are given, backlog items are to be assessed for the effort likely to be needed to refine them, considering Platform, Technical, Design and Testing issues, each flagged as High, Medium or Low effort.
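
Purely as an illustration, an item's estimate and traffic-light flags could be recorded as below. This is a hypothetical Python sketch; the field names are ours, and the day ranges come from the table above.

    from dataclasses import dataclass, field

    # Day ranges per T-shirt size, from the table above (None = open-ended).
    DEV_DAYS = {"XS": (1, 2), "S": (3, 5), "M": (5, 8),
                "L": (8, 13), "XL": (13, 26), "XXL": (26, None)}
    UIUX_DAYS = {"XS": (0.5, 0.5), "S": (0.5, 2), "M": (2, 4),
                 "L": (4, 8), "XL": (8, None)}

    @dataclass
    class Estimate:
        dev_size: str                 # e.g. "M" = 5-8 development days
        uiux_size: str = ""           # e.g. "S" = 0.5-2 days; blank if no UI work
        # Refinement effort per area: "High", "Medium" or "Low"
        refinement: dict = field(default_factory=lambda: {
            "Platform": "Low", "Technical": "Low",
            "Design": "Low", "Testing": "Low"})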

4. Sprint Planning

Aim for fortnightly releases on Thursday mornings in Week 1 of a sprint, releasing code from the previous Sprint.

  • Dev build to finish at the end of the Tuesday of week 2 of a sprint
  • Have a build progress review checkpoint at the same time with Devs and Testers, to check degree of confidence that the sprint will finish on time
  • Ruth and Emma to hold a sprint closure meeting at 11.30 am on the final day of the sprint
  • Followed by pre-planning, in which Ruth and Emma draft the initial items proposed for the next sprint
  • Followed by Sprint planning at noon on Friday of week 2 of the sprint, to include Devs and Tester

Emma to keep the traffic-light ratings on backlog items up to date, so that it is clear what they are when we are planning the sprints

5. Managing the backlog

The backlog is established from the set of user requirements the PO/BA has agreed with the client. The items must be approved in principle for adding to the Mosaic Platform by the Service Owner (Matt) and Delivery Manager (Ruth).

A note of the overall complexity to be added by the PO/BA as the first line of the Description of each backlog item - see traffic lights added at point of estimation, above.

More monitoring of issues is to pass from Ruth to the PMs - but PMs are to keep Ruth informed of issues as they arise

All team members are responsible for:

  • Keeping state up-to-date
  • Ordering backlog items

The principles to follow for ordering backlog items are as follows.

Sprint Backlog

Any Critical or High Severity or Priority 1 or 2 bugs at the top - as these will require an out-of-band release and so take precedence over Project work

Backlog items in priority order - they will not move position irrespective of state; these are what the Sprint is to deliver

Project and Service bugs, tagged accordingly, in order of Severity, i.e. Medium followed by Low bugs

Within each bug grouping, bugs are then ordered by State, with the newest states at the bottom and the states nearest to Done at the top

All Done bugs, of whatever severity, are moved to the bottom of the Sprint backlog - as they are finished they are no longer of interest, which frees up screen space to see active items at the top of the Sprint screen

NB. Moving the order of items in the Sprint view can have an unexpected impact in terms of re-ordering the project backlog view, so this should be checked.
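
To make the ordering concrete, here is a hypothetical Python sort key that follows the principles above. The STATE_PROGRESS ranks and field names are our own invention, not Azure DevOps fields.

    # Higher rank = nearer to Done (severity: 1=Critical, 2=High, 3=Medium, 4=Low).
    STATE_PROGRESS = {"New": 0, "Approved": 1, "Committed": 2, "In progress": 3,
                      "Failed test": 3, "Code review": 4, "Review": 5, "Test": 6}

    def sprint_order_key(item):
        """Lower key = higher on the Sprint backlog screen."""
        if item.type == "Bug":
            if item.state == "Done":
                return (4, 0, 0)                 # finished bugs drop to the bottom
            if min(item.severity, item.priority) <= 2:
                return (0, 0, 0)                 # Critical/High or P1/P2 bugs at the top
            # remaining bugs: Medium before Low, then states nearest Done first
            return (2, item.severity, -STATE_PROGRESS[item.state])
        return (1, item.rank, 0)                 # backlog items keep their priority order

    # e.g. ordered = sorted(sprint_items, key=sprint_order_key)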

Project Backlog

All items in the current sprint should be at the top of the Project Backlog, in the same order as the Sprint View. Adding items to a sprint by dragging them into the Sprint folder from the Project Backlog view, or changing the iteration path in this view, will not re-order the items, so they need to be moved manually to appear in the correct place.

In particular, any bugs added to the current sprint from the Triage area of the backlog should be moved out of the Triage area and into the Sprint area. This is the only time that bugs appear above/within the list of backlog items: all other bugs (i.e. those not being worked on in the current sprint) are below the list of backlog items.

Groupings of backlog items and bugs in the Backlog view are delineated by placeholder category items. These are, in order:

  • backlog item groupings: [tba]
  • bug groupings: [tba]

6. Sprint Reviews

Sprint Reviews have 2 elements:

  • Report on the release burndown - i.e. overall progress against the project backlog: run by one of the PMs
  • Demo of features being released in the Sprint: run by the PO (business focus)

Everyone will attend Sprint Reviews. This includes all the internal team, so that the project team (devs, testers, etc) can get an update on the overall project progress and the service team can see the new functionality being delivered. The business users will be there to see what the team have achieved this sprint and get an update on overall progress, risks and issues.

This is the main method for verifying User Acceptance. Users will also have access to try things out in the Demo environment before deployment to Production. The Stakeholder Rep may also be given a 'stakeholder' view of Azure DevOps.

For the UAS/Bodleian project, the UAS stakeholders attending will be most interested in demos, rather than velocity (Gemini will be reporting to the Project Board on this separately), whereas the Bodleian stakeholders are more likely to be interested in trying out new features on Demo.

The PO (business focus) will send out a list of what will be covered in the Review to stakeholders in advance, so that they can attend if they are interested in seeing them.

Sprint reviews will happen on Tuesdays (following the end of the sprint) at 11.00am for 45 mins.

7. Sprint Retrospectives

These have been working well.

Emma to lead and Ruth to pick up the actions list - PMs to monitor that actions are carried out

Either will run a whole retrospective session when the other is away

Retrospectives will happen shortly after the Sprint review, at 1.30pm on the same day for 30 mins.

8. Logging, Reviewing and Approving bugs

Project bugs

Logged in the current sprint. Reviewed and approved by Emma as PO (business user).

Service bugs

Logged in the Triage area. Reviewed and approved at weekly triage meetings of Support Devs, Customer Success Analyst, +/- Project Devs. Ruth as PO (Service users) to approve any items where this is not evident from existing functionality.

At the Triage meeting bugs are allocated to bug groupings in order of their Severity and pulled into the current sprint by the Service Devs as capacity allows. Any High or Critical bugs are added straight away and will have priority over Project work.

Supported devices

Desktop: On Windows 10 – latest versions of Chrome and Edge. On Windows 7 – IE11. On Mac – latest version of Safari.

Mobile/devices: On iOS – latest version of Safari. On Android – latest version of Chrome.

Best endeavours will be made to ensure other modern browsers are ‘supported’ but this will not be certified.

Bug logging key information

  • Summary
  • Replication Steps
  • Priority
  • Environment
  • Severity
  • Browser/OS Affected
  • Detected in Release/Version
  • Area of System Found
  • Description
  • Defect Type
  • Attachments
  • Linked Backlog Items

  • SUMMARY/TITLE – anyone reading this must immediately know what the defect is about
  • PRIORITY – how quickly must it be addressed
  • SEVERITY – impact it has on the end user
  • Severity and priority are independent – a defect might have a high severity but a low priority, e.g., one that affects <1% of users but causes the application to become completely unusable
  • DESCRIPTION – clear reproduction steps with attachments if necessary
  • ENVIRONMENT/BROWSER/OS INFORMATION – enables developers to know where they can reproduce the issue and if it is isolated to a specific browser
  • It also indicates the impact of the issue – a defect on a live environment is more serious than one on a dev or testing environment
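
Gathered into a single record, the key fields might look like the following. This is an illustrative Python sketch; the names are ours and do not correspond to an Azure DevOps schema.

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        summary: str                # reader must immediately know what the defect is about
        replication_steps: str      # clear reproduction steps
        priority: int               # how quickly it must be addressed (1 = most urgent)
        severity: int               # impact on the end user (1 = Critical ... 4 = Low)
        environment: str            # e.g. Devaccept, Demo, Production
        browser_os: str             # browser/OS affected
        detected_in: str            # release/version the bug was detected in
        area: str                   # area of the system where it was found
        description: str
        defect_type: str
        attachments: list = field(default_factory=list)
        linked_backlog_items: list = field(default_factory=list)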