This repository has been archived by the owner on Jul 12, 2019. It is now read-only.

Requirements for Requirements #7

Open · 6 tasks
elizabethsjudd opened this issue Jan 25, 2019 · 11 comments

Comments

@elizabethsjudd commented Jan 25, 2019

Summary

We need to be consistent with how we document requirements. There are currently multiple streams of discussion around how requirements should be written, so it would be best to collaborate and decide on a single approach.

Acceptance criteria for good requirements

  • Parsable format that can be validated via tests
  • Support user needs/stories
  • Support functional/technical requirements
  • Defined in a readable format that can be used in a regulated code (QMS) environment
  • Users are defined (confirmation our users are general, mouse, keyboard, screen reader)
  • Written requirements allow for unique requirements for each user type
@elizabethsjudd
Author

@mattrosno @dakahn @scottnath - please use this to document the work we are doing with outside sources for defining requirements.

@scottnath

Add acceptance criteria:

  • Users are defined (confirmation our users are general, mouse, keyboard, screen reader)
  • Written requirements allow for unique requirements for each user type

@mattrosno
Member

@elizabethsjudd @scottnath can you share an example QMS environment where it'd be important to consume component requirements? An example from your industry's unique needs would help us understand that requirement.


What's the distinction between a general user and the other three users? For example:

  • General user: I want to select a button
  • Mouse user: I want to click the button
  • Keyboard user: Sighted usage - I want to tab to the button and select it on enter
  • Screen reader user: Non-sighted usage - I want to tab to the button and select it on enter

Ignoring the correctness of this example, how do we test for the general user when they are really mouse, keyboard (sighted), or screen reader? Or is it more that mouse, keyboard, and screen reader users inherit general user capabilities (e.g. for the keyboard user, those keyboard tests are specific to keyboard usage and don't include general user tests)?


While parsable component requirements can serve many purposes, a large number of design system users are non-technical, so we may also need written requirements in sentence form that act more as an executive summary of component requirements.

For example, it may be easier for a designer to read the key requirements of how a component behaves, and if more information is needed to evaluate edge cases, specific user interactions, etc., then they can drill into the parsable user stories.

mattrosno changed the title from "Requirements for requirements" to "Requirements for Requirements" on Jan 25, 2019
@scottnath

general user vs the other three types

I think we could interchange "general user" with "general use". Regardless, you are correct about inheritance from general to the other user types. The general use dictates what happens (an action is performed when a button is triggered). The other three user types describe how they trigger the button and/or what their unique experience is.

Feature: Button component
  As a user
  I want to be able to access and understand a button's purpose
  So that I can successfully perform a button's action

  @contextDependent
  Scenario: Trigger button
    Given there is an enabled button 
     When I trigger said button
     Then said button's action is performed

Feature: Button via mouse
  As a mouse user...

  @contextDependent
  Scenario: Triggering button via mouse
    When I mouseover a button
    Then I can click said button to perform said button's action

Feature: Button via keyboard
  As a keyboard user...

  @contextDependent
  Scenario: Trigger button
    Given that I am focused on a button
     When I want to trigger said button
     Then I can press `SPACE`
      And I can press `ENTER`
      And the button's action is performed
      And the focus may change from the button

Feature: Button via screen reader
  As a screen reader user...

  @contextDependent
  Scenario: Trigger button
    Given I have accessed an enabled button
     When I trigger said button
     Then I am informed of the cursor's location
      And said location may have changed

So the above gives us at least these tests (sketched as test stubs below):

  • general: button can be triggered
  • mouse: button has a mouseover and can be triggered on click
  • keyboard: button can receive focus and can be triggered via specific keys
  • screen reader: test for the response the reader receives after a button is triggered
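
A minimal sketch of how some of those could surface as Mocha/Chai-style stubs, assuming a browser test runner such as Karma; the element setup and test names are illustrative, not PAL's actual exported tests.

import { expect } from 'chai';

// Illustrative stubs only: a plain <button> stands in for the component under test.
describe('button: general use', () => {
  it('performs its action when triggered', () => {
    const button = document.createElement('button');
    let triggered = false;
    button.addEventListener('click', () => { triggered = true; });

    button.click(); // every trigger path ultimately fires the click action

    expect(triggered).to.equal(true);
  });
});

describe('button: keyboard user', () => {
  it('can receive focus', () => {
    const button = document.createElement('button');
    document.body.appendChild(button);

    button.focus();

    expect(document.activeElement).to.equal(button);
    document.body.removeChild(button);
  });

  // Native SPACE/ENTER activation is browser behaviour, so it is best exercised
  // in a real browser run; a pending spec keeps the requirement visible in reports.
  it('can be triggered via SPACE and ENTER');
});

describe('button: screen reader user', () => {
  // Announcements are verified manually or with assistive-technology tooling.
  it("announces the cursor's location after the button is triggered");
});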

written requirements in sentence form

There is no doubt that individual disciplines (design/QA/OM/Dev) may want a different format for written requirements to use amongst themselves. Certainly this Gherkin:

    Given I have accessed an enabled button
     When I trigger said button
     Then I am informed of the cursor's location
      And said location may have changed

is captured in this sentence:

When a screen-reader user triggers an enabled button, they are informed of the cursor's location and should understand that the location may have changed.

as well as being captured by these numbered steps:

  1. Enabled button is triggered by screen reader user
  2. Screen reader announces cursor's location
  3. Announcement should be clear when describing a changed location

The primary issue is that we need a single source of truth for requirements.

Therefore, it would need to be understood that changes to the original requirements in the spec would also require these disciplines to update their versions. Whoever is in charge of each discipline's version of the requirements would need to understand the format chosen by this spec repo and keep their translation up to date.

Luckily, this means the spec repo only needs to be concerned with maintaining the canonical set of requirements. The canonical requirements should be something that can be understood by the widest range of users - in this case our users are probably:

  • Developers
  • Designers
  • OMs
  • System architects
  • Computers (requirements should be programmatically parsable for use via code/tests; see the sketch after this list)
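
To make the "Computers" item concrete, here is a minimal sketch in plain TypeScript (no Cucumber tooling; the feature text and function names are only illustrative) of pulling Feature and Scenario names out of Gherkin so other tooling can cross-reference requirements by name.

// Minimal sketch: extract Feature and Scenario names from Gherkin text.
interface ParsedFeature {
  feature: string;
  scenarios: string[];
}

function parseFeature(source: string): ParsedFeature {
  const lines = source.split('\n').map((line) => line.trim());
  const feature =
    lines.find((line) => line.startsWith('Feature:'))?.slice('Feature:'.length).trim() ?? '';
  const scenarios = lines
    .filter((line) => line.startsWith('Scenario:'))
    .map((line) => line.slice('Scenario:'.length).trim());
  return { feature, scenarios };
}

const buttonFeature = `
Feature: Button component
  Scenario: Trigger button
    Given there is an enabled button
    When I trigger said button
    Then said button's action is performed
`;

console.log(parseFeature(buttonFeature));
// -> { feature: 'Button component', scenarios: [ 'Trigger button' ] }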

@elizabethsjudd
Author

@mattrosno I have some meetings coming up this week and next around QMS and requirements. Right now WH is not consistent across the whole business and we are actively working on getting everyone in sync. Through these conversations it has become clear that there are multiple levels of requirements, each with different formats, and I'm also working to determine where PAL/Carbon can best align in these efforts. As I find out more and more formal decisions are made, I will add them to this issue.

@mattrosno
Member

[Image: checkbox-requirements]

Large image warning... you may want to save the image and view it outside of GitHub. I wanted to quickly look at three ways to describe a checkbox and visualize the overlap. This isn't perfect, but hopefully it's enough to help.

All three of these describe checkbox for different purposes:

  • W3C WAI-ARIA: what is a checkbox and how do you accessibly use checkbox?
  • Carbon Usage: what is a checkbox and when should I use it vs. another component?
  • Gherkin: how does a checkbox work (in detail) given multiple user types?

Overlap between WAI-ARIA and Carbon is yellow; between WAI-ARIA and Gherkin, blue; places reflected in both Carbon and Gherkin are green.

Two approaches

We're evaluating two approaches to written requirements so far. The first is behavioral Gherkin syntax; the other is rules, as demonstrated in #10.

Rules (or sets of rules) are nice because (see the sketch after this list):

  • Don't have to maintain separate requirements and validate coverage of those separate requirements
  • Each rule can have an identifier - in spec test reports we could link to the failing rules that would be published
  • Each rule can have metadata, giving the ability to define severity, cross-reference standards, etc.
  • Validation of each mounted component is not dependent on Karma/Mocha/Chai/etc.
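
This is not necessarily the format used in #10; it's just a hypothetical sketch of what a rule with an identifier, metadata, and framework-agnostic validation could look like.

// Hypothetical rule shape: an id that test reports can link to, metadata for
// severity and cross-referenced standards, and a validation that runs against
// a mounted element without depending on any test framework.
interface Rule {
  id: string;
  description: string;
  severity: 'error' | 'warning';
  references: string[];
  validate(element: HTMLElement): boolean;
}

const buttonIsFocusable: Rule = {
  id: 'button-is-focusable',
  description: 'A button must be reachable with the keyboard.',
  severity: 'error',
  references: ['WAI-ARIA Authoring Practices: Button'],
  validate(element) {
    return element.tabIndex >= 0;
  },
};

// A runner could publish the ids of failing rules in spec test reports.
function runRules(element: HTMLElement, rules: Rule[]): string[] {
  return rules.filter((rule) => !rule.validate(element)).map((rule) => rule.id);
}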

Behavioral language is nice because:

  • You can fully describe features with supporting justification, scenarios, specific actions and multiple outcomes

Blended approach

If we find that behavioral requirements are necessary to specify everything a component implementor would need to adhere to the spec (visuals aside), maybe there's a modified approach that incorporates the best of both? E.g. tag or comment each Gherkin scenario with its accompanying rule?

Gherkin is great at describing "what is a component and exactly how does it behave". Rules, as opposed to describe/its, are great at being testing-library agnostic, with better reporting capabilities to help guide the developer when spec tests fail.
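
As a hypothetical sketch of that tagging idea (tag and rule names invented for illustration), a scenario tag like @rule:<id> could be checked against the rule registry so every tagged scenario maps to a published rule:

// Hypothetical tag-to-rule check: every @rule:<id> tag on a scenario should
// resolve to a rule in the registry.
const ruleIds = new Set(['button-is-focusable', 'button-click-performs-action']);

const scenarioTags: Record<string, string[]> = {
  'Trigger button': ['@contextDependent', '@rule:button-click-performs-action'],
  'Trigger button via keyboard': ['@rule:button-is-focusable'],
};

function unknownRuleTags(tags: Record<string, string[]>, rules: Set<string>): string[] {
  return Object.values(tags)
    .flat()
    .filter((tag) => tag.startsWith('@rule:'))
    .map((tag) => tag.slice('@rule:'.length))
    .filter((id) => !rules.has(id));
}

console.log(unknownRuleTags(scenarioTags, ruleIds)); // [] when every tag maps to a rule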

Design users

Back to that image above - it's clear to me that yes, a designer could read and understand the Gherkin file, but not well. It's far too big because it primarily serves testing purposes. I'd almost want to define two designer user types: component designer and designer using components.

If we go with Gherkin for component requirements, I could see this happening: the Carbon website references the spec version in its footer, just like:

Vanilla Components version ^9.70.0
React Components version 6.85.0
Last updated February 6, 2019

The Carbon website speaks to the designer using components by translating spec Gherkin as necessary to describe why decisions were made, when to use this component vs. another, etc. As the spec evolves, it's on just the component designer to know the spec Gherkin syntax and keep the Carbon website current. And of course, the spec is there for anybody to reference.

@elizabethsjudd
Author

When comparing these three examples, I feel like you are missing the "big" picture of what PAL is providing:

  • W3C WAI-ARIA: PAL provides this information in the UX notes and a11y notes for a given component. These pages are intended for designers, developers, and testers and are meant to be "scannable".
  • Carbon usage: PAL provides this information in the UX notes. When to use a component is a guideline, not a requirement. Again, this page is meant mainly for designers, is design centric (talks about design decisions/reasoning), and is easy to read. It's instructions for the "users of Carbon", not the "users of the component".
  • Gherkin: In PAL this is the "source of truth" and is instructions for how "users of the component" should experience it. It doesn't care what the actual visualization is (hence the existence of the UX notes). If dev just wrote "rules", then there isn't documented collaboration with design. We've found that doing this collaboration helps a lot with dev building to design's desired expectations (for all users) from the beginning instead of having to rework a component. The Gherkin is meant to describe "how" a user does something. (With a new structure that is being worked on to align with QMS, the Feature is the "what a user does" and aligns to "user needs" in the UX documentation.) Gherkin can also be parsed to verify that "rules" align with the requirements, so the maintenance is minimal and we can trust they are in sync. I do think we could extend our Gherkin to basically create the a11y notes pages as well, since it's very standardized, but we just haven't gotten there yet. The only things that have to be updated manually are the requirements and UX notes; everything else is automated and verified to be in sync. (And who knows, if there are standards in the UX notes then we could probably even parse those as well to verify that they are in sync.)

@mattrosno
Member

@elizabethsjudd I should have clarified that for the above comparison I was just looking at the Gherkin aspect of PAL, not everything that PAL specifies (UX notes, a11y notes, etc.). Just zooming out for a second to see what clarity that brings.

High-level guidelines for designers and developers regarding UX notes, a11y notes, visual appearance... at this point I'm expecting those to be presented in the Carbon website (carbon-website repo) given the audience, although if those guidelines are specific to a spec version, maybe they live in the spec as a dependency in the website.

I look forward to learning more about how QMS influences our component requirements format.

@elizabethsjudd
Author

@mattrosno I would agree that the "UX notes" would stay in the carbon-website. They should be aligned to whatever version the design kit is on.

My point was that without looking at everything PAL is providing, you're comparing apples and cars. I wanted you to zoom out beyond the Gherkin to see, at a high level, what is provided to our users: the Gherkin is only one aspect, but it's an important piece because it acts as a single source of truth and drives all of the other documentation (most of which is automated).

FYI: For QMS we are NOT going to be able to use requirements in sentence form. We need something that is parsable and traceable, which paragraphs on a website do not provide.

@mattrosno
Member

I think a QMS example would help us understand the requirement. E.g. would components be traced at product build, during audits, etc.? Would it be looking at a specific web page or user flow and determining every user and functional requirement that contributed to that web page or user flow? An example would help answer questions like those.

@elizabethsjudd commented Feb 7, 2019

@mattrosno An example of QMS is difficult to provide, as a QMS (quality management system) is basically a large database of version-controlled documents for a single application. To create and access the documentation you have to go through extensive training for the system (currently WH has 2 different systems used by our market segments, but is in the process of aligning them). Here is the high-level workflow of building in a regulated code environment:

  1. a need is defined in a QMS document (this need may be maintenance/bug, new feature, new application, etc.)
  2. the need is broken down into user, system, and compliance needs in a QMS document
  3. design performs usability tests based on the needs using wireframes/prototypes/high-fidelity mockups
  4. design input documentation is created from the test results in a QMS document
  5. needs and design input are analyzed for general issues, risks, and hazards in a QMS document
  6. requirements are updated to resolve each issue, risk, and/or hazard in a new version of the needs QMS document
  7. steps 3 - 6 are repeated until risks and hazards are acceptable
  8. functional requirements are created and mapped to each need in a QMS document
  9. developers build code to QMS documented requirements
  10. automated and manual tests are executed and mapped to each QMS requirement
    (there MUST be at least 1 test for each requirement)
  11. An application is ready to move to a higher stage in the release process
  12. Complete regression test is performed
  13. Repeat steps 1 - 12 for EVERY change to source code.

When an application is ready to be moved to a new release stage, a complete review of all the documentation needs to be performed. If anything is missing, they must go back to the appropriate step and start again.

If an application is audited (internally or externally), there must be traceable test documentation showing that ALL requirements have been properly tested.

So that's a lot of work.... we know, and Carbon and PAL are not expected to be "regulated" environments.

When a regulated environment uses a third-party tool (or an internal tool that they do not own), that tool is classified as either "trusted" or "not-trusted". They must lock the version number of the tool they are using, as an update is considered a code change.

Non-trusted tools require the application to perform steps 1 - 13 for each piece of the tool used by the application. They are typically avoided because it requires a lot of work. Currently, both Carbon and PAL are non-trusted tools. PAL has a POC for becoming a trusted tool with the introduction of our latest requirements test setup.

To be a trusted tool, there needs to be traceable and version-controlled documentation of requirements and tests. This allows the application to reference that documentation in its own without having to go through all 13 steps. When updating to a newer version of the tool, the documentation is simply reviewed instead of re-performing all of the tests, saving a lot of work for the application. Being a trusted tool allows application developers to focus their time on meaningful work instead of re-testing reusable components across each of their applications.

If an application were to deviate from the implementation of a component, the tool would become "untrusted". By providing the exportable tests, we are also increasing alignment with the documented requirements in the application itself and providing tests to the developers so they can again focus on more meaningful tests and make sure that nothing is overlooked when an audit is performed.

PAL's POC to be a trusted source provides (a rough sketch of the verification step follows this list):

  • requirements that map to user needs --> Gherkin
  • tests for each requirement --> our exported tests
  • verification that requirements all have tests --> our test parser that Scott just built
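
To illustrate the verification idea only (this is not the actual parser Scott built; requirement and test names are made up), the check boils down to flagging any requirement that has no mapped test:

// Illustration of the verification step: flag any requirement (Gherkin
// scenario) that has no mapped test.
const requirements = ['Trigger button', 'Trigger button via mouse', 'Trigger button via keyboard'];

const testsByRequirement: Record<string, string[]> = {
  'Trigger button': ['button performs its action when triggered'],
  'Trigger button via mouse': ['button performs its action on click'],
  // 'Trigger button via keyboard' has no test yet, so it should be flagged.
};

const uncovered = requirements.filter(
  (requirement) => (testsByRequirement[requirement] ?? []).length === 0
);

if (uncovered.length > 0) {
  throw new Error(`Requirements without tests: ${uncovered.join(', ')}`);
}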
