
[RFC] feat(accordion): add initial rules #10

Open · wants to merge 7 commits into master

Conversation

@joshblack (Contributor) commented Jan 31, 2019

Adds a proof-of-concept for authoring rules for the Accordion component. Meant primarily for discussion and demonstration.

When setting out to build this, there were a couple of questions I wanted to have answered:

  • How does one run the spec on their components?
    • How does one run the spec on a specific component?
    • How does one run specific rules?
  • How does an individual write rules for a component?
  • How does an individual test rules they write for a component?
    • This might sound a bit meta given that we would be testing the spec itself, but there are situations where it'd be worthwhile, especially for the developer experience of error messages
  • How should errors be reported? Should we report on enhancements/optional requirements?
  • How do we support ambiguity in the spec? For example, supporting enter or space for a command.

With this PR, I'm hoping to answer the first three questions, and I'll leave some thoughts on the last two as well.

How does one run the spec on their components?

When using @carbon/spec, one will be able to import createRunner or a default runner to incorporate into their test suite. This runner does not require any specific testing tools and can run in any DOM-like environment, such as jsdom. The pattern for using these tools in a test environment would roughly follow:

import { createRunner, rules } from '@carbon/spec';

test('spec', async () => {
  const runner = createRunner(rules);

  runner.beforeEach(context => {
    // Use context to render the component and scenario in your framework
    // and return it
    return renderedMarkupInDOM;
  });

  runner.afterEach(node => {
    node.parentNode.removeChild(node);
  });

  const report = await runner.run();
  expect(report.violations.length).toBe(0);
});

Most likely, this pattern would be superseded by running rules in a component test file. For example,

import { createRunner, rules } from '@carbon/spec';

test('component', async () => {
  const runner = createRunner(rules.component);

  runner.beforeEach(context => {
    // Use context to render the component and scenario in your framework
    // and return it
    return renderedMarkupInDOM;
  });

  runner.afterEach(node => {
    node.parentNode.removeChild(node);
  });

  const report = await runner.run();
  expect(report.violations.length).toBe(0);
});

If you are working on a new library and want to test your implementation incrementally, you can use the only and exclude options to filter which rules are run. For example:

import { createRunner, rules } from '@carbon/spec';

test('component', async () => {
  const runner = createRunner(rules.component, {
    only: ['component.html'],
  });

  runner.beforeEach(context => {
    // Use context to render the component and scenario in your framework
    // and return it
    return renderedMarkupInDOM;
  });

  runner.afterEach(node => {
    node.parentNode.removeChild(node);
  });

  const report = await runner.run();
  expect(report.violations.length).toBe(0);
});

An example of this pattern can be found in src/rules/__tests__/accordion-test.js.

How does an individual write rules for a component?

Rules are located under src/rules. There is an index entrypoint at src/rules/index.js for all rules. Component rules are located under src/rules/component-name.js. Tests for a component are located under src/rules/__tests__/component-name-test.js.
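
Concretely, that layout looks like:

src/rules/
├── index.js                     # entrypoint re-exporting all rules
├── component-name.js            # rules for a single component
└── __tests__/
    └── component-name-test.js   # tests for that component's rules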

In a specific component rule file, there should be an export named rules that is of type Array<Rule>. Each Rule looks like:

type Rule = {
  // The unique identifier for the rule
  id: String,
  // A description of the rule
  description: String,
  // Optional URL to reference for the rule
  reference?: String,
  // Severity level of the rule
  level: Level,
  // Any context for this rule; this will hold the domain model
  context: any,
  // Used by the runner to validate that a given HTMLElement adheres to the rule
  validate(root: HTMLElement, context: any): Array<ValidationError> | void,
};

type Level = 'error' | 'warning' | 'optional';

The rules are executed by the runner code shown above. Before each rule, the runner will call beforeEach with that rule's context. This hook should return a valid HTMLElement that is rendered in the DOM. Once beforeEach has completed, validate is called for the given rule. Rules can report validation errors by returning a single ValidationError or an array of ValidationErrors.
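
As a rough sketch, a rule following this shape might look like the following. The id, the check itself, and the import path for ValidationError are illustrative assumptions, not taken from the PR:

// The import path for ValidationError is assumed for illustration
import { ValidationError } from '../tools/validate';

export const rules = [
  {
    // Illustrative rule id
    id: 'accordion.heading',
    description: 'Accordion headings should render as buttons',
    level: 'error',
    context: {},
    validate(root) {
      // `root` is the HTMLElement returned from the `beforeEach` hook
      if (!root.querySelector('button.bx--accordion__heading')) {
        return new ValidationError([
          'Expected the accordion heading to be rendered as a <button>',
        ]);
      }
    },
  },
];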

In general, there will be several categories of rules for a component:

  • Verifying markup
  • Verifying markup changes after interacting with a component
  • Verifying different input options to change component state
  • Verifying ARIA attributes

Verifying markup

To verify markup, there is a diff utility located under src/tools/diff.js. This utility takes in an expected tree and the actual tree and returns an array of validation errors for any mismatches.

The expected tree should be built using either createExpected from src/tools/createElement.js or spec from src/tools/html.js. Both tools are equivalent in their output; however, spec allows the creation of a tree using tagged template literals, while createExpected uses function calls. In practice, this looks like:

createExpected('h1', { class: 'some-class' }, 'Some content');
// Versus
spec`<h1 class="some-class">Some content</h1>`;

These helpers generate a tree of expected values that is compared against an HTMLElement rendered in the DOM.
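
Putting diff and these helpers together, a markup rule could compare the expected tree against the rendered node inside validate. This is a sketch; the exact diff signature is assumed from the description above:

import { diff } from '../tools/diff';
import { createExpected } from '../tools/createElement';

{
  id: 'component.html',
  description: 'The rendered markup should match the expected tree',
  level: 'error',
  validate(root, context) {
    const expected = createExpected('h1', { class: 'some-class' }, 'Some content');
    // `diff` returns an array of validation errors for any mismatches
    return diff(expected, root);
  },
}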

However, createExpected and spec are not limited to literal DOM values for attributes. Instead, one can pass a custom matcher for a property. This matcher has the signature:

interface Matcher {
  (node: HTMLElement): Array<ValidationError> | undefined
}

For example, we might want to test that the implementation specifies an aria-controls attribute whose value matches the id of another node. A custom matcher lets us verify that the implementation does this as expected. We are currently using this strategy in accordion to verify aria-controls:

createExpected(
  'button',
  {
    class: 'bx--accordion__heading',
    'aria-expanded': 'false',
    // Custom matcher: verify that `aria-controls` matches the id of the
    // sibling content panel
    'aria-controls': node => {
      const attribute = node.getAttribute('aria-controls');
      const id = node.parentNode.childNodes[1].id;
      if (attribute !== id) {
        return [
          [
            'Expected the value for `aria-controls` to match the id of the panel',
            attribute,
            id,
          ],
        ];
      }
    },
  },
  createExpected('svg', {
    class: 'bx--accordion__arrow',
  })
)

In the case of accordion, we know that the node containing aria-controls is a sibling of the content panel with the id. We can then use browser APIs inside the custom matcher to verify that they match.

Verifying markup changes after interacting with a component

Often, we will want to write rules that check the rendered HTML after an interaction occurs. For example, with a menu or an accordion, we would want to verify that, after clicking a node, the aria-expanded value has been properly updated.

Inside of a rule, we can use the default DOM APIs to find an HTMLElement, call .click() on it, and then make an assertion on the aria-expanded value. There is an example of this in src/rules/accordion.js:

{
  validate(root, context) {
    const headers = root.querySelectorAll('.bx--accordion__heading');

    for (let i = 0; i < context.children.length; i++) {
      const header = headers[i];
      header.click();

      if (header.getAttribute('aria-expanded') === 'false') {
        return new ValidationError([
          'Expected aria-expanded to be true after the heading is clicked',
        ]);
      }
    }
  },
}

Here, we use context to determine how many accordion headers we need to verify. We then go through each header, click it, and verify its expansion state. If there is a mismatch, we report it with a ValidationError.

How does an individual test rules they write for a component?

Surprisingly, this section is very similar to the runner section. We can test our rules by using createRunner and filtering to specific rule ids with the only option. We test our accordion spec inside of src/rules/__tests__/accordion-test.js using this strategy.
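
As a sketch, a rule test that filters to a single rule might look like this. Both rules.accordion and the 'accordion.html' id are assumptions patterned after the examples above:

import { createRunner, rules } from '@carbon/spec';

test('accordion markup', async () => {
  const runner = createRunner(rules.accordion, {
    // Only run the rule under test so failures are easy to attribute
    only: ['accordion.html'],
  });

  runner.beforeEach(context => {
    // Render the accordion for the given rule context and return the node
    return renderedMarkupInDOM;
  });

  runner.afterEach(node => {
    node.parentNode.removeChild(node);
  });

  const report = await runner.run();
  expect(report.violations.length).toBe(0);
});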

Open Questions

  • How natural does it feel to write expected trees using createExpected or spec? Can it be improved?
  • How could we improve upon ValidationError to capture more information? Currently, reported violations contain little information to help debug the error
  • How should we report on XOR situations where an implementor could choose strategy A or B?

FAQ

How does error reporting work if there are multiple errors in a rule?

Currently, the strategy is to fail fast in our checking. If we encounter an error in a rule, we exit early so that the implementor is not overwhelmed with errors.

Is the format of tests in src/rules/__tests__/accordion-test.js what will be followed?

What currently exists in the test file is a minimal set of tests to verify the behaviors discussed above. I expect a good number of these checks to be covered by shared utilities, rather than verifying that each component spec accurately reports on things like HTML mismatches.

@joshblack joshblack requested a review from mattrosno January 31, 2019 21:06