Attribute reporting checks in testing framework #756

@cecille

Description

The current set of certification tests mostly use read actions to verify device changes. This does not match the normal production flow for controllers, which mostly rely on subscriptions for change notifications rather than polling reads.

We want to retrofit the existing tests to verify subscription reporting as a side effect of their reads, to ensure that attributes being read are also properly reported.

Proposed high level design

  • at the start of each test, do a wildcard subscription to the device (setup_test)
  • build an initial cache of attribute values at this point
  • when reports are received, update the cache of reported values
  • at each read instance (probably in read_single_attribute_check_success), validate that the read value matches the cached value, or that the cached value converges to the read value within the subscription's maximum reporting interval (the ceiling)
  • shut down the subscription on test teardown
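The per-test flow above could be sketched roughly as follows. The class and method names here are illustrative assumptions, not the actual matter_testing_support API:

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical sketch of the reported-value cache described above:
# prime it from the wildcard subscription's initial report, update it
# on every incoming report, and validate against it on each read.


@dataclass
class AttributeCache:
    # (endpoint, cluster_id, attribute_id) -> last reported value
    values: dict = field(default_factory=dict)

    def prime(self, initial: dict) -> None:
        """Build the initial cache from the priming report (setup_test)."""
        self.values.update(initial)

    def on_report(self, path: tuple, value: Any) -> None:
        """Update the cache whenever a subscription report arrives."""
        self.values[path] = value

    def matches_read(self, path: tuple, read_value: Any) -> bool:
        """Check a read against the cache. In the real framework the
        caller would retry this until the subscription's max interval
        (the ceiling) elapses before declaring a mismatch."""
        return self.values.get(path) == read_value
```

The retry-until-ceiling loop is left to the caller here, since the wait mechanics depend on how the framework surfaces report callbacks.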

Potential problems

Tests that re-subscribe with keepSubscriptions=false

Some options:

  • deal with the tests individually - better if there are only a few
  • have some kind of reporting mechanism that disables these checks if such a subscription is detected?
  • mint another fabric specifically for this (has implications for RR-1.1, and whatever other test it is that tries to fill the fabric table - SC something?) - we would want to make this a long-running fabric so it doesn't fill up admin_json. This would be a problem for fabric-scoped attributes, so this option is less desirable if there are only a few tests with keepSubscriptions=false.
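The second option (a detection mechanism) could be as simple as a flag the framework flips when it observes such a subscription. A hypothetical sketch, not existing framework code:

```python
class ReportCheckGate:
    """Hypothetical guard: reporting checks stay enabled until a test
    issues a subscription with keepSubscriptions=false, which evicts the
    framework's own wildcard subscription on the same fabric."""

    def __init__(self):
        self.checks_enabled = True

    def note_subscription(self, keep_subscriptions: bool) -> None:
        # A keepSubscriptions=false subscribe tears down prior
        # subscriptions, so cached report values can no longer be trusted
        # for the remainder of this test.
        if not keep_subscriptions:
            self.checks_enabled = False

    def should_check(self) -> bool:
        return self.checks_enabled
```

This degrades gracefully: affected tests silently lose the extra checks instead of failing, which may be acceptable if they are rare.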

Changes omitted (C) and quieter report (Q) attributes

  • need to find these in the DM files and omit them from checks
  • for now you can use the most recent XML files, though there are other plans to have the DM files for the specific device available as part of the PICS effort - a problem for later
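Finding the C and Q attributes could be a small XML scan over the DM files. The element and attribute names below (quality, changeOmitted, quieterReporting) are assumptions about the DM XML layout, not the confirmed schema:

```python
import xml.etree.ElementTree as ET

# Minimal inline stand-in for a DM cluster XML file (illustrative only).
SAMPLE = """<cluster name="Example">
  <attributes>
    <attribute name="Normal"><quality/></attribute>
    <attribute name="Skipped"><quality changeOmitted="true"/></attribute>
    <attribute name="Quiet"><quality quieterReporting="true"/></attribute>
  </attributes>
</cluster>"""


def attributes_to_skip(xml_text: str) -> set:
    """Return the names of attributes marked C (changes omitted) or
    Q (quieter reporting), which should be excluded from report checks."""
    root = ET.fromstring(xml_text)
    skip = set()
    for attr in root.iter("attribute"):
        quality = attr.find("quality")
        if quality is not None and (
            quality.get("changeOmitted") == "true"
            or quality.get("quieterReporting") == "true"
        ):
            skip.add(attr.get("name"))
    return skip
```

Once the per-device DM files land as part of the PICS effort, the same scan could run against those instead of the generic spec XMLs.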

Real failures

  • There are definitely some clusters that don't report properly and those will need to be identified and fixed (in separate PRs)

Reads that don't go through read_single_attribute_check_success

  • maybe we just swap these tests to use this function? In a separate PR? Depends on how widespread this is.

Open questions that need further thought:

Test plans

  • we need a way to indicate that we're doing this in the test plans in a way that doesn't require modifying every test plan

Widespread support

  • once there's a proof of concept and the bugs are fixed, it would be good to present this to the CSG (along with a list of anything that got fixed as a result of this effort)

YAML tests

  • how do we do this in YAML? Do we need to? Or is most complex logic handled in python nowadays?

Backwards compatibility

  • how do we deal with older devices that may not have the fixes implemented here? We need to balance asking for improvement from devices vs. not unfairly punishing manufacturers
