python-kasa/devtools

Tools for developers

This directory contains some simple scripts that can be useful for developers.

dump_devinfo

  • Queries the device and returns a fixture that can be added to the test suite
Usage: dump_devinfo.py [OPTIONS] HOST

  Generate devinfo file for given device.

Options:
  -d, --debug
  --help       Show this message and exit.
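
For example, assuming the command is run from the repository root against a device at 192.168.0.1 (the address is a placeholder for your own device), a fixture could be generated with:
$ python devtools/dump_devinfo.py 192.168.0.1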

create_module_fixtures

  • Queries the device for all supported modules and outputs module-based fixture files for each device.
  • This could be used to create fixture files for module-specific tests, but it might also be useful for other use cases.
Usage: create_module_fixtures.py [OPTIONS] OUTPUTDIR

  Create module fixtures for given host/network.

Arguments:
  OUTPUTDIR  [required]

Options:
  --host TEXT
  --network TEXT
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.
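
For example, to dump module fixtures for a single device into a local directory (the address and output directory below are placeholders), a run might look like:
$ python devtools/create_module_fixtures.py --host 192.168.0.1 fixtures/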

parse_pcap

  • Requires dpkt (pip install dpkt)
  • Reads a pcap file and prints out the device communications
Usage: parse_pcap.py [OPTIONS] FILE

  Parse pcap file and pretty print the communications and some statistics.

Options:
  --help  Show this message and exit.
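
For example, assuming a capture saved as dump.pcap (the filename is a placeholder):
$ pip install dpkt
$ python devtools/parse_pcap.py dump.pcap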

perftest

  • Runs several rounds of update cycles for the given list of addresses and prints out performance statistics
Usage: perftest.py [OPTIONS] [ADDRS]...

Options:
  --rounds INTEGER
  --help            Show this message and exit.
$ python perftest.py 192.168.xx.x 192.168.xx.y 192.168.xx.z 192.168.xx.f
Running 5 rounds on ('192.168.xx.x', '192.168.xx.y', '192.168.xx.z', '192.168.xx.f')
=== Testing using gather on all devices ===
              took
             count      mean       std      min       25%       50%       75%       max
type
concurrently   5.0  0.097161  0.045544  0.05260  0.055332  0.088811  0.143082  0.145981
sequential     5.0  0.150506  0.005798  0.14162  0.149065  0.150499  0.155579  0.155768
=== Testing per-device performance ===
                           took
                          count      mean       std       min       25%       50%       75%       max
id
<id>-HS110(EU)   5.0  0.044917  0.014984  0.035836  0.037728  0.037950  0.041610  0.071458
<id>-KL130(EU)   5.0  0.067626  0.032027  0.046451  0.046797  0.048406  0.076136  0.120342
<id>-HS110(EU)   5.0  0.055700  0.016174  0.042086  0.045578  0.048905  0.059869  0.082064
<id>-KP303(UK)   5.0  0.010298  0.003765  0.007773  0.007968  0.008546  0.010439  0.016763

benchmark

  • Benchmarks the protocol parsing performance, comparing the new parser against the old one
% python3 devtools/bench/benchmark.py
New parser, parsing 100000 messages took 0.6339647499989951 seconds
Old parser, parsing 100000 messages took 9.473990250000497 seconds