⚡ scheduled, data-write, http 🏷️ tag1, tag2, tag3 🔧 InfluxDB 3 Core, InfluxDB 3 Enterprise
Brief description of what the plugin does and its primary use case. Include the trigger types supported (write, scheduled, HTTP) and main functionality. Mention any special features or capabilities that distinguish this plugin. Add a fourth sentence if needed for additional context.
Plugin parameters may be specified as key-value pairs in the `--trigger-arguments` flag (CLI) or in the `trigger_arguments` field (API) when creating a trigger. Some plugins support TOML configuration files, which can be specified using the plugin's `config_file_path` parameter.
If a plugin supports multiple trigger specifications, some parameters may depend on the trigger specification that you use.
This plugin includes a JSON metadata schema in its docstring that defines supported trigger types and configuration parameters. This metadata enables the InfluxDB 3 Explorer UI to display and configure the plugin.
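As an illustration, such a schema can be embedded as JSON in the plugin's module docstring. The field names below are hypothetical placeholders, not the authoritative schema; check the influxdb3_plugins repository for the format the Explorer UI actually consumes:

```python
import json

# Hypothetical JSON metadata schema embedded in a plugin docstring.
# Field names here are illustrative only.
PLUGIN_METADATA = '''
{
  "plugin_type": ["scheduled", "onwrite", "http"],
  "scheduled_args_config": [
    {
      "name": "parameter_name",
      "description": "Description of the parameter",
      "required": true
    }
  ]
}
'''

# Tooling can parse the docstring to discover trigger support and parameters:
schema = json.loads(PLUGIN_METADATA)
supported_triggers = schema["plugin_type"]
```

Because the metadata is plain JSON, any tool (not just the Explorer UI) can read it without importing the plugin.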
| Parameter | Type | Default | Description |
|---|---|---|---|
| `parameter_name` | string | required | Description of the parameter |
| `another_param` | integer | required | Description with any constraints or requirements |
| Parameter | Type | Default | Description |
|---|---|---|---|
| `optional_param` | boolean | "false" | Description of optional parameter |
| `timeout` | integer | 30 | Connection timeout in seconds |
| Parameter | Type | Default | Description |
|---|---|---|---|
| `category_param` | string | "default" | Parameters grouped by functionality |
| Parameter | Type | Default | Description |
|---|---|---|---|
| `config_file_path` | string | none | TOML config file path relative to `PLUGIN_DIR` (required for TOML configuration) |
To use a TOML configuration file, set the `PLUGIN_DIR` environment variable and specify the `config_file_path` in the trigger arguments. This is in addition to the `--plugin-dir` flag used when starting InfluxDB 3. Example configuration files are provided for each trigger type:

- `plugin_config_scheduler.toml`: for scheduled triggers
- `plugin_config_data_writes.toml`: for data write triggers
For more information on using TOML configuration files, see the "Using TOML Configuration Files" section in `influxdb3_plugins/README.md`.
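As a sketch, a scheduled-trigger configuration file might look like the following. The keys are the placeholder parameters from the tables above, not real plugin options:

```toml
# plugin_config_scheduler.toml -- placeholder values; replace with your
# plugin's actual parameters.
parameter_name = "value"
another_param = 100
optional_param = false
timeout = 30
```

The trigger would then be created with `--trigger-arguments config_file_path=plugin_config_scheduler.toml`, with `PLUGIN_DIR` set to the directory containing the file.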
The plugin requires [specific data format or schema requirements].
- InfluxDB 3 Core/Enterprise with the Processing Engine enabled
- Python packages: `package_name` (for specific functionality)
1. Start InfluxDB 3 with the Processing Engine enabled (`--plugin-dir /path/to/plugins`):

   ```bash
   influxdb3 serve \
     --node-id node0 \
     --object-store file \
     --data-dir ~/.influxdb3 \
     --plugin-dir ~/.plugins
   ```

2. Install required Python packages (if any):

   ```bash
   influxdb3 install package package_name
   ```
Run the plugin periodically on historical data:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/plugin_name/plugin_name.py \
  --trigger-spec "every:1h" \
  --trigger-arguments 'parameter_name=value,another_param=100' \
  scheduled_trigger_name
```

Process data as it's written:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/plugin_name/plugin_name.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'parameter_name=value,another_param=100' \
  write_trigger_name
```

Process data via HTTP requests:

```bash
influxdb3 create trigger \
  --database mydb \
  --plugin-filename gh:influxdata/plugin_name/plugin_name.py \
  --trigger-spec "request:endpoint" \
  --trigger-arguments 'parameter_name=value,another_param=100' \
  http_trigger_name
```

[Description of what this example demonstrates]
```bash
# Create the trigger
influxdb3 create trigger \
  --database weather \
  --plugin-filename gh:influxdata/plugin_name/plugin_name.py \
  --trigger-spec "every:30m" \
  --trigger-arguments 'parameter_name=value,another_param=100' \
  example_trigger

# Write test data
influxdb3 write \
  --database weather \
  "measurement,tag=value field=22.5"

# Query results (after trigger runs)
influxdb3 query \
  --database weather \
  "SELECT * FROM result_measurement"
```

Expected output:

```
tag   | field | time
------|-------|-----
value | 22.5  | 2024-01-01T00:00:00Z
```
Transformation details:

- Before: `field=22.5` (original value)
- After: `field=22.5` (processed value with description of changes)
[Description of what this example demonstrates]
```bash
# Create trigger with different configuration
influxdb3 create trigger \
  --database sensors \
  --plugin-filename gh:influxdata/plugin_name/plugin_name.py \
  --trigger-spec "all_tables" \
  --trigger-arguments 'parameter_name=different_value,optional_param=true' \
  another_trigger

# Write data with specific format
influxdb3 write \
  --database sensors \
  "raw_data,device=sensor1 value1=20.1,value2=45.2"

# Query processed data
influxdb3 query \
  --database sensors \
  "SELECT * FROM processed_data"
```

Expected output:

```
device  | value1 | value2 | time
--------|--------|--------|-----
sensor1 | 20.1   | 45.2   | 2024-01-01T00:00:00Z
```
Processing details:

- Before: `value1=20.1,value2=45.2`
- After: [Description of any transformations applied]
[Add more examples as needed to demonstrate different features]
- `plugin_name.py`: Main plugin code containing handlers for trigger types
- `plugin_config_scheduler.toml`: Example TOML configuration for scheduled triggers
- `plugin_config_data_writes.toml`: Example TOML configuration for data write triggers
Handles scheduled trigger execution. Queries historical data within the specified window and applies processing logic.
Key operations:
- Parses configuration from arguments
- Queries source data with filters
- Applies processing logic
- Writes results to target measurement
Handles real-time data processing during writes. Processes incoming data batches and applies transformations before writing.
Key operations:
- Filters relevant table batches
- Applies processing to each row
- Writes to target measurement immediately
Handles HTTP-triggered processing. Processes data sent via HTTP requests.
Key operations:
- Parses request body
- Validates input data
- Applies processing logic
- Returns response
- Data Validation: Checks input data format and required fields
- Processing: Applies the main plugin logic to transform/analyze data
- Output Generation: Formats results and metadata
- Error Handling: Manages exceptions and provides meaningful error messages
```
Plugin Module
├── process_scheduled_call()   # Scheduled trigger handler
├── process_writes()           # Data write trigger handler
├── process_http_request()     # HTTP trigger handler
├── validate_config()          # Configuration validation
├── apply_processing()         # Core processing logic
└── helper_functions()         # Utility functions
```
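This layout can be sketched in Python. The handler signatures below are assumptions based on Processing Engine conventions, and `validate_config` plus the parameter names are placeholders to be replaced with the plugin's actual logic:

```python
# Minimal sketch of the module layout above. Handler signatures are
# assumptions; REQUIRED_PARAMS and validate_config are placeholders.

REQUIRED_PARAMS = {"parameter_name", "another_param"}

def validate_config(args):
    """Raise ValueError if a required trigger argument is missing."""
    args = args or {}
    missing = REQUIRED_PARAMS - set(args)
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")
    return args

def process_scheduled_call(influxdb3_local, call_time, args=None):
    """Scheduled trigger: query a historical window, process, write results."""
    config = validate_config(args)
    # influxdb3_local.query(...) / influxdb3_local.write(...) would go here.

def process_writes(influxdb3_local, table_batches, args=None):
    """Data write trigger: transform incoming batches before writing."""
    config = validate_config(args)
    for table_batch in table_batches:
        pass  # filter relevant tables, process each row

def process_http_request(influxdb3_local, request_body, args=None):
    """HTTP trigger: parse the request body, process, return a response."""
    config = validate_config(args)
    return {"status": "ok"}
```

Validating arguments in one shared helper keeps the three handlers consistent: a misconfigured trigger fails fast with a clear message instead of partially processing data.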
Solution: Check that all required parameters are provided in the trigger arguments. Verify parameter names match exactly (case-sensitive).

Solution: Ensure the plugin file has execute permissions:

```bash
chmod +x ~/.plugins/plugin_name.py
```

Solution: Install required Python packages:

```bash
influxdb3 install package package_name
```

Solution:

- Check that the source measurement contains data
- Verify the trigger is enabled and running
- Check logs for errors (logs are stored in the trigger's database):

```bash
influxdb3 query \
  --database YOUR_DATABASE \
  "SELECT * FROM system.processing_engine_logs WHERE trigger_name = 'your_trigger_name' ORDER BY event_time DESC"
```
- Enable debug logging: Add `debug=true` to trigger arguments
- Use dry run mode: Set `dry_run=true` to test without writing data
- Check field names: Use `SHOW FIELD KEYS FROM measurement` to verify field names
- Test with small windows: Use short time windows for testing (e.g., `window=1h`)
- Monitor resource usage: Check CPU and memory usage during processing
- Processing large datasets may require increased memory
- Use filters to process only relevant data
- Batch size affects memory usage and processing speed
- Consider using `specific_fields` to limit processing scope
- Cache frequently accessed data when possible
For questions or comments about this plugin, please open an issue in the influxdb3_plugins repository.
After making changes to this README, sync to the documentation site:
- Commit your changes to the influxdb3_plugins repository
- Look for the sync reminder - A comment will appear on your commit with a sync link
- Click the link - This opens a pre-filled form to trigger the docs-v2 sync
- Submit the sync request - A workflow will validate, transform, and create a PR
The documentation site will be automatically updated with your changes after review.
When using this template:
- Replace `Plugin Name` with the actual plugin name
- Update emoji metadata with appropriate trigger types and tags
- Fill in all parameter tables with actual configuration options
- Provide real, working examples with expected output
- Include actual function names and signatures
- Add plugin-specific troubleshooting scenarios
- Remove any sections that don't apply to your plugin
- Remove this "Template Usage Notes" section from the final README
- Description: 2-4 sentences, be specific about capabilities
- Configuration: Group parameters logically, mark required clearly
- Examples: At least 2 complete, working examples
- Expected output: Show actual output format
- Troubleshooting: Include plugin-specific issues and solutions
- Code overview: Document main functions and logic flow