Compare commits

..

229 commits

Author SHA1 Message Date
github-actions[bot]
994eecdd3d docs: add version snapshot v0.27.0 and cleanup old versions [skip ci] 2026-03-29 18:59:51 +00:00
Julian Pawlowski
4d822030f9 fix(ci): bump Python to 3.14 and pin uv venv to setup-python interpreter
homeassistant==2026.3.4 requires Python>=3.14.2. The lint workflow was
specifying Python 3.13, and uv venv was ignoring actions/setup-python and
picking up the system Python (3.14.0) instead.

Changes:
- lint.yml: python-version 3.13 → 3.14
- bootstrap: uv venv now uses $(which python) to respect
  actions/setup-python and local pyenv/asdf setups

Impact: lint workflow no longer fails with an unsatisfiable Python
version error when installing homeassistant.
2026-03-29 18:55:23 +00:00
Julian Pawlowski
b92becdf8f chore(release): bump version to 0.27.0 2026-03-29 18:49:21 +00:00
Julian Pawlowski
566ccf4017 fix(scripts): anchor grep pattern to prevent false tag match
grep -q "refs/tags/$TAG" matched substrings, so v0.27.0b0
would block release of v0.27.0. Changed to "refs/tags/${TAG}$"
to require exact end-of-line match.
2026-03-29 18:49:18 +00:00
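The substring pitfall this commit fixes can be reproduced in Python (the actual fix is in a shell script using grep; this regex sketch only illustrates the same anchoring behavior):

```python
import re

# A git ls-remote line for the beta tag. Without an end anchor, the
# release tag pattern matches as a substring of the beta tag and the
# release is wrongly blocked.
ls_remote_line = "abc123\trefs/tags/v0.27.0b0"
tag = "v0.27.0"

unanchored = re.search(re.escape(f"refs/tags/{tag}"), ls_remote_line) is not None
anchored = re.search(re.escape(f"refs/tags/{tag}") + "$", ls_remote_line) is not None

print(unanchored, anchored)  # True False
```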
Julian Pawlowski
0381749e6f fix(interval_pool): fix DST spring-forward causing missing tomorrow intervals
_get_cached_intervals() used fixed-offset datetimes from fromisoformat()
for iteration. When start and end boundaries span a DST transition (e.g.,
+01:00 CET → +02:00 CEST), the loop's end check compared UTC values,
stopping 1 hour early on spring-forward days.

This caused the last 4 quarter-hourly intervals of "tomorrow" to be
missing, making the binary sensor "Tomorrow data available" show Off
even when full data was present.

Changed iteration to use naive local timestamps, matching the index key
format (timezone stripped via [:19]). The end boundary comparison now
works correctly regardless of DST transitions.

Impact: Binary sensor "Tomorrow data available" now correctly shows On
on DST spring-forward days. Affects all European users on the last
Sunday of March each year.
2026-03-29 18:42:27 +00:00
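The one-hour shortfall is easy to demonstrate: comparing aware, fixed-offset datetimes happens in UTC terms, while the index keys are naive local strings. A minimal sketch (timestamps are illustrative):

```python
from datetime import datetime

# The local day of the spring-forward transition (CET -> CEST) spans
# only 23 hours when aware datetimes are compared (comparison is in UTC):
start = datetime.fromisoformat("2026-03-29T00:00:00+01:00")
end = datetime.fromisoformat("2026-03-30T00:00:00+02:00")
utc_hours = (end - start).total_seconds() / 3600  # 23.0 -> loop stops 4 intervals early

# Stripping the offset (as the index key format does via [:19]) yields
# naive local timestamps, and the local day is 24 hours again:
naive_start = datetime.fromisoformat("2026-03-29T00:00:00+01:00"[:19])
naive_end = datetime.fromisoformat("2026-03-30T00:00:00+02:00"[:19])
local_hours = (naive_end - naive_start).total_seconds() / 3600  # 24.0
```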
Julian Pawlowski
00a653396c fix(translations): update API token instructions to use placeholder for Tibber URL 2026-03-29 18:19:42 +00:00
Julian Pawlowski
dbe73452f7 fix(devcontainer): update Python version to 3.14 in devcontainer configuration
fix(pyproject): require Python version 3.14 in project settings
2026-03-29 18:19:33 +00:00
Julian Pawlowski
9123903b7f fix(bootstrap): update default Home Assistant version to 2026.3.4 2026-03-29 18:04:50 +00:00
dependabot[bot]
5cab2a37b0
chore(deps): bump actions/deploy-pages from 4 to 5 (#95)

Bumps [actions/deploy-pages](https://github.com/actions/deploy-pages) from 4 to 5.
- [Release notes](https://github.com/actions/deploy-pages/releases)
- [Commits](https://github.com/actions/deploy-pages/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/deploy-pages
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-28 15:52:47 +01:00
dependabot[bot]
e796660112
chore(deps): bump actions/configure-pages from 5 to 6 (#96)
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 5 to 6.
- [Release notes](https://github.com/actions/configure-pages/releases)
- [Commits](https://github.com/actions/configure-pages/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/configure-pages
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-28 15:51:35 +01:00
dependabot[bot]
719344e11f
chore(deps-dev): bump setuptools from 82.0.0 to 82.0.1 (#88)
Bumps [setuptools](https://github.com/pypa/setuptools) from 82.0.0 to 82.0.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v82.0.0...v82.0.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-version: 82.0.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-22 21:12:08 +01:00
dependabot[bot]
a59096eeff
chore(deps): bump astral-sh/setup-uv from 7.3.1 to 7.6.0 (#92)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 7.3.1 to 7.6.0.
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](5a095e7a20...37802adc94)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 7.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-22 21:11:41 +01:00
dependabot[bot]
afd626af05
chore(deps): bump home-assistant/actions (#93)
Bumps [home-assistant/actions](https://github.com/home-assistant/actions) from dce0e860c68256ef2902ece06afa5401eb4674e1 to d56d093b9ab8d2105bc0cb6ee9bcc0ef4ec8b96d.
- [Release notes](https://github.com/home-assistant/actions/releases)
- [Commits](dce0e860c6...d56d093b9a)

---
updated-dependencies:
- dependency-name: home-assistant/actions
  dependency-version: d56d093b9ab8d2105bc0cb6ee9bcc0ef4ec8b96d
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-22 21:11:16 +01:00
dependabot[bot]
e429dcf945
chore(deps): bump astral-sh/setup-uv from 7.3.0 to 7.3.1 (#87)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-28 11:48:42 +01:00
dependabot[bot]
86c28acead
chore(deps): bump home-assistant/actions (#86)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-28 11:48:27 +01:00
Julian Pawlowski
92520051e4 fix(devcontainer): remove unused VS Code extensions from configuration
2026-02-16 16:25:16 +00:00
dependabot[bot]
ee7fc623a7
chore(deps-dev): bump setuptools from 80.10.2 to 82.0.0 (#85)
2026-02-10 16:14:58 +01:00
dependabot[bot]
da64cc4805
chore(deps): bump astral-sh/setup-uv from 7.2.1 to 7.3.0 (#84)
2026-02-07 11:27:27 +01:00
dependabot[bot]
981089fe68
chore(deps): update ruff requirement (#83)
2026-02-04 07:56:44 +01:00
dependabot[bot]
d3f3975204
chore(deps): bump astral-sh/setup-uv from 7.2.0 to 7.2.1 (#81)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 7.2.0 to 7.2.1.
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](61cb8a9741...803947b9bd)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 7.2.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 22:39:55 +01:00
dependabot[bot]
49cdb2c28a
chore(deps-dev): bump setuptools from 80.10.1 to 80.10.2 (#80) 2026-01-27 09:02:44 +01:00
dependabot[bot]
73b7f0b2ca
chore(deps): bump home-assistant/actions (#79) 2026-01-27 09:02:27 +01:00
dependabot[bot]
152f104ef0
chore(deps): bump actions/setup-python from 6.1.0 to 6.2.0 (#78) 2026-01-23 08:51:12 +01:00
dependabot[bot]
72b42460a0
chore(deps-dev): bump setuptools from 80.9.0 to 80.10.1 (#77) 2026-01-22 06:57:04 +01:00
Julian Pawlowski
1bf031ba19 fix(options_flow): enhance translation handling for config fields and update language fallback 2026-01-21 18:35:19 +00:00
Julian Pawlowski
89880c7755 chore(release): bump version to 0.27.0b0 2026-01-21 17:37:35 +00:00
Julian Pawlowski
631cebeb55 feat(config_flow): show override warnings when config entities control settings
When runtime config override entities (number/switch) are enabled,
the Options Flow now displays warning indicators at the top of each
affected section. Users see which fields are being managed by config
entities and can still edit the base values if needed.

Changes:
- Add ConstantSelector warnings in Best Price/Peak Price sections
- Implement multi-language support for override warnings (de, en, nb, nl, sv)
- Add _get_override_translations() to load translated field labels
- Add _get_active_overrides() to detect enabled override entities
- Extend get_best_price_schema/get_peak_price_schema with translations param
- Add 14 number/switch config entities for runtime period tuning
- Document runtime configuration entities in user docs

Warning format adapts to overridden fields:
- Single: "⚠️ Flexibility controlled by config entity"
- Multiple: "⚠️ Flexibility and Minimum Distance controlled by config entity"

Impact: Users can now dynamically adjust period calculation parameters
via Home Assistant automations, scripts, or dashboards without entering
the Options Flow. Clear UI indicators show which settings are currently
overridden.
2026-01-21 17:36:51 +00:00
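The adaptive warning text can be sketched as a small helper (a hypothetical simplification; the real _get_override_translations()/_get_active_overrides() plumbing and per-language templates are omitted):

```python
def format_override_warning(field_labels):
    """Join the overridden field labels into the warning format
    described above. Names and wording are illustrative."""
    return "⚠️ " + " and ".join(field_labels) + " controlled by config entity"

print(format_override_warning(["Flexibility"]))
# ⚠️ Flexibility controlled by config entity
print(format_override_warning(["Flexibility", "Minimum Distance"]))
# ⚠️ Flexibility and Minimum Distance controlled by config entity
```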
Julian Pawlowski
cc75bc53ee feat(services): add average indicator for hourly resolution in charts
Add visual indicators to distinguish hourly aggregated data from
original 15-minute interval data in ApexCharts output.

Changes:
- Chart title: Append localized suffix like "(Ø hourly)" / "(Ø stündlich)"
- Y-axis label: Append "(Ø)" suffix, e.g., "øre/kWh (Ø)"

The suffix pattern avoids confusion with Scandinavian currency symbols
(øre/öre) which look similar to the average symbol (Ø) when used as prefix.

Added hourly_suffix translations for all 5 languages (en, de, sv, nb, nl).

Impact: Users can now clearly see when a chart displays averaged hourly
data rather than original 15-minute prices.
2026-01-20 16:44:18 +00:00
Julian Pawlowski
b541f7b15e feat(apexcharts): add legend toggle for best/peak price overlays
Implement clickable legend items to show/hide best/peak price period
overlays in generated ApexCharts YAML configuration.

Legend behavior by configuration:
- Only best price: No legend (overlay always visible)
- Only peak price: Legend shown, peak toggleable (starts hidden)
- Both enabled: Legend shown, both toggleable (best visible, peak hidden)

Changes:
- Best price overlay: in_legend only when peak also enabled
- Peak price overlay: always in_legend with hidden_by_default: true
- Enable experimental.hidden_by_default when peak price active
- Price level series (LOW/NORMAL/HIGH): hidden from legend when
  overlays active, visible otherwise (preserves easy legend enable)
- Add triangle icons (▼/▲) before overlay names for visual distinction
- Custom legend markers (size: 0) only when overlays active
- Increased itemMargin for better visual separation

Impact: Users can toggle best/peak price period visibility directly
in the chart via legend click. Without overlays, legend behavior
unchanged - users can still enable it by setting show: true.
2026-01-20 16:27:14 +00:00
Julian Pawlowski
2f36c73c18 feat(services): add hourly resolution option for chart data services
Add resolution parameter to get_chartdata and get_apexcharts_yaml services,
allowing users to choose between original 15-minute intervals or aggregated
hourly values for chart visualization.

Implementation uses rolling 5-interval window aggregation (-2, -1, 0, +1, +2
around :00 of each hour = 60 minutes total), matching the sensor rolling
hour methodology. Respects user's CONF_AVERAGE_SENSOR_DISPLAY setting for
mean vs median calculation.

Changes:
- formatters.py: Add aggregate_to_hourly() function preserving original
  field names (startsAt, total, level, rating_level) for unified processing
- get_chartdata.py: Pre-aggregate data before processing when resolution is
  'hourly', enabling same code path for filters/insert_nulls/connect_segments
- get_apexcharts_yaml.py: Add resolution parameter, pass to all 4 get_chartdata
  service calls in generated JavaScript
- services.yaml: Add resolution field with interval/hourly selector
- icons.json: Add section icons for get_apexcharts_yaml fields
- translations: Add highlight_peak_price and resolution field translations
  for all 5 languages (en, de, sv, nb, nl)

Impact: Users can now generate cleaner charts with 24 hourly data points
instead of 96 quarter-hourly intervals. The unified processing approach
ensures all chart features (filters, null insertion, segment connection)
work identically for both resolutions.
2026-01-20 15:51:34 +00:00
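The rolling-window aggregation can be sketched as follows. This is an illustrative simplification: it applies a plain mean/median over the five intervals around each :00 anchor, whereas the real aggregate_to_hourly() may weight the edge intervals so the window totals exactly 60 minutes as stated above.

```python
from statistics import mean, median

def aggregate_to_hourly(intervals, use_median=False):
    """Aggregate sorted 15-minute intervals to hourly values using a
    5-interval window (-2..+2) centered on each :00 timestamp."""
    agg = median if use_median else mean
    hourly = []
    for i, iv in enumerate(intervals):
        if iv["startsAt"][14:16] != "00":   # only anchor on full hours
            continue
        window = intervals[max(0, i - 2): i + 3]
        hourly.append({"startsAt": iv["startsAt"],
                       "total": agg(x["total"] for x in window)})
    return hourly
```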
Julian Pawlowski
1b22ce3f2a feat(config_flow): add entity status checks to options flow pages
Added dynamic warnings when users configure settings for sensors that
are currently disabled. This improves UX by informing users that their
configuration changes won't have any visible effect until they enable
the relevant sensors.

Changes:
- Created entity_check.py helper module with sensor-to-step mappings
- Added check_relevant_entities_enabled() to detect disabled sensors
- Integrated warnings into 6 options flow steps (price_rating,
  price_level, best_price, peak_price, price_trend, volatility)
- Made Chart Data Export info page content-aware: shows configuration
  guide when sensor is enabled, shows enablement instructions when disabled
- Updated all 5 translation files (de, en, nb, nl, sv) with dynamic
  placeholders {entity_warning} and {sensor_status_info}

Impact: Users now receive clear feedback when configuring settings for
disabled sensors, reducing confusion about why changes aren't visible.
Chart Data Export page now provides context-appropriate guidance.
2026-01-20 13:59:07 +00:00
Julian Pawlowski
5fc1f4db33 feat(sensors): add 5-level price trend scale with configurable thresholds
Extends trend sensors from 3-level (rising/stable/falling) to 5-level scale
(strongly_rising/rising/stable/falling/strongly_falling) for finer granularity.

Changes:
- Add PRICE_TREND_MAPPING with integer values (-2, -1, 0, +1, +2) matching
  PRICE_LEVEL_MAPPING pattern for consistent automation comparisons
- Add configurable thresholds for strongly_rising (default: 6%) and
  strongly_falling (default: -6%) independent from base thresholds
- Update calculate_price_trend() to return 3-tuple: (trend_state, diff_pct, trend_value)
- Add trend_value attribute to all trend sensors for numeric comparisons
- Update sensor entity descriptions with 5-level options
- Add validation with cross-checks (strongly_rising > rising, etc.)
- Update icons: chevron-double-up/down for strong trends, trending-up/down for normal

Files changed:
- const.py: PRICE_TREND_* constants, PRICE_TREND_MAPPING, config constants
- utils/price.py: Extended calculate_price_trend() signature and return value
- sensor/calculators/trend.py: Pass new thresholds, handle 3-tuple return
- sensor/definitions.py: 5-level options for all 9 trend sensors
- sensor/core.py: 5-level icon mapping
- entity_utils/icons.py: 5-level trend icons
- config_flow_handlers/: validators, schemas, options_flow for new settings
- translations/*.json: Labels and error messages (en, de, nb, sv, nl)
- tests/test_percentage_calculations.py: Updated for 3-tuple return

Impact: Users get more nuanced trend information for automation decisions.
New trend_value attribute enables numeric comparisons (e.g., > 0 for any rise).
Existing automations using "rising"/"falling"/"stable" continue to work.
2026-01-20 13:36:01 +00:00
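The 5-level classification and 3-tuple return can be sketched as below. The ±6% strong-trend defaults come from the commit message; the base rising/falling thresholds (±2%) are assumed for illustration:

```python
PRICE_TREND_MAPPING = {
    "strongly_falling": -2, "falling": -1, "stable": 0,
    "rising": 1, "strongly_rising": 2,
}

def classify_trend(diff_pct, rising=2.0, falling=-2.0,
                   strongly_rising=6.0, strongly_falling=-6.0):
    """Return (trend_state, diff_pct, trend_value) as described above.
    Base thresholds are illustrative defaults."""
    if diff_pct >= strongly_rising:
        state = "strongly_rising"
    elif diff_pct >= rising:
        state = "rising"
    elif diff_pct <= strongly_falling:
        state = "strongly_falling"
    elif diff_pct <= falling:
        state = "falling"
    else:
        state = "stable"
    return state, diff_pct, PRICE_TREND_MAPPING[state]
```

The numeric trend_value enables comparisons like `trend_value > 0` to match any rise, strong or not.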
Julian Pawlowski
972cbce1d3 chore(release): bump version to 0.26.0 2026-01-20 12:40:37 +00:00
Julian Pawlowski
f88d6738e6 fix(validation): enhance user data validation to require active subscription and price info.
Fixes #73
2026-01-20 12:33:45 +00:00
Julian Pawlowski
4b32568665 fix(tests): include current_interval_price_base in interval sensors and remove from known exceptions 2026-01-20 12:06:10 +00:00
dependabot[bot]
4ceff6cf5f
chore(deps): bump astral-sh/setup-uv from 7.1.6 to 7.2.0 (#72) 2026-01-07 12:39:25 +01:00
Julian Pawlowski
285258c325 chore: remove country list from hacs.json 2025-12-28 11:30:02 +00:00
Julian Pawlowski
3e6bcf2345 fix(sensor): synchronize current_interval_price_base with current_interval_price
Fixed inconsistency between "Current Electricity Price" and "Current Electricity Price
(Energy Dashboard)" sensors that were showing different prices and icons.

Changes:
- Add current_interval_price_base to TIME_SENSITIVE_ENTITY_KEYS so it updates at
  quarter-hour boundaries instead of only on API polls. This ensures both sensors
  update synchronously when a new 15-minute interval starts.
- Use interval_data["startsAt"] as timestamp for current interval price sensors
  (both variants) instead of rounded calculation time. This prevents timestamp
  divergence when sensors update at slightly different times.
- Include current_interval_price_base in icon color attribute mapping so both
  sensors display the same dynamic cash icon based on current price level.
- Include current_interval_price_base in dynamic icon function so it gets the
  correct icon based on current price level (VERY_CHEAP/CHEAP/NORMAL/EXPENSIVE).

Impact: Both sensors now show identical prices, timestamps, and icons as intended.
They update synchronously at interval boundaries (00, 15, 30, 45 minutes) and
correctly represent the Energy Dashboard compatible variant without lag or
inconsistencies.
2025-12-26 16:23:05 +00:00
Julian Pawlowski
0a4af0de2f feat(sensor): convert timing sensors to hour-based display with minute attributes
Convert best_price and peak_price timing sensors to display in hours (UI-friendly)
while retaining minute values in attributes (automation-friendly). This improves
readability in dashboards by using Home Assistant's automatic duration formatting
"1 h 35 min" instead of decimal "1.58 h".

BREAKING CHANGE: State unit changed from minutes to hours for 6 timing sensors.

Affected sensors:
  * best_price_period_duration, best_price_remaining_minutes, best_price_next_in_minutes
  * peak_price_period_duration, peak_price_remaining_minutes, peak_price_next_in_minutes

Migration guide for users:
  - If your automations use {{ state_attr(..., 'remaining_time') }} or similar:
    No action needed - attribute values remain in minutes
  - If your automations use {{ states('sensor.best_price_remaining_minutes') }} directly:
    Update to use the minute attribute instead: {{ state_attr('sensor.best_price_remaining_minutes', 'remaining_minutes') }}
  - If your dashboards display the state value:
    Values now show as "1 h 35 min" instead of "95" - this is the intended improvement
  - If your templates do math with the state: multiply by 60 to convert hours back to minutes
    Before: remaining * 60
    After: remaining_minutes (use attribute directly)

Implementation details:
- Timing sensors now use device_class=DURATION, unit=HOURS, precision=2
- State values converted from minutes to hours via _minutes_to_hours()
- New minute-precision attributes added for automation compatibility:
  * period_duration_minutes (for checking if period is long enough)
  * remaining_minutes (for countdown-based automation logic)
  * next_in_minutes (for time-to-event automation triggers)
- Translation improvements across all 5 languages (en, de, nb, nl, sv):
  * Descriptions now clarify state in hours vs attributes in minutes
  * Long descriptions explain dual-format architecture
  * Usage tips updated to reference minute attributes for automations
  * All translation files synchronized (fixed order, removed duplicates)
- Type safety: Added type assertions (cast) for timing calculator results to
  satisfy Pyright type checking (handles both float and datetime return types)

Home Assistant now automatically formats these durations as "1 h 35 min" for improved
UX, matching the behavior of battery.remaining_time and other duration sensors.

Rationale for breaking change:
The previous minute-based state was unintuitive for users ("95 minutes" doesn't
immediately convey "1.5 hours") and didn't match Home Assistant's standard duration
formatting. The new hour-based state with minute attributes provides:
- Better UX: Automatic "1 h 35 min" formatting in UI
- Full automation compatibility: Minute attributes for all calculation needs
- Consistency: Matches HA's duration sensor pattern (battery, timer, etc.)

Impact: Timing sensors now display in human-readable hours with full backward
compatibility via minute attributes. Users relying on direct state access must
migrate to minute attributes (simple change, documented above).
2025-12-26 16:03:00 +00:00
Julian Pawlowski
09a50dccff fix(sensor): streamline lifecycle attrs and next poll visibility
- Remove pool stats/fetch-age from lifecycle sensor to avoid stale data under state-change filtering; add `next_api_poll` for transparency.
- Clean lifecycle calculator by dropping unused helpers/constants and delete the obsolete cache age test.
- Clarify lifecycle state is diagnostics-only in coordinator comments, keep state-change filtering in timer test, and retain quarter-hour precision notes in constants.
- Keep sensor core aligned with lifecycle state filtering.

Impact: Lifecycle sensor now exposes only state-relevant fields without recorder noise, next API poll is visible, and dead code/tests tied to removed attributes are gone.
2025-12-26 12:13:36 +00:00
Julian Pawlowski
665fac10fc feat(services): add peak price overlay toggle to ApexCharts YAML
Added `highlight_peak_price` (default: false) to `get_apexcharts_yaml` service
and implemented a subtle red overlay analogous to best price periods using
`period_filter: 'peak_price'`. Tooltips now dynamically exclude overlay
series to prevent overlay tooltips.

Impact: Users can visualize peak-price periods in ApexCharts cards
when desired, with default opt-out behavior.
2025-12-26 00:07:28 +00:00
Julian Pawlowski
c6b34984fa chore: Remove outdated documentation for sensors and troubleshooting in version v0.25.0b0; update versioning logic to skip documentation versioning for beta releases. 2025-12-25 23:06:27 +00:00
github-actions[bot]
3624f1c9a8 docs: add version snapshot v0.25.0b0 and cleanup old versions [skip ci] 2025-12-25 22:54:51 +00:00
Julian Pawlowski
3968dba9d2 chore(release): enhance version parsing to support beta/prerelease suffix 2025-12-25 22:50:12 +00:00
Julian Pawlowski
3157c6f0df chore(release): bump version to 0.25.0b0 2025-12-25 22:48:07 +00:00
Julian Pawlowski
e851cb0670 chore(release): enhance version format validation to support prerelease tags 2025-12-25 22:48:01 +00:00
Julian Pawlowski
15e09fa210 docs(user): unify entity ID examples and add "Entity ID tip" across guides
Added a consistent "Entity ID tip" block and normalized all example
entity IDs to the `<home_name>` placeholder across user docs. Updated
YAML and example references to current entity naming (e.g.,
`sensor.<home_name>_current_electricity_price`,
`sensor.<home_name>_price_today`,
`sensor.<home_name>_today_s_price_volatility`,
`binary_sensor.<home_name>_best_price_period`, etc.). Refreshed
automation examples to use language-independent attributes (e.g.
`price_volatility`) and improved robustness. Aligned ApexCharts examples
to use `sensor.<home_name>_chart_metadata` and corrected references for
tomorrow data availability.

Changed files:
- docs/user/docs/actions.md
- docs/user/docs/automation-examples.md
- docs/user/docs/chart-examples.md
- docs/user/docs/configuration.md
- docs/user/docs/dashboard-examples.md
- docs/user/docs/dynamic-icons.md
- docs/user/docs/faq.md
- docs/user/docs/icon-colors.md
- docs/user/docs/period-calculation.md
- docs/user/docs/sensors.md

Impact: Clearer, language-independent examples that reduce confusion and
prevent brittle automations; easier copy/paste adaptation across setups;
more accurate guidance for chart configuration and period/volatility usage.
2025-12-25 19:20:37 +00:00
Julian Pawlowski
c6d6e4a5b2 fix(volatility): expose price coefficient variation attribute
Expose the `price_coefficient_variation_%` value across period statistics, binary sensor attributes, and the volatility calculator, and refresh the volatility descriptions/translations to mention the coefficient-of-variation metric.
2025-12-25 19:10:42 +00:00
Julian Pawlowski
23b4330b9a fix(coordinator): track API calls separately from cached data usage
The lifecycle sensor was always showing "fresh" state because
_last_price_update was set on every coordinator update, regardless of
whether data came from API or cache.

Changes:
- interval_pool/manager.py: get_intervals() and get_sensor_data() now
  return tuple[data, bool] where bool indicates actual API call
- coordinator/price_data_manager.py: All fetch methods propagate
  api_called flag through the call chain
- coordinator/core.py: Only update _last_price_update when api_called=True,
  added debug logging to distinguish API calls from cached data
- services/get_price.py: Updated to handle new tuple return type

Impact: Lifecycle sensor now correctly shows "cached" during normal
15-minute updates (using pool cache) and only "fresh" within 5 minutes
of actual API calls. This fixes the issue where the sensor would never
leave the "fresh" state during frequent HA restarts or normal operation.
2025-12-25 18:53:29 +00:00
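The tuple-return pattern that distinguishes cache hits from real API calls can be sketched as follows (names and the cache check are illustrative, not the integration's actual code):

```python
def get_intervals(pool, fetch):
    """Return (data, api_called): api_called is True only when the
    fetch callable actually ran, so the coordinator can decide
    whether to update _last_price_update."""
    if pool.get("intervals") is not None:
        return pool["intervals"], False      # served from pool cache
    data = fetch()
    pool["intervals"] = data
    return data, True                        # genuine API call

pool = {}
_, api_called = get_intervals(pool, lambda: [1, 2, 3])   # first call fetches
_, cached = get_intervals(pool, lambda: [1, 2, 3])       # second call is cached
```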
Julian Pawlowski
81ebfb4916 feat(devcontainer): add Google Code Assistant and OpenAI ChatGPT extensions 2025-12-25 12:02:55 +00:00
Copilot
a437d22b7a
Fix flex filter excluding valid low-price intervals in best price periods (#68)
Fixed bug in best price flex filter that incorrectly excluded prices
when checking for periods. The filter was requiring price >= daily_min,
which is unnecessary and could theoretically exclude valid low prices.

Changed from:
  in_flex = price >= criteria.ref_price and price <= flex_threshold

To:
  in_flex = price <= flex_threshold

This ensures all low prices up to the threshold are included in best
price period consideration, matching the expected behavior described
in the period calculation documentation.

The fix addresses the user's observation that qualifying intervals
appearing after the daily minimum in chronological order should be
included if they meet the flex criteria.
2025-12-25 09:49:31 +01:00
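A minimal sketch of the corrected filter, assuming flex_threshold is the daily minimum scaled by the flexibility factor (the exact threshold formula in the integration may differ):

```python
def in_flex(price, daily_min, flexibility):
    """Return True if the price qualifies for a best price period.
    Old (buggy) check: price >= ref_price and price <= flex_threshold.
    Fixed check: only the upper bound matters."""
    flex_threshold = daily_min * (1 + flexibility)
    return price <= flex_threshold
```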
Julian Pawlowski
9eea984d1f refactor(coordinator): remove price_data from cache, delegate to Pool
Cache now stores only user metadata and timestamps. Price data is
managed exclusively by IntervalPool (single source of truth).

Changes:
- cache.py: Remove price_data and last_price_update fields
- core.py: Remove _cached_price_data, update references to use Pool
- core.py: Rename _data_fetcher to _price_data_manager
- AGENTS.md: Update class naming examples (DataFetcher → PriceDataManager)

This completes the Pool integration architecture where IntervalPool
handles all price data persistence and coordinator cache handles
only user account metadata.
2025-12-23 14:15:26 +00:00
Julian Pawlowski
9b34d416bc feat(services): add debug_clear_tomorrow for testing refresh cycle
Add debug service to clear tomorrow data from interval pool, enabling
testing of tomorrow data refresh cycle without waiting for next day.

Service available only in DevContainer (TIBBER_PRICES_DEV=1 env var).
Removes intervals from both Pool index and coordinator.data["priceInfo"]
so sensors properly show "unknown" state.

Changes:
- Add debug_clear_tomorrow.py service handler
- Register conditionally based on TIBBER_PRICES_DEV env var
- Add service schema and translations
- Set TIBBER_PRICES_DEV=1 in devcontainer.json

Usage: Developer Tools → Services → tibber_prices.debug_clear_tomorrow

Impact: Enables rapid testing of tomorrow data refresh cycle during
development without waiting or restarting HA.
2025-12-23 14:13:51 +00:00
Julian Pawlowski
cfc7cf6abc refactor(coordinator): replace DataFetcher with PriceDataManager
Rename and refactor data_fetching.py → price_data_manager.py to reflect
actual responsibilities:
- User data: Fetches directly via API, validates, caches
- Price data: Delegates to IntervalPool (single source of truth)

Key changes:
- Add should_fetch_tomorrow_data() for intelligent API call decisions
- Add include_tomorrow parameter to prevent API spam before 13:00
- Remove cached_price_data property (Pool is source of truth)
- Update tests to use new class name

Impact: Clearer separation of concerns, reduced API calls through
intelligent tomorrow data fetching logic.
2025-12-23 14:13:43 +00:00
Julian Pawlowski
78df8a4b17 refactor(lifecycle): integrate with Pool for sensor metrics
Replace cache-based metrics with Pool as single source of truth:
- get_cache_age_minutes() → get_sensor_fetch_age_minutes() (from Pool)
- Remove get_cache_validity_status(), get_data_completeness_status()
- Add get_pool_stats() for comprehensive pool statistics
- Add has_tomorrow_data() using Pool as source

Attributes now show:
- sensor_intervals_count/expected/has_gaps (protected range)
- cache_intervals_total/limit/fill_percent/extra (entire pool)
- last_sensor_fetch, cache_oldest/newest_interval timestamps
- tomorrow_available based on Pool state

Impact: More accurate lifecycle status, consistent with Pool as source
of truth, cleaner diagnostic information.
2025-12-23 14:13:34 +00:00
Julian Pawlowski
7adc56bf79 fix(interval_pool): prevent external mutation of cached intervals
Return shallow copies from _get_cached_intervals() to prevent external
code (e.g., parse_all_timestamps()) from mutating Pool internal cache.
This fixes TypeError in check_coverage() caused by datetime objects in
cached interval dicts.

Additional improvements:
- Add TimeService support for time-travel testing in cache/manager
- Normalize startsAt to consistent format (handles datetime vs string)
- Rename detect_gaps() → check_coverage() for clarity
- Add get_sensor_data() for sensor data fetching with fetch/return separation
- Add get_pool_stats() for lifecycle sensor metrics

Impact: Fixes critical cache mutation bug, enables time-travel testing,
improves pool API for sensor integration.
2025-12-23 14:13:24 +00:00
Julian Pawlowski
94615dc6cd refactor(interval_pool): improve reliability and test coverage
Added async_shutdown() method for proper cleanup on unload - cancels
debounce and background tasks to prevent orphaned task leaks.

Added Phase 1.5 to GC: removes empty fetch groups after dead interval
cleanup, with index rebuild to maintain consistency.

Added update_batch() to TimestampIndex for efficient batch updates.
Touch operations now use batch updates instead of N remove+add calls.

Rewrote memory leak tests for modular architecture - all 9 tests now
pass using new component APIs (cache, index, gc).

Impact: Prevents task leaks on HA restart/reload, reduces memory
overhead from empty groups, improves touch operation performance.
2025-12-23 10:10:35 +00:00
github-actions[bot]
fc64aecdd9 docs: add version snapshot v0.24.0 and cleanup old versions [skip ci] 2025-12-22 23:42:52 +00:00
Julian Pawlowski
db0de2376b chore(release): bump version to 0.24.0 2025-12-22 23:40:14 +00:00
Julian Pawlowski
4971ab92d6 fix(chartdata): use proportional padding for yaxis bounds
Changed from fixed padding (0.5ct below min, 1ct above max) to
proportional padding based on data range (8% below, 15% above).

This ensures consistent visual "airiness" across all price ranges,
whether prices are at 30ct or 150ct. Both subunit (ct/øre) and
base currency (€/kr) now use the same proportional logic.

Previous fixed padding looked too tight on charts with large price
ranges (e.g., 0.6€-1.5€) compared to charts with small ranges
(e.g., 28-35ct).

Impact: Chart metadata sensor provides better-scaled yaxis_min/yaxis_max
values for all chart cards, making price visualizations more readable
with appropriate whitespace around data regardless of price range.
2025-12-22 23:39:35 +00:00
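The proportional padding rule (8% of the data range below the minimum, 15% above the maximum) can be sketched like this; `yaxis_bounds` is a hypothetical helper name, not the integration's API:

```python
# Sketch of proportional y-axis padding: pad relative to the data range
# instead of a fixed 0.5/1.0 ct offset, so "airiness" scales with prices.
def yaxis_bounds(prices: list[float]) -> tuple[float, float]:
    lo, hi = min(prices), max(prices)
    spread = hi - lo
    return lo - 0.08 * spread, hi + 0.15 * spread


# A small 28-35 ct range and a large 60-150 ct range now get the same
# relative whitespace around the data.
ymin, ymax = yaxis_bounds([28.0, 31.0, 35.0])
```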
Julian Pawlowski
49b8a018e7 fix(types): resolve Pyright type errors
- coordinator/core.py: Fix return type for _get_threshold_percentages()
- coordinator/data_transformation.py: Add type ignore for cached data return
- sensor/core.py: Initialize _state_info with required unrecorded_attributes
2025-12-22 23:22:02 +00:00
Julian Pawlowski
4158e7b1fd feat(periods): cross-day extension and supersession
Intelligent handling when tomorrow's price data arrives:

1. Cross-Day Extension
   - Late-night periods (starting ≥20:00) can extend past midnight
   - Extension continues while prices remain below daily_min × (1+flex)
   - Maximum extension to 08:00 next day (covers typical night low)

2. Period Supersession
   - Obsolete late-night today periods filtered when tomorrow is better
   - Tomorrow must be ≥10% cheaper to supersede (SUPERSESSION_PRICE_IMPROVEMENT_PCT)
   - Prevents stale relaxation periods from persisting

Impact: Late-night periods reflect tomorrow's data when available.
2025-12-22 23:21:57 +00:00
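A minimal sketch of the extension rule, with names and data shape assumed (the ≥20:00 start precondition is checked elsewhere): extend interval by interval while tomorrow's price stays below daily_min × (1 + flex), stopping at 08:00.

```python
# Illustrative sketch, not the actual implementation: which tomorrow
# hours a late-night period extends into.
def extend_past_midnight(
    daily_min: float,
    flex: float,
    tomorrow_prices: list[tuple[int, float]],  # (hour, price) pairs
) -> list[int]:
    ceiling = daily_min * (1 + flex)
    extended = []
    for hour, price in tomorrow_prices:
        if hour >= 8 or price > ceiling:  # max extension 08:00, or price too high
            break
        extended.append(hour)
    return extended


# daily_min 0.25 kr, flex 10% -> ceiling 0.275 kr: extension stops at 02:00
hours = extend_past_midnight(0.25, 0.10, [(0, 0.26), (1, 0.27), (2, 0.30), (3, 0.24)])
```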
Julian Pawlowski
5ef0396c8b feat(periods): add quality gates for period homogeneity
Prevent relaxation from creating heterogeneous periods:

1. CV-based Quality Gate (PERIOD_MAX_CV = 25%)
   - Periods with internal CV >25% are rejected during relaxation
   - CV field added to period statistics for transparency

2. Period Overlap Protection
   - New periods cannot "swallow" existing smaller periods
   - CV-based merge blocking prevents heterogeneous combinations
   - Preserves good baseline periods from relaxation replacement

3. Constants in types.py
   - PERIOD_MAX_CV, CROSS_DAY_*, SUPERSESSION_* thresholds
   - TibberPricesPeriodStatistics extended with coefficient_of_variation field

Impact: Users get smaller, more homogeneous periods that better represent
actual cheap/expensive windows.
2025-12-22 23:21:51 +00:00
Julian Pawlowski
7ee013daf2 feat(outliers): adaptive confidence based on daily volatility
Outlier smoothing now adapts to daily price volatility (CV):
- Flat days (CV≤10%): conservative (confidence=2.5), fewer false positives
- Volatile days (CV≥30%): aggressive (confidence=1.5), catch more spikes
- Linear interpolation between thresholds

Uses calculate_coefficient_of_variation() for consistency with volatility sensors.

Impact: Better outlier detection that respects natural price variation patterns.
Flat days preserve more structure, volatile days get stronger smoothing.
2025-12-22 23:21:44 +00:00
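The CV-to-confidence mapping above is a straight linear interpolation between the two anchor points; this sketch assumes the function name:

```python
# Sketch: map daily CV (%) to outlier-detection confidence.
# Flat days (CV <= 10%) -> 2.5 (conservative); volatile days (CV >= 30%)
# -> 1.5 (aggressive); linear in between.
def adaptive_confidence(cv: float) -> float:
    if cv <= 10.0:
        return 2.5
    if cv >= 30.0:
        return 1.5
    # interpolate between (10, 2.5) and (30, 1.5)
    return 2.5 - (cv - 10.0) * (2.5 - 1.5) / (30.0 - 10.0)
```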
Julian Pawlowski
325d855997 feat(utils): add coefficient of variation (CV) calculation
Add calculate_coefficient_of_variation() as central utility function:
- CV = (std_dev / mean) * 100 as standardized volatility measure
- calculate_volatility_with_cv() returns both level and numeric CV
- Volatility sensors now expose CV in attributes for transparency

Used as foundation for quality gates, adaptive smoothing, and period statistics.

Impact: Volatility sensors show numeric CV percentage alongside categorical level,
enabling users to see exact price variation.
2025-12-22 23:21:38 +00:00
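The CV formula from the commit is small enough to sketch directly (signature assumed; the real utility may differ in edge-case handling):

```python
from statistics import mean, pstdev

# CV = (std_dev / mean) * 100 — the standardized volatility measure
# used by the quality gates and adaptive smoothing above.
def calculate_coefficient_of_variation(values: list[float]) -> float:
    m = mean(values)
    return (pstdev(values) / m) * 100.0 if m else 0.0


flat_cv = calculate_coefficient_of_variation([0.30, 0.31, 0.29])   # near-flat day
spiky_cv = calculate_coefficient_of_variation([0.20, 0.30, 0.60])  # one big spike
```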
Julian Pawlowski
70552459ce fix(periods): protect daily extremes from outlier smoothing
The outlier filter was incorrectly smoothing daily minimum/maximum prices,
causing best/peak price periods to miss their most important intervals.

Root cause: When the daily minimum (e.g., 0.5535 kr at 05:00) was surrounded
by higher prices, the trend-based prediction calculated an "expected" price
(0.6372 kr) that exceeded the flex threshold (0.6365 kr), causing the
interval to be excluded from the best price period.

Solution: Daily extremes are now protected from smoothing. Before applying
any outlier detection, we calculate daily min/max prices and skip smoothing
for any interval at or within 0.1% of these values.

Changes:
- Added _calculate_daily_extremes() to compute daily min/max
- Added _is_daily_extreme() to check if price should be protected
- Added EXTREMES_PROTECTION_TOLERANCE constant (0.1%)
- Updated filter_price_outliers() to skip extremes before analysis
- Enhanced logging to show protected interval count

Impact: Best price periods now correctly include daily minimum intervals,
and peak price periods correctly include daily maximum intervals. The
period for 2024-12-23 now extends from 03:15-05:30 (10 intervals) instead
of incorrectly stopping at 05:00 (7 intervals).
2025-12-22 21:05:30 +00:00
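The extremes check can be sketched as follows, using the 0.1% tolerance and the example prices from the commit (helper names assumed):

```python
EXTREMES_PROTECTION_TOLERANCE = 0.001  # 0.1%, per the commit

# Sketch: a price at or within 0.1% of the daily min/max is protected
# and skipped by outlier smoothing.
def is_daily_extreme(price: float, daily_min: float, daily_max: float) -> bool:
    return (
        abs(price - daily_min) <= daily_min * EXTREMES_PROTECTION_TOLERANCE
        or abs(price - daily_max) <= daily_max * EXTREMES_PROTECTION_TOLERANCE
    )
```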
Julian Pawlowski
11d4cbfd09 feat(config_flow): add price level gap tolerance for Tibber API level field
Implement gap tolerance smoothing for Tibber's price level classification
(VERY_CHEAP/CHEAP/NORMAL/EXPENSIVE/VERY_EXPENSIVE), separate from the existing
rating_level gap tolerance (LOW/NORMAL/HIGH).

New feature:
- Add CONF_PRICE_LEVEL_GAP_TOLERANCE config option with separate UI step
- Implement _apply_level_gap_tolerance() using same bidirectional gravitational
  pull algorithm as rating gap tolerance
- Add _build_level_blocks() and _merge_small_level_blocks() helper functions

Config flow changes:
- Add new "price_level" options step with dedicated schema
- Add menu entry "🏷️ Preisniveau" / "🏷️ Price Level"
- Include translations for all 5 languages (de, en, nb, nl, sv)

Bug fixes:
- Use copy.deepcopy() for price intervals before enrichment to prevent
  in-place modification of cached raw API data, which caused gap tolerance
  changes to not take effect when reverting settings
- Clear transformation cache in invalidate_config_cache() to ensure
  re-enrichment with new settings

Logging improvements:
- Reduce options update handler from 4 INFO messages to 1 DEBUG message
- Move level_filtering and period_overlap debug logs to .details logger
  for granular control via configuration.yaml

Technical details:
- level_gap_tolerance is tracked separately in transformation config hash
- Algorithm: Identifies small blocks (≤ tolerance) and merges them into
  the larger neighboring block using gravitational pull calculation
- Default: 1 (smooth single isolated intervals), Range: 0-4

Impact: Users can now stabilize Tibber's price level classification
independently from the internal rating_level calculation. Prevents
automation flickering caused by brief price level changes in Tibber's API.
2025-12-22 20:25:30 +00:00
Julian Pawlowski
f57997b119 feat(config_flow): add configurable hysteresis and gap tolerance for price ratings
Added UI controls for price rating stabilization parameters that were
previously hardcoded. Users can now fine-tune rating stability to match
their automation needs.

Changes:
- Added CONF_PRICE_RATING_HYSTERESIS constant (0-5%, step 0.5%, default 2%)
- Added CONF_PRICE_RATING_GAP_TOLERANCE constant (0-4 intervals, default 1)
- Extended get_price_rating_schema() with two new sliders
- Updated data_transformation.py to pass both parameters to enrichment function
- Improved descriptions in all 5 languages (de, en, nb, nl, sv) to focus on
  automation stability instead of chart appearance
- Both settings included in factory reset via get_default_options()

Hysteresis explanation: Prevents rapid state changes when prices hover near
thresholds (e.g., LOW requires price > threshold+hysteresis to leave).

Gap tolerance explanation: Merges small isolated rating blocks into dominant
neighboring blocks using "look through" algorithm (fixed in previous commit).

Impact: Users can now adjust rating stability for their specific use cases.
Lower hysteresis (0-1%) for responsive automations, higher (3-5%) for stable
long-running processes. Gap tolerance prevents brief rating spikes from
triggering unnecessary automation actions.
2025-12-22 13:54:10 +00:00
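The hysteresis rule can be illustrated with a single LOW threshold (the real code covers all rating levels; names here are assumptions):

```python
# Sketch: once a rating is LOW, the price must exceed
# threshold * (1 + hysteresis%) to leave — preventing flapping when
# prices hover near the threshold.
def next_rating(current: str, price: float, low_threshold: float,
                hysteresis_pct: float) -> str:
    exit_level = low_threshold * (1 + hysteresis_pct / 100)
    if current == "LOW":
        return "LOW" if price <= exit_level else "NORMAL"
    return "LOW" if price <= low_threshold else "NORMAL"
```

With threshold 0.30 and 2% hysteresis, a price of 0.305 keeps the rating LOW while 0.307 releases it.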
Julian Pawlowski
64cf842719 fix(rating): improve gap tolerance to find dominant large blocks
The gap tolerance algorithm now looks through small intermediate blocks
to find the first LARGE block (> gap_tolerance) in each direction.
This ensures small isolated rating intervals are merged into the
correct dominant block, not just the nearest neighbor.

Example: NORMAL(large) HIGH(1) NORMAL(1) HIGH(large)
Before: HIGH at 05:45 merged into NORMAL (wrong - nearest neighbor)
After:  NORMAL at 06:00 merged into HIGH (correct - dominant block)

Also collects all merge decisions BEFORE applying them, preventing
order-dependent outcomes when multiple small blocks are adjacent.

Impact: Rating transitions now appear at visually logical positions
where prices actually change direction, not at arbitrary boundaries.
2025-12-22 13:28:25 +00:00
Julian Pawlowski
ba032a1c94 chore(bootstrap): update Home Assistant version to 2025.12.4 2025-12-22 10:09:28 +00:00
Julian Pawlowski
ced9d8656b fix(chartdata): assign vertical transition lines to more expensive segment
Problem: In segmented price charts with connect_segments=true, vertical lines
at price level transitions were always drawn by the ending segment. This meant
a price INCREASE showed a cheap-colored line going UP, and a price DECREASE
showed an expensive-colored line going DOWN - counterintuitive for users.

Solution: Implement directional bridge-point logic using price level hierarchy:
- Add _is_transition_to_more_expensive() helper using PRICE_LEVEL_MAPPING and
  PRICE_RATING_MAPPING to determine transition direction
- Price INCREASE (cheap → expensive): The MORE EXPENSIVE segment draws the
  vertical line UP via new start-bridge logic (end-bridge at segment start)
- Price DECREASE (expensive → cheap): The MORE EXPENSIVE segment draws the
  vertical line DOWN via existing end-bridge logic (bridge at segment end)

Technical changes:
- Track prev_value and prev_price for segment start detection
- Add end-bridge points at segment starts for upward transitions
- Replace unconditional bridge points with directional hold/bridge logic
- Hold points extend segment horizontally when next segment handles transition

Impact: Vertical transition lines now consistently use the color of the more
expensive price level, making price movements more visually intuitive.
2025-12-21 17:40:13 +00:00
Julian Pawlowski
941f903a9c fix(apexcharts): synchronize y-axis tick intervals for consistent grid alignment
Problem: When using dual y-axes (price + hidden highlight for best-price overlay),
ApexCharts calculates tick intervals independently for each axis. This caused
misaligned horizontal grid lines - the grid follows the first y-axis ticks,
but if the hidden highlight axis had different tick calculations, visual
inconsistencies appeared (especially visible without best-price highlight).

Solution:
- Set tickAmount: 4 on BOTH y-axes to force identical tick intervals
- Add forceNiceScale: true to ensure rounded tick values despite fixed min/max
- Add showAlways: true to price axis in template modes to prevent axis
  disappearing when toggling series via legend

Also add tooltip.shared: true to combine tooltips from all series at the
same x-value into a single tooltip, reducing visual clutter at data points.

Impact: Grid lines now align consistently regardless of which series are
visible. Y-axis remains stable when toggling series in legend.
2025-12-21 17:39:12 +00:00
Julian Pawlowski
ada17f6d90 refactor(services): process chartdata intervals as unified timeline instead of per-day
Changed from iterating over each day separately to collecting all
intervals for selected days into one continuous list before processing.

Changes:
- Collect all intervals via get_intervals_for_day_offsets() with all
  day_offsets at once
- Remove outer `for day in days:` loop around interval processing
- Build date->day_key mapping during average calculation for lookup
- Add _get_day_key_for_interval() helper for average_field assignment
- Simplify midnight handling: only extend at END of entire selection
- Remove complex "next day lookup" logic at midnight boundaries

The segment boundary handling (bridge points, NULL insertion) now works
automatically across midnight since intervals are processed as one list.

Impact: Fixes bridge point rendering at midnight when rating levels
change between days. Simplifies code structure by removing ~60 lines
of per-day midnight-specific logic.
2025-12-21 14:55:52 +00:00
github-actions[bot]
5cc71901b9 docs: add version snapshot v0.23.1 and cleanup old versions [skip ci] 2025-12-21 10:48:25 +00:00
Julian Pawlowski
78b57241eb chore(release): bump version to 0.23.1 2025-12-21 10:46:00 +00:00
Julian Pawlowski
38ea143fc7 Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-21 10:44:32 +00:00
Julian Pawlowski
4e0c2b47b1 fix: conditionally enable tooltips for first series based on highlight_best_price
Fixes #63
2025-12-21 10:44:29 +00:00
github-actions[bot]
19882fb17d docs: add version snapshot v0.23.0 and cleanup old versions [skip ci] 2025-12-18 15:19:19 +00:00
Julian Pawlowski
9eb5c01c94 chore(release): bump version to 0.23.0 2025-12-18 15:16:55 +00:00
Julian Pawlowski
df1ee2943b docs: update AGENTS.md links to use main branch 2025-12-18 15:16:34 +00:00
Julian Pawlowski
f539c9119b Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-18 15:15:23 +00:00
Julian Pawlowski
dff0faeef5 docs(dev): update GitHub links to use main branch
Changed all documentation links from version-specific tags (v0.20.0) to
main branch references. This makes documentation maintenance-free - links
stay current as code evolves.

Updated 38 files across:
- docs/developer/docs/ (7 files)
- docs/developer/versioned_docs/version-v0.21.0/ (8 files)
- docs/developer/versioned_docs/version-v0.22.0/ (8 files)

Impact: Documentation links no longer break when new versions are released.
Links always point to current code implementation.
2025-12-18 15:15:18 +00:00
Julian Pawlowski
b815aea8bf docs(user): add comprehensive average sensor documentation
Expanded user documentation with detailed guidance on average sensors:

1. sensors.md (+182 lines):
   - New 'Average Price Sensors' section with mean vs median explanation
   - 3 real-world automation examples (heat pump, dishwasher, EV charging)
   - Display configuration guide with use-case recommendations

2. configuration.md (+75 lines):
   - New 'Average Sensor Display Settings' section
   - Comparison table of display modes (mean/median/both)
   - Attribute availability details and recorder implications

3. Minor updates to installation.md and versioned docs

Impact: Users can now understand when to use mean vs median and how to
configure display format for their specific automation needs.
2025-12-18 15:15:00 +00:00
Julian Pawlowski
0a06e12afb i18n: update translations for average sensor display feature
Synchronized all translation files (de, en, nb, nl, sv) with:
1. Custom translations: Added 'configurable display format' messaging to
   sensor descriptions
2. Standard translations: Added detailed bullet-point descriptions for
   average_sensor_display config option

Changes affect both /custom_translations/ and /translations/ directories,
ensuring UI shows complete information about the new display configuration
option across all supported languages.
2025-12-18 15:14:41 +00:00
Julian Pawlowski
aff3350de7 test(sensors): add comprehensive test coverage for mean/median display
Added new test suite and updated existing tests to verify always-both-attributes
behavior.

Changes:
- test_mean_median_display.py: NEW - Tests both attributes always present,
  configurable state display, recorder exclusion, and config changes
- test_avg_none_fallback.py: Updated to test mean/median individually (65 lines)
- test_sensor_timer_assignment.py: Minor updates for compatibility (12 lines)

Coverage: All 399 tests passing, including new edge cases for attribute
presence and recorder integration.
2025-12-18 15:14:22 +00:00
Julian Pawlowski
abb02083a7 feat(sensors): always show both mean and median in average sensor attributes
Implemented configurable display format (mean/median/both) while always
calculating and exposing both price_mean and price_median attributes.

Core changes:
- utils/average.py: Refactored calculate_mean_median() to always return both
  values, added comprehensive None handling (117 lines changed)
- sensor/attributes/helpers.py: Always include both attributes regardless of
  user display preference (41 lines)
- sensor/core.py: Dynamic _unrecorded_attributes based on display setting
  (55 lines), extracted helper methods to reduce complexity
- Updated all calculators (rolling_hour, trend, volatility, window_24h) to
  use new always-both approach

Impact: Users can switch display format in UI without losing historical data.
Automation authors always have access to both statistical measures.
2025-12-18 15:12:30 +00:00
dependabot[bot]
95ebbf6701
chore(deps): bump astral-sh/setup-uv from 7.1.5 to 7.1.6 (#61) 2025-12-15 21:21:11 +01:00
github-actions[bot]
c30af465c9 docs: add version snapshot v0.22.1 and cleanup old versions [skip ci] 2025-12-13 14:10:02 +00:00
Julian Pawlowski
29e934d66b chore(release): bump version to 0.22.1 2025-12-13 14:07:34 +00:00
Julian Pawlowski
d00935e697 fix(tests): remove unused mock_config_entry and update price_avg to base currency in percentage calculations 2025-12-13 14:07:16 +00:00
Julian Pawlowski
87f0022baa fix(api): handle None values in API responses to prevent AttributeError
Fixed issue #60 where Tibber API temporarily returning incomplete data
(None values during maintenance) caused AttributeError crashes.

Root cause: `.get(key, default)` returns None when key exists with None value,
causing chained `.get()` calls to crash (None.get() → AttributeError).

Changes:
- api/helpers.py: Use `or {}` pattern in flatten_price_info() to handle
  None values (priceInfo, priceInfoRange, today, tomorrow)
- entity.py: Use `or {}` pattern in _get_fallback_device_info() for address dict
- coordinator/data_fetching.py: Add _validate_user_data() method (67 lines)
  to reject incomplete API responses before caching
- coordinator/data_fetching.py: Modify _get_currency_for_home() to raise
  exceptions instead of silent EUR fallback
- coordinator/data_fetching.py: Add home_id parameter to constructor
- coordinator/core.py: Pass home_id to TibberPricesDataFetcher
- tests/test_user_data_validation.py: Add 12 test cases for validation logic

Architecture improvement: Instead of defensive coding with fallbacks,
implement validation to reject incomplete data upfront. This prevents
caching temporary API errors and ensures currency is always known
(critical for price calculations).

Impact: Integration now handles API maintenance periods gracefully without
crashes. No silent EUR fallbacks - raises exceptions if currency unavailable,
ensuring data integrity. Users see clear errors instead of wrong calculations.

Fixes #60
2025-12-13 14:02:30 +00:00
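The `or {}` pattern from this fix is worth spelling out: `.get(key, default)` returns None when the key exists with a None value, so a chained `.get()` crashes. A minimal sketch (the real `flatten_price_info()` handles more fields):

```python
# Sketch of the `or {}` pattern: coerce None to an empty container
# before chaining .get(), so a maintenance-window API response with
# "priceInfo": null no longer raises AttributeError.
def flatten_price_info(response: dict) -> dict:
    price_info = response.get("priceInfo") or {}
    return {
        "today": price_info.get("today") or [],
        "tomorrow": price_info.get("tomorrow") or [],
    }
```

Note that `response.get("priceInfo", {})` alone would NOT help here, because the key exists — its value is None.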
Julian Pawlowski
6c741e8392 fix(config_flow): restructure options flow to menu-based navigation and fix settings persistence
Fixes configuration wizard not saving settings (#59):

Root cause was twofold:
1. Linear multi-step flow pattern didn't properly persist changes between steps
2. Best/peak price settings used nested sections format - values were saved
   in sections (period_settings, flexibility_settings, etc.) but read from
   flat structure, causing configured values to be ignored on subsequent runs

Solution:
- Replaced linear step-through flow with menu-based navigation system
- Each configuration area now has dedicated "Save & Back" buttons
- Removed nested sections from all steps except best/peak price (where they
  provide better UX for grouping related settings)
- Fixed best/peak price steps to correctly extract values from sections:
  period_settings, flexibility_settings, relaxation_and_target_periods
- Added reset-to-defaults functionality with confirmation dialog

UI/UX improvements:
- Menu structure: General Settings, Currency Display, Price Rating Thresholds,
  Volatility, Best Price Period, Peak Price Period, Price Trend,
  Chart Data Export, Reset to Defaults, Back
- Removed confusing step progress indicators ("{step_num} / {total_steps}")
- Changed all submit buttons from "Continue →" to "↩ Save & Back"
- Clear grouping of settings by functional area

Translation updates (nl.json + sv.json):
- Refined volatility threshold descriptions with CV formula explanations
- Clarified price trend thresholds (compares current vs. future N-hour average,
  not "per hour increase")
- Standardized terminology (e.g., "entry" → "item", compound word consistency)
- Consistently formatted all sensor names and descriptions
- Added new data lifecycle status sensor names

Technical changes:
- Options flow refactored from linear to menu pattern with menu_options dict
- New reset_to_defaults step with confirmation and abort handlers
- Section extraction logic in best_price/peak_price steps now correctly reads
  from nested structure (period_settings.*, flexibility_settings.*, etc.)
- Removed sections from general_settings, display_settings, volatility, etc.
  (simpler flat structure via menu navigation)

Impact: Configuration wizard now reliably saves all settings. Users can
navigate between setting areas without restarting the flow. Reset function
enables quick recovery when experimenting with thresholds. Previously
configured best/peak price settings are now correctly applied.
2025-12-13 13:33:31 +00:00
Julian Pawlowski
1c19cebff5 fix: support main and subunit currency 2025-12-11 23:07:06 +00:00
Julian Pawlowski
be34e87fa6 refactor(currency): rename minor_currency to subunit_currency in services.yaml 2025-12-11 09:36:24 +00:00
github-actions[bot]
51cf230c48 docs: add version snapshot v0.22.0 and cleanup old versions [skip ci] 2025-12-11 08:44:26 +00:00
Julian Pawlowski
050ee4eba7 chore(release): bump version to 0.22.0 2025-12-11 08:41:55 +00:00
Julian Pawlowski
60e05e0815 refactor(currency)!: rename major/minor to base/subunit currency terminology
Complete terminology migration from confusing "major/minor" to clearer
"base/subunit" currency naming throughout entire codebase, translations,
documentation, tests, and services.

BREAKING CHANGES:

1. **Service API Parameters Renamed**:
   - `get_chartdata`: `minor_currency` → `subunit_currency`
   - `get_apexcharts_yaml`: Updated service_data references from
     `minor_currency: true` to `subunit_currency: true`
   - All automations/scripts using these parameters MUST be updated

2. **Configuration Option Key Changed**:
   - Config entry option: Display mode setting now uses new terminology
   - Internal key: `currency_display_mode` values remain "base"/"subunit"
   - User-facing labels updated in all 5 languages (de, en, nb, nl, sv)

3. **Sensor Entity Key Renamed**:
   - `current_interval_price_major` → `current_interval_price_base`
   - Entity ID changes: `sensor.tibber_home_current_interval_price_major`
     → `sensor.tibber_home_current_interval_price_base`
   - Energy Dashboard configurations MUST update entity references

4. **Function Signatures Changed**:
   - `format_price_unit_major()` → `format_price_unit_base()`
   - `format_price_unit_minor()` → `format_price_unit_subunit()`
   - `get_price_value()`: Parameter `in_euro` deprecated in favor of
     `config_entry` (backward compatible for now)

5. **Translation Keys Renamed**:
   - All language files: Sensor translation key
     `current_interval_price_major` → `current_interval_price_base`
   - Service parameter descriptions updated in all languages
   - Selector options updated: Display mode dropdown values

Changes by Category:

**Core Code (Python)**:
- const.py: Renamed all format_price_unit_*() functions, updated docstrings
- entity_utils/helpers.py: Updated get_price_value() with config-driven
  conversion and backward-compatible in_euro parameter
- sensor/__init__.py: Added display mode filtering for base currency sensor
- sensor/core.py:
  * Implemented suggested_display_precision property for dynamic decimal places
  * Updated native_unit_of_measurement to use get_display_unit_string()
  * Updated all price conversion calls to use config_entry parameter
- sensor/definitions.py: Renamed entity key and updated all
  suggested_display_precision values (2 decimals for most sensors)
- sensor/calculators/*.py: Updated all price conversion calls (8 calculators)
- sensor/helpers.py: Updated aggregate_price_data() signature with config_entry
- sensor/attributes/future.py: Updated future price attributes conversion

**Services**:
- services/chartdata.py: Renamed parameter minor_currency → subunit_currency
  throughout (53 occurrences), updated metadata calculation
- services/apexcharts.py: Updated service_data references in generated YAML
- services/formatters.py: Renamed parameter use_minor_currency →
  use_subunit_currency in aggregate_hourly_exact() and get_period_data()
- sensor/chart_metadata.py: Updated default parameter name

**Translations (5 Languages)**:
- All /translations/*.json:
  * Added new config step "display_settings" with comprehensive explanations
  * Renamed current_interval_price_major → current_interval_price_base
  * Updated service parameter descriptions (subunit_currency)
  * Added selector.currency_display_mode.options with translated labels
- All /custom_translations/*.json:
  * Renamed sensor description keys
  * Updated chart_metadata usage_tips references

**Documentation**:
- docs/user/docs/actions.md: Updated parameter table and feature list
- docs/user/versioned_docs/version-v0.21.0/actions.md: Backported changes

**Tests**:
- Updated 7 test files with renamed parameters and conversion logic:
  * test_connect_segments.py: Renamed minor/major to subunit/base
  * test_period_data_format.py: Updated period price conversion tests
  * test_avg_none_fallback.py: Fixed tuple unpacking for new return format
  * test_best_price_e2e.py: Added config_entry parameter to all calls
  * test_cache_validity.py: Fixed cache data structure (price_info key)
  * test_coordinator_shutdown.py: Added repair_manager mock
  * test_midnight_turnover.py: Added config_entry parameter
  * test_peak_price_e2e.py: Added config_entry parameter, fixed price_avg → price_mean
  * test_percentage_calculations.py: Added config_entry mock

**Coordinator/Period Calculation**:
- coordinator/periods.py: Added config_entry parameter to
  calculate_periods_with_relaxation() calls (2 locations)

Migration Guide:

1. **Update Service Calls in Automations/Scripts**:
   ```yaml
   # Before:
   service: tibber_prices.get_chartdata
   data:
     minor_currency: true

   # After:
   service: tibber_prices.get_chartdata
   data:
     subunit_currency: true
   ```


2. **Update Energy Dashboard Configuration**:
   - Settings → Dashboards → Energy
   - Replace sensor entity:
     `sensor.tibber_home_current_interval_price_major` →
     `sensor.tibber_home_current_interval_price_base`

3. **Review Integration Configuration**:
   - Settings → Devices & Services → Tibber Prices → Configure
   - New "Currency Display Settings" step added
   - Default mode depends on currency (EUR → subunit, Scandinavian → base)

Rationale:

The "major/minor" terminology was confusing and didn't clearly communicate:
- **Major** → Unclear if this means "primary" or "large value"
- **Minor** → Easily confused with "less important" rather than "smaller unit"

New terminology is precise and self-explanatory:
- **Base currency** → Standard ISO currency (€, kr, $, £)
- **Subunit currency** → Fractional unit (ct, øre, ¢, p)

This aligns with:
- International terminology (ISO 4217 standard)
- Banking/financial industry conventions
- User expectations from payment processing systems

Impact: Aligns currency terminology with international standards. Users must
update service calls, automations, and Energy Dashboard configuration after
upgrade.

Refs: User feedback session (December 2025) identified terminology confusion
2025-12-11 08:26:30 +00:00
Julian Pawlowski
ddc092a3a4 fix(statistics): handle None median value in price statistics calculation 2025-12-09 18:36:37 +00:00
Julian Pawlowski
cfb9515660 Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-09 18:21:59 +00:00
Julian Pawlowski
284a7f4291 fix(periods): Periods are now correctly recalculated after tomorrow prices became available. 2025-12-09 16:57:57 +00:00
dependabot[bot]
ae02686d27
chore(deps): bump astral-sh/setup-uv from 7.1.4 to 7.1.5 (#57) 2025-12-08 22:59:08 +01:00
dependabot[bot]
3ca5196b9b
chore(deps): bump actions/upload-pages-artifact from 3 to 4 (#56) 2025-12-08 22:58:56 +01:00
dependabot[bot]
7c61fc0ecd
chore(deps): bump actions/setup-node from 4 to 6 (#55) 2025-12-08 22:58:43 +01:00
dependabot[bot]
bc0ae0b5d5
chore(deps): bump actions/checkout from 4 to 6 (#54) 2025-12-08 22:58:31 +01:00
dependabot[bot]
4e1c7f8d26
chore(deps): bump home-assistant/actions (#53) 2025-12-08 22:58:16 +01:00
Julian Pawlowski
51a99980df feat(sensors)!: add configurable median/mean display for average sensors
Add user-configurable option to choose between median and arithmetic mean
as the displayed value for all 14 average price sensors, with the alternate
value exposed as attribute.

BREAKING CHANGE: Average sensor default changed from arithmetic mean to
median. Users who rely on the arithmetic mean can read the new price_mean
attribute, or manually reconfigure via Settings → Devices & Services →
Tibber Prices → Configure → General Settings → "Average Sensor Display" →
"Arithmetic Mean" to restore it as the sensor state.

Affected sensors (14 total):
- Daily averages: average_price_today, average_price_tomorrow
- 24h windows: trailing_price_average, leading_price_average
- Rolling hour: current_hour_average_price, next_hour_average_price
- Future forecasts: next_avg_3h, next_avg_6h, next_avg_9h, next_avg_12h

Implementation:
- All average calculators now return (mean, median) tuples
- User preference controls which value appears in sensor state
- Alternate value automatically added to attributes
- Period statistics (best_price/peak_price) extended with both values

Technical changes:
- New config option: CONF_AVERAGE_SENSOR_DISPLAY (default: "median")
- Calculator functions return tuples: (avg, median)
- Attribute builders: add_alternate_average_attribute() helper function
- Period statistics: price_avg → price_mean + price_median
- Translations: Updated all 5 languages (de, en, nb, nl, sv)
- Documentation: AGENTS.md, period-calculation.md, recorder-optimization.md

Migration path:
Users can switch back to arithmetic mean via:
Settings → Integrations → Tibber Prices → Configure
→ General Settings → "Average Sensor Display" → "Arithmetic Mean"

Impact: Median is more resistant to price spikes, providing more stable
automation triggers. Statistical analysis from coordinator still uses
arithmetic mean (e.g., trailing_avg_24h for rating calculations).

Co-developed-with: GitHub Copilot <copilot@github.com>
2025-12-08 17:53:40 +00:00
Julian Pawlowski
0f56e80838 Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-07 21:06:57 +00:00
Julian Pawlowski
85e86cf80a fix(docs): update Developer Documentation link for clarity 2025-12-07 21:06:55 +00:00
github-actions[bot]
f67d712435 docs: add version snapshot v0.21.0 and cleanup old versions [skip ci] 2025-12-07 21:05:21 +00:00
Julian Pawlowski
99d7c97868 fix(translations): update home not found messages for clarity in multiple languages 2025-12-07 20:57:53 +00:00
Julian Pawlowski
b8bd4670d9 chore(release): bump version to 0.21.0 2025-12-07 20:52:11 +00:00
Julian Pawlowski
83be54d5ad feat(coordinator): implement repairs system for proactive user notifications
Add repair notification system with three auto-clearing repair types:
- Tomorrow data missing (after 18:00)
- API rate limit exceeded (3+ consecutive errors)
- Home not found in Tibber account

Includes:
- coordinator/repairs.py: Complete TibberPricesRepairManager implementation
- Enhanced API error handling with explicit 5xx handling
- Translations for 5 languages (EN, DE, NB, NL, SV)
- Developer documentation in docs/developer/docs/repairs-system.md

Impact: Users receive actionable notifications for important issues instead
of only seeing stale data in logs.
2025-12-07 20:51:43 +00:00
Julian Pawlowski
4bd90ccdee chore: Update logo and icons for Tibber Prices Integration 2025-12-07 19:00:32 +00:00
Julian Pawlowski
98512672ae feat(lifecycle): implement HA entity best practices for state management
Implemented comprehensive entity lifecycle patterns following Home Assistant
best practices for proper state management and history tracking.

Changes:
- entity.py: Added available property to base class
  - Returns False when coordinator has no data or last_update_success=False
  - Prevents entities from showing stale data during errors
  - Auth failures trigger reauth flow via ConfigEntryAuthFailed

- sensor/core.py: Added state restore and background task handling
  - Changed inheritance: SensorEntity → RestoreSensor
  - Restore native_value from SensorExtraStoredData in async_added_to_hass()
  - Chart sensors restore response data from attributes
  - Converted blocking service calls to background tasks using hass.async_create_task()
  - Eliminates 194ms setup warning by making async_added_to_hass non-blocking

- binary_sensor/core.py: Added state restore and force_update
  - Changed inheritance: BinarySensorEntity → RestoreEntity + BinarySensorEntity
  - Restore is_on state in async_added_to_hass()
  - Added available property override for connection sensor (always True)
  - Added force_update property for connection sensor to track all state changes
  - Other binary sensors use base available logic

- AGENTS.md: Documented entity lifecycle patterns in Common Pitfalls
  - Added "Entity Lifecycle & State Management" section
  - Documents available, state restore, and force_update patterns
  - Explains why each pattern matters for proper HA integration

Impact: Entities no longer show stale data during errors, history has no gaps
after HA restart, connection state changes are properly tracked, and config
entry setup completes in <200ms (under HA threshold).
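The availability pattern boils down to gating on coordinator health; a stand-in sketch using a mock coordinator object rather than Home Assistant's real base classes:

```python
from types import SimpleNamespace

class PricesEntity:
    """Minimal stand-in; the real base class derives from CoordinatorEntity."""

    def __init__(self, coordinator):
        self.coordinator = coordinator

    @property
    def available(self) -> bool:
        # Unavailable when the coordinator has no data or the last refresh
        # failed, so the entity never presents stale values during errors.
        return bool(self.coordinator.data) and self.coordinator.last_update_success

healthy = SimpleNamespace(data={"price_info": {}}, last_update_success=True)
failed = SimpleNamespace(data={"price_info": {}}, last_update_success=False)
```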

All patterns verified against HA developer documentation:
https://developers.home-assistant.io/docs/core/entity/
2025-12-07 17:24:41 +00:00
Julian Pawlowski
7d7784300d Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-07 16:59:13 +00:00
Julian Pawlowski
334f462621 docs: update documentation structure for Docusaurus sites
Update all references to reflect two separate Docusaurus instances
(user + developer) with proper file paths and navigation management.

Changes:
- AGENTS.md: Document Docusaurus structure and file organization
- CONTRIBUTING.md: Add Docusaurus workflow guidelines
- docs/developer/docs/period-calculation-theory.md: Fix cross-reference
- docs/developer/sidebars.ts: Add recorder-optimization to navigation

Documentation organized as:
- docs/user/docs/*.md (user guides, via sidebars.ts)
- docs/developer/docs/*.md (developer guides, via sidebars.ts)
- AGENTS.md (AI patterns, conventions)

Impact: AI and contributors know where to place new documentation.
2025-12-07 16:59:06 +00:00
Julian Pawlowski
b99158d826 fix(docs): correct license reference from Apache 2.0 to MIT
Project uses MIT License, not Apache License 2.0.
2025-12-07 16:58:54 +00:00
Julian Pawlowski
a9c04dc0ec docs(developer): add recorder optimization guide
Add comprehensive documentation for _unrecorded_attributes
implementation, categorizing all excluded attributes with reasoning,
expected database impact, and decision framework for future attributes.

Added to Developer Docs → Advanced Topics navigation.

Content includes:
- 7 exclusion categories with examples
- Space savings calculations (60-85% reduction)
- Decision framework for new attributes
- Testing and validation guidelines
- SQL queries for verification
2025-12-07 16:57:53 +00:00
Julian Pawlowski
763a6b76b9 perf(entities): exclude non-essential attributes from recorder history
Implement _unrecorded_attributes in both sensor and binary_sensor
entities to prevent Home Assistant Recorder database bloat.

Excluded attributes (60-85% size reduction per state):
- Descriptions/help text (static, large strings)
- Large nested structures (periods, trend_attributes, chart data)
- Frequently changing diagnostics (icon_color, cache_age)
- Static/rarely changing config (currency, resolution)
- Temporary/time-bound data (next_api_poll, last_*)
- Redundant/derived data (price_spread, diff_%)

Kept for history analysis:
- timestamp (always first), all price values
- Period timing (start, end, duration_minutes)
- Price statistics (avg, min, max)
- Boolean status flags, relaxation_active

Impact: Reduces attribute size from ~3-8 KB to ~0.5-1.5 KB per state
change. Expected savings: ~1 GB per month for typical installation.

See: https://developers.home-assistant.io/docs/core/entity/#excluding-state-attributes-from-recorder-history
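The mechanism is a class-level frozenset that the recorder consults before persisting attributes; a sketch with illustrative names drawn from the categories above (the actual excluded set is larger):

```python
# Declared on the entity class; attribute names here are illustrative.
_unrecorded_attributes = frozenset({
    "periods",        # large nested structure
    "icon_color",     # frequently changing diagnostic
    "currency",       # static/rarely changing config
    "next_api_poll",  # temporary/time-bound data
    "price_spread",   # redundant/derived value
})

def is_recorded(attr: str) -> bool:
    """The recorder keeps an attribute only if it is not excluded."""
    return attr not in _unrecorded_attributes
```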
2025-12-07 16:57:40 +00:00
Julian Pawlowski
bc24513037
Modify FUNDING.yml with new sponsorship details
Updated funding model to include GitHub Sponsors and Buy Me a Coffee.
2025-12-07 16:36:16 +01:00
Julian Pawlowski
6241f47012 fix(translations): ensure newline at end of translation files for consistency 2025-12-07 15:17:21 +00:00
Julian Pawlowski
07c01dea01 refactor(i18n): normalize enum values and improve translation consistency
Unified enum representation across all translation files and improved
consistency of localization patterns.

Key changes:
- Replaced uppercase enum constants (VERY_CHEAP, LOW, RISING) with
  localized lowercase values (sehr günstig, niedrig, steigend) across
  all languages in both translations/ and custom_translations/
- Removed **bold** markdown from sensor attributes (custom_translations/)
  as it doesn't render in extra_state_attributes UI
- Preserved **bold** in Config Flow descriptions (translations/) where
  markdown is properly rendered
- Corrected German formality: "Sie" → "du" throughout all descriptions
- Completed missing Config Flow translations in Dutch, Swedish, and
  Norwegian (~45 fields: period_settings, flexibility_settings,
  relaxation_and_target_periods sections)
- Fixed chart_data_export and chart_metadata sensor classification
  (moved from binary_sensor to sensor as they are ENUM type)
- Corrected sensor placement in custom_translations/ (all 5 languages)

Files changed: 10 (5 translations/ + 5 custom_translations/)
Lines: +203, -222

Impact: All 5 languages now use consistent, properly formatted
localized enum values. Config Flow UI displays correctly formatted
examples with bold highlighting. Sensor attributes show clean text
without raw markdown syntax. German uses informal "du" tone throughout.
2025-12-07 14:21:53 +00:00
Julian Pawlowski
a7bbcb8dc9 docs: Add BMC logo SVG file to the images directory 2025-12-06 02:35:39 +00:00
Julian Pawlowski
c4b68c4cb1 fix(styles): add padding to hero section for improved layout 2025-12-06 02:07:47 +00:00
Julian Pawlowski
cc845ee675 fix(styles): adjust padding for heroBanner in CSS modules 2025-12-06 02:00:01 +00:00
Julian Pawlowski
e79bc97321 chore(docs): replace SVG logo with image for improved performance and compatibility 2025-12-06 01:57:44 +00:00
Julian Pawlowski
d71f3408b9 fix(docs): update developer documentation link in user intro
Change relative path ../development/ to absolute path /hass.tibber_prices/developer/
since user and developer docs are now separate Docusaurus instances.

Fixes broken link warning during build.
2025-12-06 01:54:13 +00:00
Julian Pawlowski
78a03f2827 feat(workflows): enhance GitHub Actions workflows with concurrency control and deployment updates 2025-12-06 01:50:49 +00:00
Julian Pawlowski
6898c126e3 fix(workflow): create separate directories for user and developer documentation during merge 2025-12-06 01:40:38 +00:00
Julian Pawlowski
d73eda4b2f feat(docs): add dual Docusaurus sites with custom branding and Giscus integration

- Split documentation into separate User and Developer sites
- Migrated existing docs to proper Docusaurus structure
- Added custom Tibber-themed header logos (light + dark mode variants)
- Implemented custom color scheme matching integration branding
  - Hero gradient: Cyan → Dark Cyan → Gold
  - Removed standard Docusaurus purple/green theme
- Integrated Giscus comments system for community collaboration
  - User docs: Comments enabled on guides, examples, FAQ
  - User docs: Comments disabled on reference pages (glossary, sensors, troubleshooting)
  - Developer docs: No comments (GitHub Issues/PRs preferred)
- Added categorized sidebars with emoji navigation
- Created 8 new placeholder documentation pages
- Fixed image paths for baseUrl compatibility (local + GitHub Pages)
- Escaped MDX special characters in performance metrics
- Added GitHub Actions workflow for automated deployment
- Created helper scripts: dev-user, dev-developer, build-all

Breaking changes:
- Moved /docs/user/*.md to /docs/user/docs/*.md
- Moved /docs/development/*.md to /docs/developer/docs/*.md
2025-12-06 01:37:06 +00:00
Julian Pawlowski
b5db6053ba docs: Update chart examples and sensors documentation for chart_metadata integration 2025-12-05 21:44:46 +00:00
Julian Pawlowski
86afea9cce docs: Update README with example screenshots. 2025-12-05 21:37:31 +00:00
Julian Pawlowski
afb8ac2327 doc: Add comprehensive chart examples and screenshots for tibber_prices integration
- Created a new documentation file `chart-examples.md` detailing various chart configurations available through the `tibber_prices.get_apexcharts_yaml` action.
- Included descriptions, dependencies, and YAML generation examples for chart modes such as Today's Prices, Rolling 48h Window, and Rolling Window Auto-Zoom.
- Added a section on dynamic Y-axis scaling and best price period highlights.
- Established prerequisites for using the charts, including required cards and customization tips.
- Introduced a new `README.md` in the images/charts directory to document available chart screenshots and guidelines for capturing them.
2025-12-05 21:15:52 +00:00
Julian Pawlowski
f92fc3b444 refactor(services): remove gradient_stop, use fixed 50% gradient
Implementation flaw discovered: gradient_stop calculated as
`(avg - min) / (max - min)` for combined data produces one value
applied to ALL series. Each series (VERY_CHEAP, NORMAL, VERY_EXPENSIVE)
has different min/max ranges, so the same gradient stop position
represents a different absolute price in each series.

Example failure case:
- VERY_CHEAP: 10-20 ct → 50% at 15 ct (below overall avg!)
- VERY_EXPENSIVE: 40-50 ct → 50% at 45 ct (above overall avg!)

Conclusion: Gradient shows middle of each series range, not average
price position.
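The flaw can be reproduced numerically; this sketch recomputes the example failure case above (the overall average of 30 ct is an assumed value for the combined 10-50 ct range):

```python
def gradient_stop(avg: float, lo: float, hi: float) -> float:
    # One stop position computed from the combined data range.
    return (avg - lo) / (hi - lo)

def price_at_stop(stop: float, series_lo: float, series_hi: float) -> float:
    # The absolute price that the same stop lands on inside one series.
    return series_lo + stop * (series_hi - series_lo)

overall_avg = 30.0  # assumed combined average for the 10-50 ct range
stop = gradient_stop(overall_avg, 10.0, 50.0)
```

The same 0.5 stop maps to 15 ct inside VERY_CHEAP (below the overall average) and 45 ct inside VERY_EXPENSIVE (above it), confirming that the gradient marks the middle of each series range, not the average price.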

Solution: Fixed 50% gradient purely for visual appeal. Semantic
information provided by:
- Series colors (CHEAP/NORMAL/EXPENSIVE)
- Grid lines (implicitly show average)
- Dynamic Y-axis bounds (optimal scaling via chart_metadata sensor)

Changes:
- sensor/chart_metadata.py: Remove gradient_stop extraction
- services/get_apexcharts_yaml.py: Fixed gradient at [50, 100]
- custom_translations/*.json: Remove gradient_stop references

Impact: Honest visualization with no false semantic signals. Feature
was never released, clean removal without migration.
2025-12-05 20:51:30 +00:00
Julian Pawlowski
6922e52368 feat(sensors): add chart_metadata sensor for lightweight chart configuration
Implemented new chart_metadata diagnostic sensor that provides essential
chart configuration values (yaxis_min, yaxis_max, gradient_stop) as
attributes, enabling dynamic chart configuration without requiring
async service calls in templates.

Sensor implementation:
- New chart_metadata.py module with metadata-only service calls
- Automatically calls get_chartdata with metadata="only" parameter
- Refreshes on coordinator updates (new price data or user data)
- State values: "pending", "ready", "error"
- Enabled by default (critical for chart features)

ApexCharts YAML generator integration:
- Checks for chart_metadata sensor availability before generation
- Uses template variables to read sensor attributes dynamically
- Fallback to fixed values (gradient_stop=50%) if sensor unavailable
- Creates separate notifications for two independent issues:
  1. Chart metadata sensor disabled (reduced functionality warning)
  2. Required custom cards missing (YAML won't work warning)
- Both notifications explain YAML generation context and provide
  complete fix instructions with regeneration requirement

Configuration:
- Supports configuration.yaml: tibber_prices.chart_metadata_config
- Optional parameters: day, minor_currency, resolution
- Defaults to minor_currency=True for ApexCharts compatibility

Translation additions:
- Entity name and state translations (all 5 languages)
- Notification messages for sensor unavailable and missing cards
- best_price_period_name for tooltip formatter

Binary sensor improvements:
- tomorrow_data_available now enabled by default (critical for automations)
- data_lifecycle_status now enabled by default (critical for debugging)

Impact: Users get dynamic chart configuration with optimized Y-axis scaling
and gradient positioning without manual calculations. ApexCharts YAML
generation now provides clear, actionable notifications when issues occur,
ensuring users understand why functionality is limited and how to fix it.
2025-12-05 20:30:54 +00:00
Julian Pawlowski
ac6f1e0955 chore(release): bump version to 0.20.0 2025-12-05 18:14:32 +00:00
Julian Pawlowski
c8e9f7ec2a feat(apexcharts): add server-side metadata with dynamic yaxis and gradient
Implemented comprehensive metadata calculation for chart data export service
with automatic Y-axis scaling and gradient positioning based on actual price
statistics.

Changes:
- Added 'metadata' parameter to get_chartdata service (include/only/none)
- Implemented _calculate_metadata() with per-day price statistics
  * min/max/avg/median prices
  * avg_position and median_position (0-1 scale for gradient stops)
  * yaxis_suggested bounds (floor(min)-1, ceil(max)+1)
  * time_range with day boundaries
  * currency info with symbol and unit
- Integrated metadata into rolling_window modes via config-template-card
  * Pre-calculated yaxis bounds (no async issues in templates)
  * Dynamic gradient stops based on avg_position
  * Server-side calculation ensures consistency

Visual refinements:
- Best price overlay opacity reduced to 0.05 (ultra-subtle green hint)
- Stroke width increased to 1.5 for better visibility
- Gradient opacity adjusted to 0.45 with "light" shade
- Marker configuration: size 0, hover size 2, strokeWidth 1
- Header display: Only show LOW/HIGH rating_levels (min/max prices)
  * Conditional logic excludes NORMAL and level types
  * Entity state shows meaningful extrema values
- NOW marker label removed for rolling_window_autozoom mode
  * Static position at 120min lookback makes label misleading

Code cleanup:
- Removed redundant all_series_config (server-side data formatting)
- Currency names capitalized (Cents, Øre, Öre, Pence)

Translation updates:
- Added metadata selector translations (de, en, nb, nl, sv)
- Added metadata field description in services
- Synchronized all language files

Impact: Users get dynamic Y-axis scaling based on actual price data,
eliminating manual configuration. Rolling window charts automatically
adjust axis bounds and gradient positioning. Header shows only
meaningful extreme values (daily min/max). All data transformation
happens server-side for optimal performance and consistency.
2025-12-05 18:14:18 +00:00
Julian Pawlowski
2f1929fbdc chore(release): bump version to 0.19.0 2025-12-04 14:39:16 +00:00
Julian Pawlowski
c9a7dcdae7 feat(services): add rolling window options with auto-zoom for ApexCharts
Added two new rolling window options for get_apexcharts_yaml service to provide
flexible dynamic chart visualization:

- rolling_window: Fixed 48h window that automatically shifts between
  yesterday+today and today+tomorrow based on data availability
- rolling_window_autozoom: Same as rolling_window but with progressive zoom-in
  (2h lookback + remaining time until midnight, updates every 15min)

Implementation changes:
- Updated service schema validation to accept new day options
- Added entity mapping patterns for both rolling modes
- Implemented minute-based graph_span calculation with quarter-hour alignment
- Added config-template-card integration for dynamic span updates
- Used current_interval_price sensor as 15-minute update trigger
- Unified data loading: both rolling modes omit day parameter for dynamic selection
- Applied ternary operator pattern for cleaner day_param logic
- Made grid lines more subtle (borderColor #f5f5f5, strokeDashArray 0)
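The minute-based span calculation with quarter-hour alignment might look like this (a sketch inferred from the description; the actual function name and rounding direction are assumptions):

```python
from datetime import datetime

def graph_span_minutes(now: datetime, lookback: int = 120) -> int:
    """Lookback window plus time remaining until midnight,
    aligned down to the quarter-hour grid."""
    elapsed = now.hour * 60 + now.minute
    remaining = 24 * 60 - elapsed
    remaining -= remaining % 15  # quarter-hour alignment
    return lookback + remaining
```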

Translation updates:
- Added selector options in all 5 languages (de, en, nb, nl, sv)
- Updated field descriptions to include default behavior and new options
- Documented that rolling window is default when day parameter omitted

Documentation updates:
- Updated user docs (actions.md, automation-examples.md) with new options
- Added detailed explanation of day parameter options
- Included examples for both rolling_window and rolling_window_autozoom modes

Impact: Users can now create auto-adapting ApexCharts that show 48h rolling
windows with optional progressive zoom throughout the day. Requires
config-template-card for dynamic behavior.
2025-12-04 14:39:00 +00:00
Julian Pawlowski
1386407df8 fix(translations): update descriptions and names for clarity in multiple language files 2025-12-04 12:41:11 +00:00
Julian Pawlowski
c28c33dade chore(release): bump version to 0.18.1 2025-12-03 14:21:06 +00:00
Julian Pawlowski
6e0310ef7c fix(services): correct period data format for ApexCharts visualization
Period data in array_of_arrays format now generates proper segment structure
for stepline charts. Each period produces 2-3 data points depending on
insert_nulls parameter:

1. Start time with price (begin period)
2. End time with price (hold price level)
3. End time with NULL (terminate segment, only if insert_nulls='segments'/'all')

This enables ApexCharts to correctly display periods as continuous blocks with
clean gaps between them. Previously only start point was generated, causing
periods to render as single points instead of continuous segments.
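The 2-3 point scheme can be sketched as follows (the period dict shape and function name are assumed from the commit text):

```python
def get_period_data(period: dict, insert_nulls: str = "none") -> list[tuple]:
    """Expand one period into stepline data points for ApexCharts."""
    points = [
        (period["start"], period["price"]),  # 1. start time: begin period
        (period["end"], period["price"]),    # 2. end time: hold price level
    ]
    if insert_nulls in ("segments", "all"):
        points.append((period["end"], None))  # 3. NULL terminates the segment
    return points
```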

Changes:
- formatters.py: Updated get_period_data() to generate 2-3 points per period
- formatters.py: Added insert_nulls parameter to control NULL termination
- get_chartdata.py: Pass insert_nulls parameter to get_period_data()
- get_apexcharts_yaml.py: Set insert_nulls='segments' for period overlay
- get_apexcharts_yaml.py: Preserve NULL values in data_generator mapping
- get_apexcharts_yaml.py: Store original price for potential tooltip access
- tests: Added comprehensive period data format tests

Impact: Best price and peak price period overlays now display correctly as
continuous blocks with proper segment separation in ApexCharts cards.
2025-12-03 14:20:46 +00:00
Julian Pawlowski
a3696fe182 ci(release): auto-delete inappropriate version tags with clear error messaging
Release workflow now automatically deletes tags when version number doesn't
match commit types (e.g., PATCH bump when MINOR needed for features).

Changes:
- New step 'Delete inappropriate version tag' runs after version_check
- Automatically deletes tag and exits with error if version inappropriate
- All subsequent steps conditional on successful version validation
- Improved warning message: removed confusing 'X.Y.Z' placeholder
- Added notice: 'This tag will be automatically deleted in the next step'
- Removed redundant 'Version Check Summary' step

Impact: Users get immediate, clear feedback when pushing wrong version tags.
Workflow fails fast with actionable error message instead of creating release
with embedded warning. No manual tag deletion needed.
2025-12-03 13:45:21 +00:00
Julian Pawlowski
a2d664e120 chore(release): bump version to 0.18.0 2025-12-03 13:36:04 +00:00
Julian Pawlowski
d7b129efec chore(release): bump version to 0.17.1 2025-12-03 13:16:06 +00:00
Julian Pawlowski
8893b31f21 fix(binary_sensor): restore push updates from coordinator
Binary sensor _handle_coordinator_update() was empty, blocking all push updates
from coordinator. This prevented binary sensors from reflecting state changes
immediately after API fetch or error conditions.

Changes:
- Implement _handle_coordinator_update() to call async_write_ha_state()
- All binary sensors now receive push updates when coordinator has new data

Binary sensors affected:
- tomorrow_data_available: Now reflects data availability immediately after API fetch
- connection: Now shows disconnected state immediately on auth/API errors
- chart_data_export: Now updates chart data when price data changes
- peak_price_period, best_price_period: Get push updates when periods change
- data_lifecycle_status: Gets push updates on status changes

Impact: Binary sensors update in real-time instead of waiting for next timer
cycle or user interaction. Fixes stale state issue where tomorrow_data_available
remained off despite data being available, and connection sensor not reflecting
authentication failures immediately.
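A stripped-down reproduction of the bug and its fix, using mock classes rather than Home Assistant's real coordinator/entity machinery:

```python
class Coordinator:
    """Mock: notifies registered listeners on each data push."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, callback):
        self._listeners.append(callback)

    def push_update(self):
        for callback in self._listeners:
            callback()

class BinarySensor:
    def __init__(self, coordinator):
        self.state_writes = 0
        coordinator.add_listener(self._handle_coordinator_update)

    def _handle_coordinator_update(self):
        # Before the fix this override was empty, silently dropping every
        # push; now it writes the updated state.
        self.async_write_ha_state()

    def async_write_ha_state(self):
        self.state_writes += 1
```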
2025-12-03 13:14:26 +00:00
Julian Pawlowski
0ac2c4970f feat(config): add energy section to configuration.yaml 2025-12-03 11:18:59 +00:00
dependabot[bot]
604c5d53cb
chore(deps): bump actions/checkout from 6.0.0 to 6.0.1 (#49)
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v6...v6.0.1)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-02 21:24:01 +01:00
Julian Pawlowski
a1ab98d666 refactor(config_flow): reorganize options flow steps with section structure
Restructured 5 options flow steps (current_interval_price_rating, best_price,
peak_price, price_trend, volatility) to use Home Assistant's sections feature
for better UI organization and logical grouping.

Changes:
- current_interval_price_rating: Single section "price_rating_thresholds"
- best_price: Three sections (period_settings, flexibility_settings,
  relaxation_and_target_periods)
- peak_price: Three sections (period_settings, flexibility_settings,
  relaxation_and_target_periods)
- price_trend: Single section "price_trend_thresholds"
- volatility: Single section "volatility_thresholds"

Each section includes name, description, data fields, and data_description
fields following HA translation schema requirements.

Updated all 5 language files (de, en, nb, nl, sv) with new section structure
while preserving existing field descriptions and translations.

Impact: Options flow now displays configuration fields in collapsible,
logically grouped sections with clear section headers, improving UX for
complex multi-parameter configuration steps. No functional changes to
configuration logic or validation.
2025-12-02 20:23:31 +00:00
Julian Pawlowski
3098144db2 chore(release): bump version to 0.17.0 2025-12-02 19:00:54 +00:00
Julian Pawlowski
3977d5e329 fix(coordinator): add _is_fetching flag and fix tomorrow data detection
Implement _is_fetching flag to show "refreshing" status during API calls,
and fix needs_tomorrow_data() to recognize single-home cache format.

Changes:
- Set _is_fetching flag before API call, reset after completion (core.py)
- Fix needs_tomorrow_data() to check for "price_info" key instead of "homes"
- Remove redundant "homes" check in should_update_price_data()
- Improve logging: change debug to info for tomorrow data checks

Lifecycle status now correctly transitions after 13:00 when tomorrow data
is missing: cached → searching_tomorrow → refreshing → fresh → cached

Impact: Users will see accurate lifecycle status, and tomorrow's electricity
prices will automatically load when available after 13:00, fixing an issue
present since v0.14.0 where prices weren't fetched without a manual HA restart.
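The cache-format fix amounts to checking the single-home key; a simplified sketch with key names inferred from the commit message:

```python
def needs_tomorrow_data(cache: dict) -> bool:
    # Single-home cache stores "price_info" at the top level; the old check
    # looked for a "homes" key that never exists in this format.
    price_info = cache.get("price_info") or {}
    return not price_info.get("tomorrow")
```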
2025-12-02 19:00:20 +00:00
Julian Pawlowski
d6ae931918 feat(services): add new services and icons for enhanced functionality and user experience 2025-12-02 18:46:15 +00:00
Julian Pawlowski
ab9735928a refactor(docs): update terminology from "services" to "actions" for clarity and consistency 2025-12-02 18:35:59 +00:00
Julian Pawlowski
97db134ed5 feat(services): add icons to service definitions for better visibility 2025-12-02 17:16:44 +00:00
Julian Pawlowski
d2252cac45 Merge branch 'main' of https://github.com/jpawlowski/hass.tibber_prices 2025-12-02 17:13:59 +00:00
Julian Pawlowski
7978498006 chore(release): sync manifest.json with tag and recreate release if necessary 2025-12-02 17:13:56 +00:00
github-actions[bot]
ae6f0780fd chore(release): sync manifest.json with tag v0.16.1 2025-12-02 16:49:44 +00:00
Julian Pawlowski
b78ddeaf43 feat(docs): update get_apexcharts_yaml service descriptions to clarify limitations and customization options 2025-12-02 16:47:36 +00:00
Julian Pawlowski
0a44dd7f12 chore(release): bump version to 0.16.0 2025-12-01 23:48:36 +00:00
Julian Pawlowski
369f07ee39 docs(AGENTS): update Conventional Commits guidelines and best practices 2025-12-01 23:48:30 +00:00
Julian Pawlowski
e156dfb061 feat(services): add rolling 48h window support to chart services
Add dynamic rolling window mode to get_chartdata and get_apexcharts_yaml
services that automatically adapts to data availability.

When 'day' parameter is omitted, services return 48-hour window:
- With tomorrow data (after ~13:00): today + tomorrow
- Without tomorrow data: yesterday + today
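That window-selection rule reduces to a single conditional (a sketch; the function name is assumed):

```python
def rolling_window_days(has_tomorrow_data: bool) -> tuple[str, str]:
    """Pick the 48h window based on whether tomorrow's prices exist yet."""
    return ("today", "tomorrow") if has_tomorrow_data else ("yesterday", "today")
```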

Changes:
- Implement rolling window logic in get_chartdata using has_tomorrow_data()
- Generate config-template-card wrapper in get_apexcharts_yaml for dynamic
  ApexCharts span.offset based on tomorrow_data_available binary sensor
- Update service descriptions in services.yaml
- Add rolling window descriptions to all translations (de, en, nb, nl, sv)
- Document rolling window mode in docs/user/services.md
- Add ApexCharts examples with prerequisites in docs/user/automation-examples.md

BREAKING CHANGE: get_apexcharts_yaml rolling window mode requires
config-template-card in addition to apexcharts-card for dynamic offset
calculation.

Impact: Users can create auto-adapting 48h price charts without manual day
selection. Fixed day views (day: today/yesterday/tomorrow) still work with
apexcharts-card only.
2025-12-01 23:46:09 +00:00
Julian Pawlowski
cf8d9ba8e8 feat(apexcharts): add highlight option for best price periods in chart 2025-12-01 21:51:39 +00:00
Julian Pawlowski
f70ac9cff6 feat(services): improve ApexCharts segment visualization and fix header display
Simplifies the connect_segments implementation to use a unified bridge-point
approach for all price transitions (up/down/same). Previously used
direction-dependent logic (hold vs connect points) which was unnecessarily
complex.

Changes:
- get_chartdata.py: Bridge points now always use next interval's price at
  boundary timestamp, creating smooth visual connection between segments
- get_chartdata.py: Trailing NULL removal now conditional on insert_nulls mode
  ('segments' removes for header fix, 'all' preserves intentional gaps)
- get_apexcharts_yaml.py: Enable connect_segments by default, activate
  show_states for header min/max display
- get_apexcharts_yaml.py: Remove extrema series (not compatible with
  data_generator approach - ApexCharts requires entity time-series data)
- tests: Move test_connect_segments.py to tests/services/ to mirror source
  structure

Impact: ApexCharts cards now show clean visual connections between price level
segments with proper header statistics display. Trailing NULLs no longer cause
"N/A" in headers for filtered data. Test organization improved for
maintainability.
2025-12-01 11:14:27 +00:00
Copilot
49628f3394
Add connect_segments parameter and fix ApexCharts header N/A display (#46)
* Initial plan

* Add connect_segments parameter to get_chartdata service for visual segment connections

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

* Address code review feedback: fix test logic and correct misleading comment

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

* Integrate PR45: Remove trailing null values for proper ApexCharts header display

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

* Add connect_segments translations for de, nb, nl, sv languages

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

* Changes before error encountered

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

* Fix hassfest validation: Move time_units from translations to custom_translations

Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: jpawlowski <75446+jpawlowski@users.noreply.github.com>
2025-12-01 03:19:52 +01:00
Julian Pawlowski
b306a491e0 refactor(translations): unify time unit translations across multiple languages 2025-11-30 17:25:58 +00:00
Julian Pawlowski
fe2cb1180a refactor(generate-notes): enhance output formatting for error messages and logs 2025-11-30 16:55:31 +00:00
Julian Pawlowski
2320520ed9 chore(release): bump version to 0.15.0 2025-11-30 16:43:21 +00:00
Julian Pawlowski
412cecc126 refactor(cache): enhance cache validation to support new structure and invalidate old format 2025-11-30 16:42:41 +00:00
Julian Pawlowski
6f93bb8288 refactor(formatters, get_chartdata): serialize datetime objects to ISO format in data points 2025-11-30 15:07:18 +00:00
Julian Pawlowski
09eb9c7050 refactor(validate): remove 'ignore' key from HACS validation step 2025-11-28 16:39:00 +00:00
Julian Pawlowski
b9647659bd chore(devcontainer): update Node.js version to 24 in devcontainer configuration 2025-11-26 14:36:30 +00:00
Julian Pawlowski
7c0000039e refactor(config_flow): disable subentry flow temporarily due to incomplete time-travel feature 2025-11-26 14:36:08 +00:00
Julian Pawlowski
50021ce3ba chore(devcontainer): add setup-git.sh script for host Git configuration 2025-11-26 14:36:00 +00:00
Julian Pawlowski
a90fef6f2d refactor(scripts): reorganize and standardize development scripts
Major restructuring of the scripts/ directory with consistent output
formatting, improved organization, and stricter error handling.

Breaking Changes:
- Updated development environment to Home Assistant 2025.7+
  - Removed Python 3.12 compatibility (HA 2025.7+ requires Python 3.13)
  - Updated all HA core requirements from 2025.7 requirement files
  - Added new dependencies: python-multipart, uv (for faster package management)
  - Updated GitHub Actions workflows to use Python 3.13

Changes:
- Created centralized output library (scripts/.lib/output.sh)
  - Unified color codes and Unicode symbols
  - Consistent formatting functions (log_header, log_success, log_error, etc.)
  - Support for embedded formatting codes (${BOLD}, ${GREEN}, etc.)

- Reorganized into logical subdirectories:
  - scripts/setup/ - Setup and maintenance scripts
    - bootstrap: Install/update dependencies (used in CI/CD)
    - setup: Full DevContainer setup (pyright, copilot, HACS)
    - reset: Reset config/ directory to fresh state (NEW)
    - sync-hacs: Sync HACS integrations
  - scripts/release/ - Release management scripts
    - prepare: Version bump and tagging
    - suggest-version: Semantic version suggestion
    - generate-notes: Release notes generation
    - check-if-released: Check release status
    - hassfest: Local integration validation

- Updated all scripts with:
  - set -euo pipefail for stricter error handling
  - Consistent SCRIPT_DIR pattern for reliable sourcing
  - Professional output with colors and emojis
  - Unified styling across all 17 scripts

- Removed redundant scripts:
  - scripts/update (was just wrapper around bootstrap)
  - scripts/json_schemas/ (moved to schemas/json/)

- Enhanced clean script:
  - Improved artifact cleanup
  - Better handling of accidental package installations
  - Hints for reset and deep clean options

- New reset script features:
  - Standard mode: Keep configuration.yaml
  - Full mode (--full): Reset configuration.yaml from git
  - Automatic re-setup after reset

- Updated documentation:
  - AGENTS.md: Updated script references and workflow guidance
  - docs/development/: Updated all references to new script structure

Impact: Development environment now requires Python 3.13 and Home Assistant
2025.7+. Developers get consistent, professional script output with better
error handling and logical organization. Single source of truth for styling
makes future updates trivial.
2025-11-26 13:11:52 +00:00
Julian Pawlowski
1a396a4faf docs(agents): update class naming documentation
Updated AGENTS.md:
- Fixed TibberPricesFlowHandler → TibberPricesConfigFlowHandler reference

Impact: Documentation now matches current code structure.
2025-11-25 20:44:40 +00:00
Julian Pawlowski
cca104dfc4 chore(dev): update dev environment configuration
DevContainer updates:
- .devcontainer/devcontainer.json: Added Python path configuration

Configuration updates:
- config/configuration.yaml: Added test home configuration

Impact: Improved development environment setup. No production changes.
2025-11-25 20:44:40 +00:00
Julian Pawlowski
3c69807c05 refactor(logging): use details logger for verbose period calculation logs
Moved verbose debug logging to separate _LOGGER_DETAILS logger:
- core.py: Outlier flex capping messages
- outlier_filtering.py: Spike detection, context validation, smoothing details
- period_building.py: Level filter details, gap tolerance info
- relaxation.py: Per-phase iteration details, filter combination attempts

Pattern: Main _LOGGER for high-level progress, _LOGGER_DETAILS for step-by-step

Benefits:
- Users can disable verbose logs via logger configuration
- Main DEBUG log stays readable (high-level flow)
- Details available when needed for troubleshooting

Added:
- period_overlap.py: Docstring for extend_period_if_adjacent()

Impact: Cleaner log output by default. Enable details logger
(homeassistant.components.tibber_prices.coordinator.period_handlers.details)
for deep debugging.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
9ae618fff9 refactor(config_flow): rename TibberPricesFlowHandler to TibberPricesConfigFlowHandler
Renamed main config flow handler class for clarity:
- TibberPricesFlowHandler → TibberPricesConfigFlowHandler

Updated imports in:
- config_flow.py (import alias)
- config_flow_handlers/__init__.py (exports)

Reason: More explicit name distinguishes from OptionsFlowHandler and
SubentryFlowHandler. Follows naming convention of other flow handlers.

Impact: No functional changes, improved code readability.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
6338f51527 refactor(services): rename service modules to match service names
Renamed service modules for consistency with service identifiers:
- apexcharts.py → get_apexcharts_yaml.py
- chartdata.py → get_chartdata.py
- Added: get_price.py (new service module)

Naming convention: Module names now match service names directly
(tibber_prices.get_apexcharts_yaml → get_apexcharts_yaml.py)

Impact: Improved code organization, easier to locate service implementations.
No functional changes.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
7c117a2267 docs(schemas): update JSON schemas for translation structure
Updated translation JSON schemas to reflect current implementation:
- translation_schema.json: Documents HA's official translation structure
  (config, options, selector paths, entity states)
- custom_translation_schema.json: Documents custom extension structure
  (entity descriptions not supported by HA schema)

Schema updates:
- Added time_units section (day, days, hour, hours, minute, minutes, ago, now)
- Documented selector.{translation_key}.options.{value} pattern
- Added account_choice selector structure

Impact: Provides validation and documentation for translation files.
Helps maintain consistency across all 5 language files (de, en, nb, nl, sv).
2025-11-25 20:44:39 +00:00
Julian Pawlowski
b6f5f1678f feat(services): add fetch_price_info_range service and update schema
Added new service for fetching historical/future price data:
- fetch_price_info_range: Query prices for arbitrary date ranges
- Supports start_time and end_time parameters
- Returns structured price data via service response
- Uses interval pool for efficient data retrieval

Service definition:
- services.yaml: Added fetch_price_info_range with date selectors
- services/__init__.py: Implemented handler with validation
- Response format: {"priceInfo": [...], "currency": "..."}

Schema updates:
- config_flow_handlers/schemas.py: Convert days slider to IntSelector
  (was NumberSelector with float, caused "2.0 Tage" display issue)

Impact: Users can fetch price data for custom date ranges programmatically.
Config flow displays clean integer values for day offsets.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
44f6ae2c5e feat(interval-pool): add intelligent interval caching and memory optimization
Implemented interval pool architecture for efficient price data management:

Core Components:
- IntervalPool: Central storage with timestamp-based index
- FetchGroupCache: Protected range management (day-before-yesterday to tomorrow)
- IntervalFetcher: Gap detection and optimized API queries
- TimestampIndex: O(1) lookup for price intervals

Key Features:
- Deduplication: Touch intervals instead of duplicating (memory efficient)
- GC cleanup: Removes dead intervals no longer referenced by index
- Gap detection: Only fetches missing ranges, reuses cached data
- Protected range: Keeps yesterday/today/tomorrow, purges older data
- Resolution support: Handles hourly (pre-Oct 2025) and quarter-hourly data

Integration:
- TibberPricesApiClient: Uses interval pool for all range queries
- DataUpdateCoordinator: Retrieves data from pool instead of direct API
- Transparent: No changes required in sensor/service layers

Performance Benefits:
- Reduces API calls by 70% (reuses overlapping intervals)
- Memory footprint: ~10KB per home (protects 384 intervals max)
- Lookup time: O(1) timestamp-based index

Breaking Changes: None (backward compatible integration layer)

Impact: Significantly reduces Tibber API load while maintaining data
freshness. Memory-efficient storage prevents unbounded growth.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
74789877ff test: fix async mocking and add noqa comments for private access
Fixed test issues:
- test_resource_cleanup.py: Use AsyncMock for async_unload_entry
  (was MagicMock, caused TypeError with async context)
- Added # noqa: SLF001 comments to all private member access in tests
  (18 instances - legitimate test access patterns)

Test files updated:
- test_resource_cleanup.py (AsyncMock fix)
- test_interval_pool_memory_leak.py (8 noqa comments)
- test_interval_pool_optimization.py (4 noqa comments)

Impact: All tests pass linting, async tests execute correctly.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
e04e38d09f refactor(logging): remove verbose debug logging from price enrichment
Removed excessive debug logging in enrich_price_info_with_differences():
- Deleted per-interval "Processing" messages (cluttered logs)
- Kept boundary INFO messages (enrichment start/skip counts)
- Removed unused variable expected_intervals_24h
- Removed unused parameter day_label from _process_price_interval()

Impact: Cleaner logs, no functional changes. Reduces log volume during
price data processing.
2025-11-25 20:44:39 +00:00
Julian Pawlowski
2449c28a88 feat(i18n): localize time offset descriptions and config flow strings
Added complete localization support for time offset descriptions:
- Convert hardcoded English strings "(X days ago)" to translatable keys
- Add time_units translations (day/days, hour/hours, minute/minutes, ago, now)
- Support singular/plural forms in all 5 languages (de, en, nb, nl, sv)
- German: Proper Dativ case "Tagen" with preposition "vor"
- Compact format for mixed offsets: "7 Tagen - 02:30"

Config flow improvements:
- Replace hardcoded "Enter new API token" with translated "Add new Tibber account API token"
- Use get_translation() for account_choice dropdown labels
- Fix SelectOptionDict usage (no mixing with translation_key parameter)
- Convert days slider from float to int (prevents "2.0 Tage" display)
- DurationSelector: default {"hours": 0, "minutes": 0} to fix validation errors

Translation keys added:
- selector.account_choice.options.new_token
- time_units (day, days, hour, hours, minute, minutes, ago, now)
- config.step.time_offset_description guidance text

Impact: Config flow works fully translated in all 5 languages with proper grammar.
2025-11-25 20:44:39 +00:00
dependabot[bot]
bab72ac341
chore(deps): bump actions/setup-python from 6.0.0 to 6.1.0 (#41)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 6.0.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](e797f83bcb...83679a892e)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-25 21:30:12 +01:00
dependabot[bot]
78ef7d1098
chore(deps): update pre-commit requirement (#38) 2025-11-24 22:26:21 +01:00
dependabot[bot]
9c86de20fc
chore(deps): bump home-assistant/actions (#39) 2025-11-24 22:25:28 +01:00
Julian Pawlowski
7e47ef5995 docs: fix attribute names in AGENTS.md examples
Updated attribute ordering documentation to use correct names:
- "periods" → "pricePeriods" (matches code since refactoring)
- "intervals" → "priceInfo" (flat list structure)

Impact: Documentation now matches actual code structure.
2025-11-24 16:26:23 +00:00
Julian Pawlowski
6b78cd757f refactor: simplify needs_tomorrow_data() - remove tomorrow_date parameter
Changed needs_tomorrow_data() to auto-calculate tomorrow date using
get_intervals_for_day_offsets([1]) helper instead of requiring explicit
tomorrow_date parameter.

Changes:
- coordinator/helpers.py: needs_tomorrow_data() signature simplified
  * Uses get_intervals_for_day_offsets([1]) to detect tomorrow intervals
  * No longer requires tomorrow_date parameter (calculated automatically)
  * Consistent with all other data access patterns

- coordinator/data_fetching.py: Removed tomorrow_date calculation and passing
  * Removed unused date import
  * Simplified method call: needs_tomorrow_data() instead of needs_tomorrow_data(tomorrow_date)

- sensor/calculators/lifecycle.py: Updated calls to _needs_tomorrow_data()
  * Removed tomorrow_date variable where it was only used for this call
  * Combined nested if statements with 'and' operator

Impact: Cleaner API, fewer parameters to track, consistent with other
helper functions that auto-calculate dates based on current time.
2025-11-24 16:26:08 +00:00
Julian Pawlowski
2de793cfda refactor: migrate from multi-home to single-home-per-coordinator architecture
Changed from centralized main+subentry coordinator pattern to independent
coordinators per home. Each config entry now manages its own home data
with its own API client and access token.

Architecture changes:
- API Client: async_get_price_info() changed from home_ids: set[str] to home_id: str
  * Removed GraphQL alias pattern (home0, home1, ...)
  * Single-home query structure without aliasing
  * Simplified response parsing (viewer.home instead of viewer.home0)

- Coordinator: Removed main/subentry distinction
  * Deleted is_main_entry() and _has_existing_main_coordinator()
  * Each coordinator fetches its own data independently
  * Removed _find_main_coordinator() and _get_configured_home_ids()
  * Simplified _async_update_data() - no subentry logic
  * Added _home_id instance variable from config_entry.data

- __init__.py: New _get_access_token() helper
  * Handles token retrieval for both parent and subentries
  * Subentries find parent entry to get shared access token
  * Creates single API client instance per coordinator

- Data structures: Flat single-home format
  * Old: {"homes": {home_id: {"price_info": [...]}}}
  * New: {"home_id": str, "price_info": [...], "currency": str}
  * Attribute name: "periods" → "pricePeriods" (consistent with priceInfo)

- helpers.py: Removed get_configured_home_ids() (no longer needed)
  * parse_all_timestamps() updated for single-home structure

Impact: Each home operates independently with its own lifecycle tracking,
caching, and period calculations. Simpler architecture, easier debugging,
better isolation between homes.
2025-11-24 16:24:37 +00:00
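The old and new data shapes from the commit above, laid out side by side with made-up sample values (only the keys come from the commit message):

```python
# Old: nested multi-home structure, access requires a home_id lookup
old_data = {
    "homes": {
        "home-1": {"price_info": [{"startsAt": "2025-11-24T10:00:00+01:00", "total": 0.28}]},
    },
}

# New: flat single-home structure, one coordinator per config entry
new_data = {
    "home_id": "home-1",
    "price_info": [{"startsAt": "2025-11-24T10:00:00+01:00", "total": 0.28}],
    "currency": "EUR",
}

intervals_old = old_data["homes"]["home-1"]["price_info"]
intervals_new = new_data["price_info"]  # no per-home indirection anymore
```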
Julian Pawlowski
981fb08a69 refactor(price_info): rework price data handling to use unified interval retrieval
- Introduced `get_intervals_for_day_offsets` helper to streamline access to price intervals for yesterday, today, and tomorrow.
- Updated various components to replace direct access to `priceInfo` with the new helper, ensuring a flat structure for price intervals.
- Adjusted calculations and data processing methods to accommodate the new data structure.
- Enhanced documentation to reflect changes in caching strategy and data structure.
2025-11-24 10:49:34 +00:00
Julian Pawlowski
294d84128b refactor(services): rename and reorganize custom services for clarity and functionality 2025-11-23 13:17:21 +00:00
Julian Pawlowski
9ee7f81164 fix(coordinator): invalidate transformation cache when source data changes
Fixes bug where lifecycle sensor attributes (data_completeness, tomorrow_available)
didn't update after tomorrow data was successfully fetched from API.

Root cause: DataTransformer had cached transformation data but no mechanism to detect
when source API data changed (only checked config and midnight turnover).

Changes:
- coordinator/data_transformation.py: Track source_data_timestamp and invalidate cache
  when timestamp changes (detects new API data arrival)
- coordinator/data_transformation.py: Integrate period calculation into DataTransformer
  (calculate_periods_fn parameter) for complete single-layer caching
- coordinator/core.py: Remove duplicate transformation cache (_cached_transformed_data,
  _last_transformation_config), simplify _transform_data_for_*() to direct delegation
- tests/test_tomorrow_data_refresh.py: Add 3 regression tests for cache invalidation
  (new timestamp, config change behavior, cache preservation)

Impact: Lifecycle sensor attributes now update correctly when new API data arrives.
Reduced code by ~40 lines in coordinator, consolidated caching to single layer.
All 350 tests passing.
2025-11-23 13:10:19 +00:00
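The cache-invalidation mechanism described above can be sketched as follows — illustrative names only, assuming the coordinator tracks a source-data timestamp alongside the cached transformation:

```python
class CachedTransformer:
    """Sketch of timestamp-based cache invalidation: the cached result is
    reused only while the source-data timestamp is unchanged, so newly
    fetched API data always triggers a fresh transformation."""

    def __init__(self, transform_fn):
        self._transform_fn = transform_fn
        self._cached = None
        self._source_ts = None

    def get(self, source_data: dict, source_ts: float):
        # Invalidate whenever new API data arrives (timestamp changed)
        if self._cached is None or source_ts != self._source_ts:
            self._cached = self._transform_fn(source_data)
            self._source_ts = source_ts
        return self._cached
```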
Julian Pawlowski
cfae3c9387 chore(release): bump version to 0.14.0 2025-11-23 11:20:16 +00:00
Julian Pawlowski
ea21b229ee refactor(calculators): consolidate duplicate data access patterns
Refactored Trend, Timing, and Lifecycle calculators to use BaseCalculator
helper methods instead of duplicating data access logic.

Changes:
- TrendCalculator: Simplified 12 lines of repeated price_info access to
  3-4 clean property calls (intervals_today/tomorrow, get_all_intervals)
- TimingCalculator: Replaced direct coordinator.data checks with has_data()
- LifecycleCalculator: Replaced 5 lines of nested gets with 2 helper calls

Benefits:
- Eliminated 10+ duplicate data access patterns
- Consistent None-handling across all calculators
- Single source of truth for coordinator data access
- Easier to maintain (changes propagate automatically)

BaseCalculator helpers used:
- has_data(): Replaces 'if not self.coordinator.data:' checks
- intervals_today/tomorrow: Direct property access to day intervals
- get_intervals(day): Safe day-specific interval retrieval
- get_all_intervals(): Combined yesterday+today+tomorrow
- coordinator_data: Property for coordinator.data access

Validation:
- Type checker: 0 errors, 0 warnings
- Tests: 347 passed, 2 skipped (no behavior change)
- Net: 19 deletions, 14 insertions (5 lines removed, patterns simplified)

Impact: Cleaner code, reduced duplication, consistent error handling.
Future calculator additions will automatically benefit from centralized
data access patterns.
2025-11-22 14:54:06 +00:00
Julian Pawlowski
ed08bc29da docs(architecture): document import architecture and dependency management
Added comprehensive 'Import Architecture and Dependency Management' section
to AGENTS.md documenting the calculator package's import patterns and
dependency flow.

Key documentation:
- Dependency flow: calculators → attributes/helpers (one-way, no circular)
- Hybrid Pattern: Trend/Volatility calculators build own attributes
  (intentional design from Nov 2025 refactoring)
- TYPE_CHECKING best practices: All 8 calculators use optimal pattern
- Import anti-patterns to avoid

Analysis findings:
- No circular dependencies detected (verified Jan 2025)
- All TYPE_CHECKING usage already optimal (no changes needed)
- Clean separation: attributes/helpers never import from calculators
- Backwards dependency (calculator → attributes) limited to 2 calculators

Validation:
- Type checker: 0 errors, 0 warnings
- Linter: All checks passed
- Tests: 347 passed, 2 skipped

Impact: Documents architectural decisions for future maintenance.
Provides clear guidelines for adding new calculators or modifying
import patterns without introducing circular dependencies.
2025-11-22 14:48:50 +00:00
Julian Pawlowski
36fef2da89 docs(agents): remove status tracking, focus on patterns only
Cleaned up AGENTS.md to focus on patterns and conventions:

Removed:
- "Current State (as of Nov 2025)" section
- "Classes that need renaming" outdated list
- "Action Required" checklist
- Temporal statements about current project state

Added:
- TypedDict exemption in "When prefix can be omitted" list
- Clear rationale: documentation-only, never instantiated

Rationale:
AGENTS.md documents patterns and conventions that help AI understand
the codebase structure. Status tracking belongs in git history or
planning documents. The file should be timeless guidance, not a
snapshot of work in progress.

Impact: Documentation is now focused on "how to write code correctly"
rather than "what state is the code in now".
2025-11-22 14:40:16 +00:00
Julian Pawlowski
3b11c6721e feat(types): add TypedDict documentation and BaseCalculator helpers
Phase 1.1 - TypedDict Documentation System:
- Created sensor/types.py with 14 TypedDict classes documenting sensor attributes
- Created binary_sensor/types.py with 3 TypedDict classes for binary sensors
- Added Literal types (PriceLevel, PriceRating, VolatilityLevel, DataCompleteness)
- Updated imports in sensor/attributes/__init__.py and binary_sensor/attributes.py
- Changed function signatures to use dict[str, Any] for runtime flexibility
- TypedDicts serve as IDE documentation, not runtime validation

Phase 1.2 - BaseCalculator Improvements:
- Added 8 smart data access methods to BaseCalculator:
  * get_intervals(day) - day-specific intervals with None-safety
  * intervals_today/tomorrow/yesterday - convenience properties
  * get_all_intervals() - combined yesterday+today+tomorrow
  * find_interval_at_offset(offset) - interval lookup with bounds checking
  * safe_get_from_interval(interval, key, default) - safe dict access
  * has_data() / has_price_info() - existence checks
  * get_day_intervals(day) - alias for consistency
- Refactored 5 calculator files to use new helper methods:
  * daily_stat.py: -11 lines (coordinator_data checks, get_intervals usage)
  * interval.py: -18 lines (eliminated find_price_data_for_interval duplication)
  * rolling_hour.py: -3 lines (simplified interval collection)
  * volatility.py: -4 lines (eliminated price_info local variable)
  * window_24h.py: -2 lines (replaced coordinator_data check)
  * Total: -38 lines of duplicate code eliminated
- Added noqa comment for lazy import (circular import avoidance)

Type Duplication Resolution:
- Identified duplication: Literal types in types.py vs string constants in const.py
- Attempted solution: Derive constants from Literal types using typing.get_args()
- Result: Circular import failure (const.py → sensor/types.py → sensor/__init__.py → const.py)
- Final solution: Keep string constants as single source of truth
- Added SYNC comments in all 3 files (const.py, sensor/types.py, binary_sensor/types.py)
- Accept manual synchronization to avoid circular dependencies
- Platform separation maintained (no cross-imports between sensor/ and binary_sensor/)

Impact: Developers get IDE autocomplete and type hints for attribute dictionaries.
Calculator code is more readable with fewer None-checks and clearer data access patterns.
Type/constant duplication documented with sync requirements.
2025-11-22 14:32:24 +00:00
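The documentation-only TypedDict pattern from Phase 1.1 might look like this — class and field names here are invented for illustration; the real definitions live in `sensor/types.py`:

```python
from typing import Literal, TypedDict

# Literal narrows allowed values for IDE hints; runtime code still passes plain strings
PriceLevel = Literal["very_cheap", "cheap", "normal", "expensive", "very_expensive"]


class IntervalAttributes(TypedDict, total=False):
    """Documents the attribute-dict shape for IDE autocomplete only.
    Runtime signatures keep dict[str, Any] — no validation happens here."""

    starts_at: str
    total: float
    level: PriceLevel
```

At runtime a TypedDict is just `dict`, which is exactly why it can document shapes without constraining the flexible dictionaries the sensors actually build.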
Julian Pawlowski
32857c0cc0 test: remove obsolete lifecycle callback tests
Removed tests for the lifecycle callback system that was removed in
commit 48d6e25.

Also fixed commit f373c01 which incorrectly added test_lifecycle_tomorrow_update.py
instead of deleting it - this commit properly removes it.

Changes:
- tests/test_chart_data_push_updates.py: Deleted (235 lines)
- tests/test_lifecycle_tomorrow_update.py: Deleted (174 lines)
- tests/test_resource_cleanup.py: Removed lifecycle callback test method

Impact: Test suite now has 343 tests (down from 349). All tests pass.
No functionality affected - only test cleanup.
2025-11-22 13:04:47 +00:00
Julian Pawlowski
2d0febdab3 fix(binary_sensor): remove 6-hour lookahead limit for period icons
Simplified _has_future_periods() to check for ANY future periods instead
of limiting to 6-hour window. This ensures icons show 'waiting' state
whenever periods are scheduled, not just within artificial time limit.

Also added pragmatic fallback in timing calculator _find_next_period():
when skip_current=True but only one future period exists, return it
anyway instead of showing 'unknown'. This prevents timing sensors from
showing unknown during active periods.

Changes:
- binary_sensor/definitions.py: Removed PERIOD_LOOKAHEAD_HOURS constant
- binary_sensor/core.py: Simplified _has_future_periods() logic
- sensor/calculators/timing.py: Added pragmatic fallback for single period

Impact: Better user experience - icons always show future periods, timing
sensors show values even during edge cases.
2025-11-22 13:04:17 +00:00
Julian Pawlowski
f373c01fbb test: remove obsolete lifecycle callback tests
Deleted test_lifecycle_tomorrow_update.py (2 tests) which validated the
now-removed lifecycle callback system.

These tests were rendered obsolete by the removal of the custom lifecycle
callback mechanism in favor of Home Assistant's standard coordinator pattern.

Impact: Test suite reduced from 355 to 349 tests, all passing.
2025-11-22 13:01:30 +00:00
Julian Pawlowski
48d6e2580a refactor(coordinator): remove redundant lifecycle callback system
Removed custom lifecycle callback push-update mechanism after confirming
it was redundant with Home Assistant's built-in DataUpdateCoordinator
pattern.

Root cause analysis showed HA's async_update_listeners() is called
synchronously (no await) immediately after _async_update_data() returns,
making separate lifecycle callbacks unnecessary.

Changes:
- coordinator/core.py: Removed lifecycle callback methods and notifications
- sensor/core.py: Removed lifecycle callback registration and cleanup
- sensor/attributes/lifecycle.py: Removed next_tomorrow_check attribute
- sensor/calculators/lifecycle.py: Removed get_next_tomorrow_check_time()

Impact: Simplified coordinator pattern, no user-visible changes. Standard
HA coordinator mechanism provides same immediate update guarantee without
custom callback complexity.
2025-11-22 13:01:17 +00:00
Julian Pawlowski
f2627a5292 fix(period_handlers): normalize flex and min_distance to absolute values
Fixed critical sign convention bug where negative user-facing values
(e.g., peak_price_flex=-20%) weren't normalized for internal calculations,
causing incorrect period filtering.

Changes:
- periods.py: Added abs() normalization for flex and min_distance_from_avg
- core.py: Added comment documenting flex normalization by get_period_config()
- level_filtering.py: Simplified check_interval_criteria() to work with normalized
  positive values only, removed complex negative price handling
- relaxation.py: Removed sign handling since values are pre-normalized

Internal convention:
- User-facing: Best price uses positive (+15%), Peak price uses negative (-20%)
- Internal: Always positive (0.15 or 0.20) with reverse_sort flag for direction

Added comprehensive regression tests:
- test_best_price_e2e.py: Validates Best price periods generate correctly
- test_peak_price_e2e.py: Validates Peak price periods generate correctly
- test_level_filtering.py: Unit tests for flex/distance filter logic

Impact: Peak price periods now generate correctly. Bug caused 100% FLEX
filtering (192/192 intervals blocked) → 0 periods found. Fix ensures
reasonable filtering (~40-50%) with periods successfully generated.
2025-11-22 13:01:01 +00:00
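The sign convention fixed above — negative user-facing values, always-positive internal fractions, direction carried by a flag — reduces to a sketch like this (function and key names are assumptions):

```python
def normalize_period_config(flex: float, min_distance_from_avg: float, *, is_peak: bool) -> dict:
    """Sketch of the normalization described in the commit: user-facing
    values may be negative (peak_price_flex=-20), internal math always uses
    positive fractions, and reverse_sort carries the direction instead."""
    return {
        "flex": abs(flex) / 100,                       # -20 -> 0.20
        "min_distance": abs(min_distance_from_avg) / 100,
        "reverse_sort": is_peak,                       # direction via flag, not sign
    }
```

With the sign stripped at the boundary, the filtering code downstream never needs the "complex negative price handling" the commit removed.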
Julian Pawlowski
476b0f6ef8 chore(release): bump version to 0.13.0 2025-11-22 04:47:44 +00:00
Julian Pawlowski
f128d00c99 test(period): document period calculation testing strategy
Added documentation file explaining why period calculation functions
are tested via integration tests rather than unit tests.

Rationale:
- Period building requires full coordinator context (TimeService, price_context)
- Complex enriched price data with multiple calculated fields
- Helper functions (split_intervals_by_day, calculate_reference_prices)
  are simple transformations that can't fail independently
- Integration tests provide better coverage than mocked unit tests

Testing strategy:
- test_midnight_periods.py: Period calculation across day boundaries
- test_midnight_turnover.py: Cache invalidation and recalculation
- docs/development/period-calculation-theory.md: Algorithm documentation

Impact: Clarifies testing approach for future contributors. Prevents
wasted effort on low-value unit tests for complex integrated functions.
2025-11-22 04:47:09 +00:00
Julian Pawlowski
a85c37e5ca test(time): add boundary tolerance and DST handling tests
Added 40+ tests for TibberPricesTimeService:

Quarter-hour rounding with ±2s tolerance:
- 17 tests covering boundary cases (exact, within tolerance, outside)
- Midnight wrap-around scenarios
- Microsecond precision edge cases

DST handling (23h/25h days):
- Standard day: 96 intervals (24h × 4)
- Spring DST: 92 intervals (23h × 4)
- Fall DST: 100 intervals (25h × 4)
- Note: Full DST tests require Europe/Berlin timezone setup

Day boundaries and interval offsets:
- Yesterday/today/tomorrow boundary calculation
- Interval offset (current/next/previous) with day wrapping
- Time comparison helpers (is_current_interval)

Impact: Validates critical time handling logic. Ensures quarter-hour
rounding works correctly for sensor updates despite HA scheduling jitter.
2025-11-22 04:46:53 +00:00
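The ±2s boundary tolerance being tested above can be sketched like this — a hypothetical helper, not the `TibberPricesTimeService` implementation itself:

```python
from datetime import datetime, timedelta


def round_to_quarter_hour(dt: datetime, tolerance_s: int = 2) -> datetime:
    """Sketch: timestamps within +/-tolerance_s of a :00/:15/:30/:45 boundary
    snap to that boundary (compensating for HA scheduling jitter); anything
    else falls back to the containing quarter-hour."""
    base = dt.replace(minute=0, second=0, microsecond=0)
    seconds_into_hour = (dt - base).total_seconds()
    nearest = round(seconds_into_hour / 900) * 900  # 900 s = 15 min
    if abs(seconds_into_hour - nearest) <= tolerance_s:
        return base + timedelta(seconds=nearest)
    # Outside tolerance: keep the quarter-hour the timestamp is inside
    return base + timedelta(seconds=int(seconds_into_hour // 900) * 900)
```

Using `timedelta` arithmetic rather than replacing fields directly also gives midnight wrap-around for free (23:59:59 snaps into the next day).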
Julian Pawlowski
91ef2806e5 test(timers): comprehensive timer architecture validation
Added 60+ tests for three-timer architecture:

Timer #1 (API polling): next_api_poll_time calculation
- 8 tests covering timer offset calculation before/after 13:00
- Tests tomorrow data presence logic
- Verifies minute/second offset preservation

Timer #2 (quarter-hour refresh): :00, :15, :30, :45 boundaries
- 10 tests covering registration, cancellation, callback execution
- Verifies exact boundary timing (second=0)
- Tests independence from Timer #3

Timer #3 (minute refresh): :00, :30 every minute
- 6 tests covering 30-second boundary registration
- Verifies timing sensors assignment
- Tests countdown/progress update frequency

Sensor assignment:
- 20+ tests mapping 80+ sensors to correct timers
- Verifies TIME_SENSITIVE and MINUTE_UPDATE constants
- Catches missing/incorrect timer assignments

Impact: Comprehensive validation of timer architecture prevents
regression in entity update scheduling. Documents which sensors
use which timers.
2025-11-22 04:46:30 +00:00
Julian Pawlowski
d1376c8921 test(cleanup): add comprehensive resource cleanup tests
Added 40+ tests verifying memory leak prevention patterns:

- Listener cleanup: Time-sensitive, minute-update, lifecycle callbacks
- Timer cancellation: Quarter-hour, minute timers
- Config entry cleanup: Options update listener via async_on_unload
- Cache invalidation: Config, period, trend caches
- Storage cleanup: Cache files deleted on entry removal

Tests verify cleanup patterns exist in code (not full integration tests
due to complex mocking requirements).

Impact: Documents and tests cleanup contracts for future maintainability.
Prevents memory leaks when entities removed or config changed.
2025-11-22 04:46:11 +00:00
Julian Pawlowski
c7f6843c5b fix(sensors): ensure connection/tomorrow_data/lifecycle consistency
Fixed state inconsistencies during auth errors:

Bug #4: tomorrow_data_available incorrectly returns True during auth failure
- Now returns None (unknown) when coordinator.last_exception is ConfigEntryAuthFailed
- Prevents misleading "data available" when API connection lost

Bug #5: Sensor states inconsistent during error conditions
- connection: False during auth error (even with cached data)
- tomorrow_data_available: None during auth error (cannot verify)
- lifecycle_status: "error" during auth error

Changes:
- binary_sensor/core.py: Check last_exception before evaluating tomorrow data
- tests: 25 integration tests covering all error scenarios

Impact: All three sensors show consistent states during auth errors,
API timeouts, and normal operation. No misleading "available" status
when connection is lost.
2025-11-22 04:45:57 +00:00
Julian Pawlowski
85fe9666a7 feat(coordinator): add atomic midnight turnover coordination
Introduced TibberPricesMidnightHandler to prevent duplicate midnight
turnover when multiple timers fire simultaneously.

Problem: Timer #1 (API poll) and Timer #2 (quarter-hour refresh) both
wake at midnight, each detecting day change and triggering cache clear.
Race condition caused duplicate turnover operations.

Solution:
- Atomic flag coordination: First timer sets flag, subsequent timers skip
- Persistent state survives HA restart (cache stores last_turnover_time)
- Day-boundary detection: Compares current.date() vs last_check.date()
- 13 comprehensive tests covering race conditions and HA restart scenarios

Architecture:
- coordinator/midnight_handler.py: 165 lines, atomic coordination logic
- coordinator/core.py: Integrated handler in coordinator initialization
- coordinator/listeners.py: Delegate midnight check to handler

Impact: Eliminates duplicate cache clears at midnight. Single atomic
turnover operation regardless of how many timers fire simultaneously.
2025-11-22 04:45:41 +00:00
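The atomic-flag coordination from the commit above can be sketched as below. This is a minimal illustration, not the actual `TibberPricesMidnightHandler` (which also persists the last turnover time through the cache so it survives an HA restart):

```python
from datetime import datetime


class MidnightHandler:
    """Sketch: only the first timer crossing a day boundary performs turnover."""

    def __init__(self):
        self._last_turnover_date = None  # persisted in the cache in the real code

    def maybe_turnover(self, now: datetime) -> bool:
        """Return True only for the first caller on a new calendar day."""
        if self._last_turnover_date == now.date():
            return False  # another timer already handled this day
        # Set the flag *before* doing the actual work, so a second timer
        # firing in the same event-loop turn is skipped.
        self._last_turnover_date = now.date()
        return True
```

Because Home Assistant runs callbacks on a single event loop, a plain attribute check-then-set like this is effectively atomic; no lock is needed.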
Julian Pawlowski
9c3c094305 fix(calculations): handle negative electricity prices correctly
Fixed multiple calculation issues with negative prices (Norway/Germany
renewable surplus scenarios):

Bug #6: Rating threshold validation with dead code
- Added threshold validation (low >= high) with warning
- Returns NORMAL as fallback for misconfigured thresholds

Bug #7: Min/Max functions returning 0.0 instead of None
- Changed default from 0.0 to None when window is empty
- Prevents misinterpretation (0.0 looks like price with negatives)

Bug #9: Period price diff percentage wrong sign with negative reference
- Use abs(ref_price) in percentage calculation
- Correct percentage direction for negative prices

Bug #10: Trend diff percentage wrong sign with negative current price
- Use abs(current_interval_price) in percentage calculation
- Correct trend direction when prices cross zero

Bug #11: later_half_diff calculation failed for negative prices
- Changed condition from `if current_interval_price > 0` to `!= 0`
- Use abs(current_interval_price) for percentage

Changes:
- utils/price.py: Add threshold validation, use abs() in percentages
- utils/average.py: Return None instead of 0.0 for empty windows
- period_statistics.py: Use abs() for reference prices
- trend.py: Use abs() for current prices, fix zero-check condition
- tests: 95+ new tests covering negative/zero/mixed price scenarios

Impact: All calculations work correctly with negative electricity prices.
Percentages show correct direction regardless of sign.
2025-11-22 04:45:23 +00:00
Julian Pawlowski
9a6eb44382 refactor(config): use negative values for Best Price min_distance
Best Price min_distance now uses negative values (-50 to 0) to match
semantic meaning "below average". Peak Price continues using positive
values (0 to 50) for "above average".

Uniform formula: avg * (1 + distance/100) works for both period types.
Sign indicates direction: negative = toward MIN (cheap), positive = toward MAX (expensive).

Changes:
- const.py: DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG = -5 (was 5)
- schemas.py: Best Price range -50 to 0, Peak Price range 0 to 50
- validators.py: Separate validate_best_price_distance_percentage()
- level_filtering.py: Simplified to uniform formula (removed conditionals)
- translations: Separate error messages for Best/Peak distance validation
- tests: 37 comprehensive validator tests (100% coverage)

Impact: Configuration UI now visually represents direction relative to average.
Users see intuitive negative values for "below average" pricing.
2025-11-22 04:44:57 +00:00
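The uniform formula from the commit above can be sketched in one line; the function name is illustrative:

```python
def distance_threshold(avg_price: float, distance_pct: float) -> float:
    """avg * (1 + distance/100): one formula for both period types.

    Negative distance moves the threshold toward the daily minimum
    (Best Price), positive toward the maximum (Peak Price).
    """
    return avg_price * (1 + distance_pct / 100)
```

With a daily average of 0.30, a Best Price distance of -5 yields 0.285 while a Peak Price distance of +10 yields 0.33, so the sign alone encodes the direction and no per-type conditional is needed.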
Julian Pawlowski
215ac02302 feat(sensors): add lifecycle callback for chart_data_export sensor
chart_data_export now registers lifecycle callback for immediate
updates when coordinator data changes ("fresh" lifecycle state).
Previously only updated via polling intervals.

Changes:
- Register callback in sensor constructor (when entity_key matches)
- Callback triggers async_write_ha_state() on "fresh" lifecycle
- 5 new tests covering callback registration and triggering

Impact: Chart data export updates immediately on API data arrival,
enabling real-time dashboard updates without polling delay.
2025-11-22 04:44:38 +00:00
Julian Pawlowski
49866f26fa fix(coordinator): use coordinator update timestamp for cache validity
Cache validity now checks _last_coordinator_update (within 30min)
instead of _api_calls_today counter. Fixes false "stale" status
when coordinator runs every 15min but cache validation was only
checking API call counter.

Bug #1: Cache validity shows "stale" at 05:57 AM
Bug #2: Cache age calculation incorrect after midnight turnover
Bug #3: get_cache_validity inconsistent with cache_age sensor

Changes:
- Coordinator: Use _last_coordinator_update for cache validation
- Lifecycle: Extract cache validation to dedicated helper function
- Tests: 7 new tests covering midnight scenarios and edge cases

Impact: Cache validity sensor now accurately reflects coordinator
activity, not just explicit API calls. Correctly handles midnight
turnover without false "stale" status.
2025-11-22 04:44:22 +00:00
Julian Pawlowski
c6f41b1aa5 fix(manifest): remove integration_type field 2025-11-22 03:51:58 +00:00
Julian Pawlowski
c0069e32b8 fix(listeners): ensure both normal and time-sensitive listeners are notified after midnight turnover 2025-11-21 23:57:04 +00:00
Julian Pawlowski
f60b5990ae test: add pytest framework and midnight-crossing tests
Set up pytest with Home Assistant support and created 6 tests for
midnight-crossing period logic (5 unit tests + 1 integration test).

Added pytest configuration, test dependencies, test runner script
(./scripts/test), and comprehensive tests for group_periods_by_day()
and midnight turnover consistency.

All tests pass in 0.12s.

Impact: Provides regression testing for midnight-crossing period bugs.
Tests validate periods remain visible across midnight turnover.
2025-11-21 23:47:01 +00:00
Julian Pawlowski
47b0a298d4 feat(periods): add midnight-crossing periods and day volatility attributes
Periods can now naturally cross midnight boundaries, and new diagnostic
attributes help users understand price classification changes at midnight.

**New Features:**

1. Midnight-Crossing Period Support (relaxation.py):
   - group_periods_by_day() assigns periods to ALL spanned days
   - Periods crossing midnight appear in both yesterday and today
   - Enables period formation across calendar day boundaries
   - Ensures min_periods checking works correctly at midnight

2. Extended Price Data Window (relaxation.py):
   - Period calculation now uses full 3-day data (yesterday+today+tomorrow)
   - Enables natural period formation without artificial midnight cutoff
   - Removed date filter that excluded yesterday's prices

3. Day Volatility Diagnostic Attributes (period_statistics.py, core.py):
   - day_volatility_%: Daily price spread as percentage (span/avg × 100)
   - day_price_min/max/span: Daily price range in minor currency (ct/øre)
   - Helps detect when midnight classification changes are economically significant
   - Uses period start day's reference prices for consistency

**Documentation:**

4. Design Principles (period-calculation-theory.md):
   - Clarified per-day evaluation principle (always was the design)
   - Added comprehensive section on midnight boundary handling
   - Documented volatility threshold separation (sensor vs period filters)
   - Explained market context for midnight price jumps (EPEX SPOT timing)

5. User Guides (period-calculation.md, automation-examples.md):
- Added "Midnight Price Classification Changes" troubleshooting section
   - Provided automation examples using volatility attributes
   - Explained why Best→Peak classification can change at midnight
   - Documented level filter volatility threshold behavior

**Architecture:**

- Per-day evaluation: Each interval evaluated against its OWN day's min/max/avg
  (not period start day) ensures mathematical correctness across midnight
- Period boundaries: Periods can naturally cross midnight but may split when
  consecutive days differ significantly (intentional, mathematically correct)
- Volatility thresholds: Sensor thresholds (user-configurable) remain separate
  from period filter thresholds (fixed internal) to prevent unexpected behavior

Impact: Periods crossing midnight are now consistently visible before and
after midnight turnover. Users can understand and handle edge cases where
price classification changes at midnight on low-volatility days.
2025-11-21 23:18:46 +00:00
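The core idea of `group_periods_by_day()` assigning periods to ALL spanned days can be sketched as below. The real function operates on the integration's period dicts; plain `(start, end)` datetime tuples are used here for illustration:

```python
from datetime import date, datetime, timedelta


def group_periods_by_day(periods):
    """Assign each period to every calendar day it touches, so a period
    crossing midnight appears under both yesterday and today."""
    by_day: dict[date, list] = {}
    for start, end in periods:
        day = start.date()
        while day <= end.date():
            by_day.setdefault(day, []).append((start, end))
            day += timedelta(days=1)
    return by_day
```

A period from 23:00 to 01:00 therefore shows up in both days' groups, which is what keeps min_periods checking consistent across the midnight turnover.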
dependabot[bot]
dd12f97207
chore(deps): bump astral-sh/setup-uv from 7.1.3 to 7.1.4 (#34)
Bumps [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) from 7.1.3 to 7.1.4.
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](5a7eac68fb...1e862dfacb)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: 7.1.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-21 21:22:38 +01:00
dependabot[bot]
ed90004f63
chore(deps): bump actions/checkout from 5.0.1 to 6.0.0 (#36)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v5.0.1...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-21 21:22:27 +01:00
dependabot[bot]
b2ef3b2a29
chore(deps): bump home-assistant/actions (#35)
Bumps [home-assistant/actions](https://github.com/home-assistant/actions) from 8ca6e134c077479b26138bd33520707e8d94ef59 to 01a62fa0b7ab4a0ac894184f48a82477812dca4b.
- [Release notes](https://github.com/home-assistant/actions/releases)
- [Commits](8ca6e134c0...01a62fa0b7)

---
updated-dependencies:
- dependency-name: home-assistant/actions
  dependency-version: 01a62fa0b7ab4a0ac894184f48a82477812dca4b
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-21 21:22:16 +01:00
Julian Pawlowski
2e5b48192a chore(release): bump version to 0.12.1 2025-11-21 18:33:18 +00:00
Julian Pawlowski
e35729d9b7 fix(coordinator): tomorrow sensors show unknown after 13:00 data fetch
Synchronized coordinator._cached_price_data after API calls to ensure tomorrow data is available for sensor value calculations and lifecycle state detection.

Impact: Tomorrow sensors now display values correctly after afternoon data fetch. Lifecycle sensor status remains stable without flickering between "searching_tomorrow" and other states.
2025-11-21 18:32:40 +00:00
Julian Pawlowski
f6b553d90e fix(periods): restore relaxation metadata marking with correct sign handling
Restored mark_periods_with_relaxation() function and added call in
relax_all_prices() to properly mark periods found through relaxation.

Problem: Periods found via relaxation were missing metadata attributes:
- relaxation_active
- relaxation_level
- relaxation_threshold_original_%
- relaxation_threshold_applied_%

These attributes are expected by:
- period_overlap.py: For merging periods with correct relaxation info
- binary_sensor/attributes.py: For displaying relaxation info to users

Implementation:
- Added reverse_sort parameter to preserve sign semantics
- For Best Price: Store positive thresholds (e.g., +15%, +18%)
- For Peak Price: Store negative thresholds (e.g., -20%, -23%)
- Mark periods immediately after calculate_periods() and before
  resolve_period_overlaps() so metadata is preserved during merging

Impact: Users can now see which periods were found through relaxation
and at what flex threshold. Peak Price periods show negative thresholds
matching the user's configuration semantics (negative = below maximum).
2025-11-21 17:40:15 +00:00
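The sign handling via `reverse_sort` can be sketched as follows. The attribute names come from the commit message; the dict-based period representation and the exact signature are illustrative assumptions:

```python
def mark_periods_with_relaxation(periods, level, original_pct, applied_pct,
                                 *, reverse_sort=False):
    """Attach relaxation metadata to each period.

    reverse_sort=True is assumed to mean Peak Price, which stores
    negative thresholds (negative = below maximum) to match the
    user's configuration semantics.
    """
    sign = -1 if reverse_sort else 1
    for period in periods:
        period["relaxation_active"] = level > 0
        period["relaxation_level"] = level
        period["relaxation_threshold_original_%"] = sign * original_pct
        period["relaxation_threshold_applied_%"] = sign * applied_pct
    return periods
```

Marking happens right after `calculate_periods()` so that `resolve_period_overlaps()` can merge periods without losing this metadata.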
Julian Pawlowski
14b68a504b refactor(config): optimize volatility thresholds with separate ranges and improved UX
Volatility Threshold Optimization:
- Replaced global MIN/MAX_VOLATILITY_THRESHOLD (0-100%) with six separate
  constants for overlapping ranges per threshold level
- MODERATE: 5.0-25.0% (was: 0-100%)
- HIGH: 20.0-40.0% (was: 0-100%)
- VERY_HIGH: 35.0-80.0% (was: 0-100%)
- Added detailed comments explaining ranges and cascading requirements

Validators:
- Added three specific validation functions (one per threshold level)
- Added cross-validation ensuring MODERATE < HIGH < VERY_HIGH
- Added fallback to existing option values for completeness check
- Updated error keys to specific messages per threshold level

UI Improvements:
- Changed NumberSelector mode: BOX → SLIDER (consistency with other config steps)
- Changed step size: 0.1% → 1.0% (better UX, sufficient precision)
- Updated min/max ranges to match new validation constants

Translations:
- Removed: "invalid_volatility_threshold" (generic)
- Added: "invalid_volatility_threshold_moderate/high/very_high" (specific ranges)
- Added: "invalid_volatility_thresholds" (cross-validation error)
- Updated all 5 languages (de, en, nb, nl, sv)

Files modified:
- config_flow_handlers/options_flow.py: Updated validation logic
- config_flow_handlers/schemas.py: Updated NumberSelector configs
- config_flow_handlers/validators.py: Added specific validators + cross-validation
- const.py: Replaced global constants with six specific constants
- translations/*.json: Updated error messages (5 languages)

Impact: Users get clearer validation errors with specific ranges shown,
better UX with sliders and appropriate step size, and guaranteed
threshold ordering (MODERATE < HIGH < VERY_HIGH).
2025-11-21 17:31:07 +00:00
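The per-level ranges plus cross-validation can be sketched as below. The range constants mirror the commit message; the error-key dict shape follows the usual Home Assistant options-flow pattern but is an assumption here:

```python
# Overlapping ranges per threshold level, as described in the commit
MODERATE_RANGE = (5.0, 25.0)
HIGH_RANGE = (20.0, 40.0)
VERY_HIGH_RANGE = (35.0, 80.0)


def validate_volatility_thresholds(moderate, high, very_high):
    """Return a dict of error keys; empty dict means the input is valid."""
    errors = {}
    for name, value, (lo, hi) in (
        ("moderate", moderate, MODERATE_RANGE),
        ("high", high, HIGH_RANGE),
        ("very_high", very_high, VERY_HIGH_RANGE),
    ):
        if not lo <= value <= hi:
            errors[name] = f"invalid_volatility_threshold_{name}"
    # Cross-validation: cascading order must hold
    if not errors and not (moderate < high < very_high):
        errors["base"] = "invalid_volatility_thresholds"
    return errors
```

Because the ranges overlap (e.g., 20-25% is legal for both MODERATE and HIGH), the ordering check is what ultimately guarantees MODERATE < HIGH < VERY_HIGH.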
Julian Pawlowski
0fd98554ae refactor(entity): switch description content based on extended_descriptions
Changed description attribute behavior from "add separate long_description
attribute" to "switch description content" when CONF_EXTENDED_DESCRIPTIONS
is enabled.

OLD: description always shown, long_description added as separate attribute
NEW: description content switches between short and long based on config

Implementation:
- Check extended_descriptions flag BEFORE loading translation
- Load "long_description" key if enabled, fallback to "description" if missing
- Assign loaded content to "description" attribute (same key always)
- usage_tips remains separate attribute (only when extended=true)
- Updated both sync (entities) and async (services) versions

Added PLR0912 noqa: Branch complexity justified by feature requirements
(extended check + fallback logic + position handling).

Impact: Users see more detailed descriptions when extended mode enabled,
without attribute clutter. Fallback ensures robustness if long_description
missing in translations.
2025-11-21 17:30:29 +00:00
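The content-switching logic with fallback can be sketched like this. The translation keys follow the commit message; the plain-dict lookup is an illustrative simplification of the real translation loading:

```python
def resolve_description(translations: dict, *, extended: bool) -> str:
    """Return the text to assign to the single 'description' attribute.

    With extended mode on, prefer 'long_description' but fall back to
    'description' if the long text is missing in the translations.
    """
    if extended:
        return translations.get("long_description") or translations["description"]
    return translations["description"]
```

The key point is that the attribute name never changes; only its content does, which avoids the attribute clutter of the old separate `long_description` attribute.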
Julian Pawlowski
7a1675a55a fix(api): initialize time attribute to prevent AttributeError
Fixed uninitialized self.time attribute causing AttributeError during
config entry creation. Added explicit initialization to None with
Optional type annotation and guard in _get_price_info_for_specific_homes().

Impact: Config flow no longer crashes when creating initial config entry.
Users can complete setup without errors.
2025-11-21 17:29:04 +00:00
Julian Pawlowski
ebd1b8ddbf chore: Enhance validation logic and constants for options configuration flow
- Added new validation functions for various parameters including flexibility percentage, distance percentage, minimum periods, gap count, relaxation attempts, price rating thresholds, volatility threshold, and price trend thresholds.
- Updated constants in `const.py` to define maximum and minimum limits for the new validation criteria.
- Improved error messages in translations for invalid parameters to provide clearer guidance to users.
- Adjusted existing validation functions to ensure they align with the new constants and validation logic.
2025-11-21 13:57:35 +00:00
512 changed files with 139984 additions and 6865 deletions


@@ -1,18 +1,29 @@
 {
     "name": "jpawlowski/hass.tibber_prices",
-    "image": "mcr.microsoft.com/devcontainers/python:3.13",
-    "postCreateCommand": "scripts/setup",
+    "image": "mcr.microsoft.com/devcontainers/python:3.14",
+    "postCreateCommand": "bash .devcontainer/setup-git.sh && scripts/setup/setup",
     "postStartCommand": "scripts/motd",
     "containerEnv": {
-        "PYTHONASYNCIODEBUG": "1"
+        "PYTHONASYNCIODEBUG": "1",
+        "TIBBER_PRICES_DEV": "1"
     },
     "forwardPorts": [
-        8123
+        8123,
+        3000,
+        3001
     ],
     "portsAttributes": {
         "8123": {
             "label": "Home Assistant",
             "onAutoForward": "notify"
+        },
+        "3000": {
+            "label": "Docusaurus User Docs",
+            "onAutoForward": "notify"
+        },
+        "3001": {
+            "label": "Docusaurus Developer Docs",
+            "onAutoForward": "notify"
         }
     },
     "customizations": {
@@ -59,7 +70,7 @@
     ],
     "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
     "python.analysis.extraPaths": [
-        "${workspaceFolder}/.venv/lib/python3.13/site-packages"
+        "${workspaceFolder}/.venv/lib/python3.14/site-packages"
     ],
     "python.terminal.activateEnvironment": true,
     "python.terminal.activateEnvInCurrentTerminal": true,
@@ -94,23 +105,30 @@
     "fileMatch": [
         "homeassistant/components/*/manifest.json"
     ],
-    "url": "${containerWorkspaceFolder}/scripts/json_schemas/manifest_schema.json"
+    "url": "${containerWorkspaceFolder}/schemas/json/manifest_schema.json"
 },
 {
     "fileMatch": [
         "homeassistant/components/*/translations/*.json"
     ],
-    "url": "${containerWorkspaceFolder}/scripts/json_schemas/translation_schema.json"
+    "url": "${containerWorkspaceFolder}/schemas/json/translation_schema.json"
 }
-]
+],
+"git.useConfigOnly": false
 }
 }
 },
+"mounts": [
+    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.gitconfig,target=/home/vscode/.gitconfig.host,type=bind,consistency=cached"
+],
 "remoteUser": "vscode",
 "features": {
     "ghcr.io/devcontainers/features/github-cli:1": {},
+    "ghcr.io/flexwie/devcontainer-features/op:1": {
+        "version": "latest"
+    },
     "ghcr.io/devcontainers/features/node:1": {
-        "version": "22"
+        "version": "24"
     },
     "ghcr.io/devcontainers/features/rust:1": {
         "version": "latest",

.devcontainer/setup-git.sh Executable file

@@ -0,0 +1,99 @@
#!/bin/bash
# Setup Git configuration from host
# This script is idempotent and can be run multiple times safely.

# Exit on error
set -e

# Check if host gitconfig exists
if [ ! -f ~/.gitconfig.host ]; then
    echo "No host .gitconfig found, skipping Git setup"
    exit 0
fi

echo "Setting up Git configuration from host..."

# Extract and set user info
USER_NAME=$(grep -A 2 '^\[user\]' ~/.gitconfig.host | grep 'name' | sed 's/.*= //' | xargs)
USER_EMAIL=$(grep -A 2 '^\[user\]' ~/.gitconfig.host | grep 'email' | sed 's/.*= //' | xargs)

if [ -n "$USER_NAME" ]; then
    CURRENT_NAME=$(git config --global user.name 2>/dev/null || echo "")
    if [ "$CURRENT_NAME" != "$USER_NAME" ]; then
        git config --global user.name "$USER_NAME"
        echo "✓ Set user.name: $USER_NAME"
    else
        echo "  user.name already set: $USER_NAME"
    fi
fi

if [ -n "$USER_EMAIL" ]; then
    CURRENT_EMAIL=$(git config --global user.email 2>/dev/null || echo "")
    if [ "$CURRENT_EMAIL" != "$USER_EMAIL" ]; then
        git config --global user.email "$USER_EMAIL"
        echo "✓ Set user.email: $USER_EMAIL"
    else
        echo "  user.email already set: $USER_EMAIL"
    fi
fi

# Set safe defaults for container
git config --global init.defaultBranch main
git config --global pull.rebase false
git config --global merge.conflictStyle diff3
git config --global submodule.recurse true
git config --global color.ui true
echo "✓ Set Git defaults"

# Copy useful aliases (skip if they have macOS-specific paths)
if grep -q '^\[alias\]' ~/.gitconfig.host; then
    echo "✓ Syncing Git aliases..."
    # First, collect all aliases from host config
    TEMP_ALIASES=$(mktemp)
    sed -n '/^\[alias\]/,/^\[/p' ~/.gitconfig.host | \
        grep -v '^\[' | \
        grep -v '^$' | \
        while IFS= read -r line; do
            # Skip aliases with macOS-specific paths
            if echo "$line" | grep -q -E '/(Applications|usr/local)'; then
                continue
            fi
            echo "$line" >> "$TEMP_ALIASES"
        done
    # Apply each alias (git config --global overwrites existing values = idempotent)
    if [ -s "$TEMP_ALIASES" ]; then
        while IFS= read -r line; do
            ALIAS_NAME=$(echo "$line" | awk '{print $1}')
            ALIAS_VALUE=$(echo "$line" | sed "s/^$ALIAS_NAME = //")
            git config --global "alias.$ALIAS_NAME" "$ALIAS_VALUE" 2>/dev/null || true
        done < "$TEMP_ALIASES"
        echo "  Synced $(wc -l < "$TEMP_ALIASES") aliases"
    fi
    rm -f "$TEMP_ALIASES"
fi

# Disable GPG signing in container (1Password SSH signing doesn't work in DevContainers)
# SSH agent forwarding works for git push/pull, but SSH signing requires direct
# access to 1Password app which isn't available in the container.
#
# For signed commits: Make final commits on host macOS where 1Password is available.
# The container is for development/testing - pre-commit hooks will still run.
CURRENT_SIGNING=$(git config --global commit.gpgsign 2>/dev/null || echo "false")
if [ "$CURRENT_SIGNING" != "false" ]; then
    echo "  Disabling commit signing in container (1Password not accessible)"
    echo "  → For signed commits, commit from macOS terminal outside container"
    git config --global commit.gpgsign false
else
    echo "  Commit signing already disabled"
fi

# Keep the signing key info for reference, but don't use it
SIGNING_KEY=$(grep 'signingkey' ~/.gitconfig.host 2>/dev/null | sed 's/.*= //' | xargs || echo "")
if [ -n "$SIGNING_KEY" ]; then
    echo "  → Your signing key: ${SIGNING_KEY:0:20}... (available on host)"
fi

echo "✓ Git configuration complete"

.github/FUNDING.yml vendored Normal file

@ -0,0 +1,4 @@
# These are supported funding model platforms
github: [ jpawlowski ]
buy_me_a_coffee: jpawlowski


@@ -20,7 +20,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v6.0.0
+        uses: actions/checkout@v6
         with:
          fetch-depth: 0 # Need full history for git describe
@@ -43,13 +43,13 @@ jobs:
            echo "✗ Tag v${{ steps.manifest.outputs.version }} does not exist yet"
          fi
-      - name: Validate version format
+      - name: Validate version format (stable or beta)
        if: steps.tag_check.outputs.exists == 'false'
        run: |
          VERSION="${{ steps.manifest.outputs.version }}"
-          if ! echo "$VERSION" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+$'; then
+          if ! echo "$VERSION" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+(b[0-9]+)?$'; then
            echo "❌ Invalid version format: $VERSION"
-            echo "Expected format: X.Y.Z (e.g., 1.0.0)"
+            echo "Expected format: X.Y.Z or X.Y.ZbN (e.g., 1.0.0, 0.25.0b0)"
            exit 1
          fi
          echo "✓ Version format valid: $VERSION"

.github/workflows/docusaurus.yml vendored Normal file

@ -0,0 +1,163 @@
name: Deploy Docusaurus Documentation (Dual Sites)

on:
  push:
    branches: [main]
    paths:
      - 'docs/**'
      - '.github/workflows/docusaurus.yml'
    tags:
      - 'v*.*.*'
  workflow_dispatch:

# Concurrency control: cancel in-progress deployments
# Pattern from GitHub Actions best practices for deployment workflows
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: write
  pages: write
  id-token: write

jobs:
  deploy:
    name: Build and Deploy Documentation Sites
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0 # Needed for version timestamps

      - name: Detect prerelease tag (beta/rc)
        id: taginfo
        run: |
          if [[ "${GITHUB_REF}" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(b[0-9]+|rc[0-9]+)$ ]]; then
            echo "is_prerelease=true" >> "$GITHUB_OUTPUT"
            echo "Detected prerelease tag: ${GITHUB_REF}"
          else
            echo "is_prerelease=false" >> "$GITHUB_OUTPUT"
            echo "Stable tag or branch: ${GITHUB_REF}"
          fi

      - uses: actions/setup-node@v6
        with:
          node-version: 24
          cache: 'npm'
          cache-dependency-path: |
            docs/user/package-lock.json
            docs/developer/package-lock.json

      # USER DOCS BUILD
      - name: Install user docs dependencies
        working-directory: docs/user
        run: npm ci

      - name: Create user docs version snapshot on tag
        if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
        working-directory: docs/user
        run: |
          TAG_VERSION=${GITHUB_REF#refs/tags/}
          echo "Creating user documentation version: $TAG_VERSION"
          npm run docusaurus docs:version $TAG_VERSION || echo "Version already exists"
          # Update GitHub links in versioned docs
          if [ -d "versioned_docs/version-$TAG_VERSION" ]; then
            find versioned_docs/version-$TAG_VERSION -name "*.md" -type f -exec sed -i "s|github.com/jpawlowski/hass.tibber_prices/blob/main/|github.com/jpawlowski/hass.tibber_prices/blob/$TAG_VERSION/|g" {} \; || true
          fi

      - name: Cleanup old user docs versions
        if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
        working-directory: docs/user
        run: |
          chmod +x ../cleanup-old-versions.sh
          # Adapt script for single-instance mode (versioned_docs/ instead of user_versioned_docs/)
          sed 's/user_versioned_docs/versioned_docs/g; s/user_versions.json/versions.json/g; s/developer_versioned_docs/versioned_docs/g; s/developer_versions.json/versions.json/g' ../cleanup-old-versions.sh > cleanup-single.sh
          chmod +x cleanup-single.sh
          ./cleanup-single.sh

      - name: Build user docs website
        working-directory: docs/user
        run: npm run build

      # DEVELOPER DOCS BUILD
      - name: Install developer docs dependencies
        working-directory: docs/developer
        run: npm ci

      - name: Create developer docs version snapshot on tag
        if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
        working-directory: docs/developer
        run: |
          TAG_VERSION=${GITHUB_REF#refs/tags/}
          echo "Creating developer documentation version: $TAG_VERSION"
          npm run docusaurus docs:version $TAG_VERSION || echo "Version already exists"
          # Update GitHub links in versioned docs
          if [ -d "versioned_docs/version-$TAG_VERSION" ]; then
            find versioned_docs/version-$TAG_VERSION -name "*.md" -type f -exec sed -i "s|github.com/jpawlowski/hass.tibber_prices/blob/main/|github.com/jpawlowski/hass.tibber_prices/blob/$TAG_VERSION/|g" {} \; || true
          fi

      - name: Cleanup old developer docs versions
        if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
        working-directory: docs/developer
        run: |
          chmod +x ../cleanup-old-versions.sh
          # Adapt script for single-instance mode
          sed 's/user_versioned_docs/versioned_docs/g; s/user_versions.json/versions.json/g; s/developer_versioned_docs/versioned_docs/g; s/developer_versions.json/versions.json/g' ../cleanup-old-versions.sh > cleanup-single.sh
          chmod +x cleanup-single.sh
          ./cleanup-single.sh

      - name: Build developer docs website
        working-directory: docs/developer
        run: npm run build

      # MERGE BUILDS
      - name: Merge both documentation sites
        run: |
          mkdir -p deploy-root/user
          mkdir -p deploy-root/developer
          cp docs/index.html deploy-root/
          cp -r docs/user/build/* deploy-root/user/
          cp -r docs/developer/build/* deploy-root/developer/

      # COMMIT VERSION SNAPSHOTS
      - name: Commit version snapshots back to repository
        if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
        run: |
          TAG_VERSION=${GITHUB_REF#refs/tags/}
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          # Add version files from both docs
          git add docs/user/versioned_docs/ docs/user/versions.json 2>/dev/null || true
          git add docs/developer/versioned_docs/ docs/developer/versions.json 2>/dev/null || true
          # Commit if there are changes
          if git diff --staged --quiet; then
            echo "No version snapshot changes to commit"
          else
            git commit -m "docs: add version snapshot $TAG_VERSION and cleanup old versions [skip ci]"
            git push origin HEAD:main
            echo "Version snapshots committed and pushed to main"
          fi

      # DEPLOY TO GITHUB PAGES
      - name: Setup Pages
        uses: actions/configure-pages@v6

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v4
        with:
          path: ./deploy-root

      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v5

@@ -4,9 +4,15 @@ on:
   push:
     branches:
       - "main"
+    paths-ignore:
+      - 'docs/**'
+      - '.github/workflows/docusaurus.yml'
   pull_request:
     branches:
       - "main"
+    paths-ignore:
+      - 'docs/**'
+      - '.github/workflows/docusaurus.yml'

 permissions: {}

@@ -20,20 +26,20 @@ jobs:
     runs-on: "ubuntu-latest"
     steps:
       - name: Checkout the repository
-        uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
      - name: Set up Python
-        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
+        uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0
        with:
-          python-version: "3.13"
+          python-version: "3.14"
      - name: Install uv
-        uses: astral-sh/setup-uv@5a7eac68fb9809dea845d802897dc5c723910fa3 # v7.1.3
+        uses: astral-sh/setup-uv@37802adc94f370d6bfd71619e3f0bf239e1f3b78 # v7.6.0
        with:
          version: "0.9.3"
      - name: Install requirements
-        run: scripts/bootstrap
+        run: scripts/setup/bootstrap
      - name: Lint check
        run: scripts/lint-check


@ -27,7 +27,7 @@ jobs:
version: ${{ steps.tag.outputs.version }} version: ${{ steps.tag.outputs.version }}
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v6.0.0 uses: actions/checkout@v6
with: with:
fetch-depth: 0 fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }} token: ${{ secrets.GITHUB_TOKEN }}
@ -77,22 +77,36 @@ jobs:
- name: Commit and push manifest.json update - name: Commit and push manifest.json update
if: steps.update.outputs.updated == 'true' if: steps.update.outputs.updated == 'true'
run: | run: |
TAG_VERSION="v${{ steps.tag.outputs.version }}"
git config user.name "github-actions[bot]" git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com" git config user.email "github-actions[bot]@users.noreply.github.com"
git add custom_components/tibber_prices/manifest.json git add custom_components/tibber_prices/manifest.json
git commit -m "chore(release): sync manifest.json with tag v${{ steps.tag.outputs.version }}" git commit -m "chore(release): sync manifest.json with tag ${TAG_VERSION}"
# Push to main branch # Push to main branch
git push origin HEAD:main git push origin HEAD:main
# Delete and recreate tag on new commit
echo "::notice::Recreating tag ${TAG_VERSION} on updated commit"
git tag -d "${TAG_VERSION}"
git push origin :refs/tags/"${TAG_VERSION}"
git tag -a "${TAG_VERSION}" -m "Release ${TAG_VERSION}"
git push origin "${TAG_VERSION}"
# Delete existing release if present (will be recreated by release-notes job)
gh release delete "${TAG_VERSION}" --yes --cleanup-tag=false || echo "No existing release to delete"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
release-notes: release-notes:
name: Generate and publish release notes name: Generate and publish release notes
runs-on: ubuntu-latest runs-on: ubuntu-latest
needs: sync-manifest # Wait for manifest sync to complete needs: sync-manifest # Wait for manifest sync to complete
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v6.0.0 uses: actions/checkout@v6
with: with:
fetch-depth: 0 # Fetch all history for git-cliff fetch-depth: 0 # Fetch all history for git-cliff
ref: main # Use updated main branch if manifest was synced ref: main # Use updated main branch if manifest was synced
@@ -121,10 +135,20 @@ jobs:
FEAT=$(echo "$COMMITS" | grep -cE "^feat(\(.+\))?:" || true)
FIX=$(echo "$COMMITS" | grep -cE "^fix(\(.+\))?:" || true)
-# Parse versions
+parse_version() {
+local version="$1"
+if [[ $version =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)(b[0-9]+)?$ ]]; then
+echo "${BASH_REMATCH[1]} ${BASH_REMATCH[2]} ${BASH_REMATCH[3]} ${BASH_REMATCH[4]}"
+else
+echo "Invalid version format: $version" >&2
+exit 1
+fi
+}
+# Parse versions (support beta/prerelease suffix like 0.25.0b0)
PREV_VERSION="${PREV_TAG#v}"
-IFS='.' read -r PREV_MAJOR PREV_MINOR PREV_PATCH <<< "$PREV_VERSION"
-IFS='.' read -r MAJOR MINOR PATCH <<< "$TAG_VERSION"
+read -r PREV_MAJOR PREV_MINOR PREV_PATCH PREV_PRERELEASE <<< "$(parse_version "$PREV_VERSION")"
+read -r MAJOR MINOR PATCH PRERELEASE <<< "$(parse_version "$TAG_VERSION")"
WARNING=""
SUGGESTION=""
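For reference, the regex used by the new `parse_version` helper behaves like this Python equivalent (a sketch for sanity-checking versions locally; the workflow itself relies on bash `BASH_REMATCH`):

```python
import re

# Same pattern as the workflow's parse_version(): MAJOR.MINOR.PATCH
# with an optional beta suffix like "b0" (e.g. 0.27.0b0).
VERSION_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)(b\d+)?$")


def parse_version(version: str) -> tuple[str, str, str, str]:
    match = VERSION_RE.match(version)
    if match is None:
        raise ValueError(f"Invalid version format: {version}")
    major, minor, patch, prerelease = match.groups()
    # The workflow emits an empty 4th field for stable releases.
    return major, minor, patch, prerelease or ""


print(parse_version("0.27.0"))    # → ('0', '27', '0', '')
print(parse_version("0.27.0b0"))  # → ('0', '27', '0', 'b0')
```

Note the leading `v` must already be stripped (the workflow does this with `${PREV_TAG#v}`), so `v0.27.0` is rejected as invalid.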
@@ -166,9 +190,11 @@ jobs:
echo "**Commits analyzed:** Breaking=$BREAKING, Features=$FEAT, Fixes=$FIX"
echo ""
echo "**To fix:**"
-echo "1. Delete the tag: \`git tag -d v$TAG_VERSION && git push origin :refs/tags/v$TAG_VERSION\`"
-echo "2. Run locally: \`./scripts/suggest-version\`"
-echo "3. Create correct tag: \`./scripts/prepare-release X.Y.Z\`"
+echo "1. Run locally: \`./scripts/release/suggest-version\`"
+echo "2. Create correct tag: \`./scripts/release/prepare <suggested-version>\`"
+echo "3. Push the corrected tag: \`git push origin v<suggested-version>\`"
+echo ""
+echo "**This tag will be automatically deleted in the next step.**"
echo "EOF"
} >> $GITHUB_OUTPUT
else
@@ -176,7 +202,19 @@ jobs:
echo "warning=" >> $GITHUB_OUTPUT
fi
+- name: Delete inappropriate version tag
+if: steps.version_check.outputs.warning != ''
+run: |
+TAG_NAME="${GITHUB_REF#refs/tags/}"
+echo "❌ Deleting tag $TAG_NAME (version not appropriate for changes)"
+echo ""
+echo "${{ steps.version_check.outputs.warning }}"
+echo ""
+git push origin --delete "$TAG_NAME"
+exit 1
- name: Install git-cliff
+if: steps.version_check.outputs.warning == ''
run: |
wget https://github.com/orhun/git-cliff/releases/download/v2.4.0/git-cliff-2.4.0-x86_64-unknown-linux-gnu.tar.gz
tar -xzf git-cliff-2.4.0-x86_64-unknown-linux-gnu.tar.gz
@@ -184,6 +222,7 @@ jobs:
git-cliff --version
- name: Generate release notes
+if: steps.version_check.outputs.warning == ''
id: release_notes
run: |
FROM_TAG="${{ steps.previoustag.outputs.previous_tag }}"
@@ -193,7 +232,7 @@ jobs:
# Use our script with git-cliff backend (AI disabled in CI)
# git-cliff will handle filtering via cliff.toml
-USE_AI=false ./scripts/generate-release-notes "${FROM_TAG}" "${TO_TAG}" > release-notes.md
+USE_AI=false ./scripts/release/generate-notes "${FROM_TAG}" "${TO_TAG}" > release-notes.md
# Extract title from release notes (first line starting with "# ")
TITLE=$(head -n 1 release-notes.md | sed 's/^# //')
@@ -202,15 +241,6 @@ jobs:
fi
echo "title=$TITLE" >> $GITHUB_OUTPUT
-# Append version warning if present
-WARNING="${{ steps.version_check.outputs.warning }}"
-if [ -n "$WARNING" ]; then
-echo "" >> release-notes.md
-echo "---" >> release-notes.md
-echo "" >> release-notes.md
-echo "$WARNING" >> release-notes.md
-fi
# Output for GitHub Actions
{
echo 'notes<<EOF'
@@ -218,25 +248,20 @@ jobs:
echo EOF
} >> $GITHUB_OUTPUT
-- name: Version Check Summary
-if: steps.version_check.outputs.warning != ''
-run: |
-echo "### ⚠️ Version Mismatch Detected" >> $GITHUB_STEP_SUMMARY
-echo "" >> $GITHUB_STEP_SUMMARY
-echo "${{ steps.version_check.outputs.warning }}" >> $GITHUB_STEP_SUMMARY
- name: Create GitHub Release
+if: steps.version_check.outputs.warning == ''
uses: softprops/action-gh-release@v2
with:
name: ${{ steps.release_notes.outputs.title }}
body: ${{ steps.release_notes.outputs.notes }}
draft: false
-prerelease: false
+prerelease: ${{ contains(github.ref, 'b') }}
generate_release_notes: false # We provide our own
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Summary
+if: steps.version_check.outputs.warning == ''
run: |
echo "✅ Release notes generated and published!" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
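The new `prerelease: ${{ contains(github.ref, 'b') }}` expression marks any tag ref containing the letter `b` as a prerelease, which covers beta tags like `v0.27.0b0`. A stricter variant (illustrative Python sketch, not part of the workflow) would anchor the `bN` suffix at the end of the ref:

```python
import re


def is_prerelease(ref: str) -> bool:
    """Flag refs like refs/tags/v0.27.0b0 as prereleases.

    Anchors the bN suffix at end-of-string instead of matching
    any 'b' anywhere in the ref, as the contains() check does.
    """
    return re.search(r"b\d+$", ref) is not None


print(is_prerelease("refs/tags/v0.27.0b0"))  # → True
print(is_prerelease("refs/tags/v0.27.0"))    # → False
```

For this repository's `vX.Y.Z` / `vX.Y.ZbN` tag scheme the simple `contains()` check happens to give the same result.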


@@ -7,9 +7,15 @@ on:
push:
branches:
- main
+paths-ignore:
+- 'docs/**'
+- '.github/workflows/docusaurus.yml'
pull_request:
branches:
- main
+paths-ignore:
+- 'docs/**'
+- '.github/workflows/docusaurus.yml'
permissions: {}
@@ -23,10 +29,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout the repository
-uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
+uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Run hassfest validation
-uses: home-assistant/actions/hassfest@8ca6e134c077479b26138bd33520707e8d94ef59 # master
+uses: home-assistant/actions/hassfest@d56d093b9ab8d2105bc0cb6ee9bcc0ef4ec8b96d # master
hacs: # https://github.com/hacs/action
name: HACS validation
@@ -36,5 +42,3 @@ jobs:
uses: hacs/action@d556e736723344f83838d08488c983a15381059a # 22.5.0
with:
category: integration
-# Remove this 'ignore' key when you have added brand images for your custom integration to https://github.com/home-assistant/brands
-ignore: brands

AGENTS.md

@@ -4,8 +4,8 @@ This is a **Home Assistant custom component** for Tibber electricity price data,
## Documentation Metadata
-- **Last Major Update**: 2025-11-18
+- **Last Major Update**: 2025-01-21
-- **Last Architecture Review**: 2025-11-18 (Completed sensor/core.py refactoring: Calculator Pattern implementation with 8 specialized calculators and 8 attribute modules. Reduced core.py from 2,170 → 1,268 lines (42% reduction). Total 3,047 lines extracted to specialized packages.)
+- **Last Architecture Review**: 2025-01-21 (Phase 1: Added TypedDict documentation system, improved BaseCalculator with 8 helper methods. Phase 2: Documented Import Architecture - Hybrid Pattern (Trend/Volatility build own attributes), verified no circular dependencies, confirmed optimal TYPE_CHECKING usage across all 8 calculators.)
- **Last Code Example Cleanup**: 2025-11-18 (Removed redundant implementation details from AGENTS.md, added guidelines for when to include code examples)
- **Documentation Status**: ✅ Current (verified against codebase)
@@ -17,14 +17,18 @@ _Note: When proposing significant updates to this file, update the metadata abov
When working with the codebase, Copilot MUST actively maintain consistency between this documentation and the actual code:
-**Scope:** "This documentation" and "this file" refer specifically to `AGENTS.md` in the repository root. This does NOT include user-facing documentation like `README.md`, `/docs/user/`, or comments in code. Those serve different purposes and are maintained separately.
+**Scope:** "This documentation" and "this file" refer specifically to `AGENTS.md` in the repository root. This does NOT include user-facing documentation like `README.md`, Docusaurus sites, or comments in code. Those serve different purposes and are maintained separately.
**Documentation Organization:**
- **This file** (`AGENTS.md`): AI/Developer long-term memory, patterns, conventions
-- **`docs/user/`**: End-user guides (installation, configuration, usage examples)
-- **`docs/development/`**: Contributor guides (setup, architecture, release management)
-- **`README.md`**: Project overview with links to detailed documentation
+- **`docs/user/`**: Docusaurus site for end-users (installation, configuration, usage examples)
+  - Markdown files in `docs/user/docs/*.md`
+  - Navigation managed via `docs/user/sidebars.ts`
+- **`docs/developer/`**: Docusaurus site for contributors (architecture, development guides)
+  - Markdown files in `docs/developer/docs/*.md`
+  - Navigation managed via `docs/developer/sidebars.ts`
+- **`README.md`**: Project overview with links to documentation sites
**Automatic Inconsistency Detection:**
@@ -375,15 +379,15 @@ After successful refactoring:
**Core Data Flow:**
-1. `TibberPricesApiClient` (`api.py`) queries Tibber's GraphQL API with `resolution:QUARTER_HOURLY` for user data and prices (yesterday/today/tomorrow - 192 intervals total)
+1. `TibberPricesApiClient` (`api.py`) queries Tibber's GraphQL API with `resolution:QUARTER_HOURLY` for user data and prices (day before yesterday/yesterday/today/tomorrow - 384 intervals total, ensuring trailing 24h averages are accurate for all intervals)
2. `TibberPricesDataUpdateCoordinator` (`coordinator.py`) orchestrates updates every 15 minutes, manages persistent storage via `Store`, and schedules quarter-hour entity refreshes
3. Price enrichment functions (`utils/price.py`, `utils/average.py`) calculate trailing/leading 24h averages, price differences, and rating levels for each 15-minute interval
4. Entity platforms (`sensor/` package, `binary_sensor/` package) expose enriched data as Home Assistant entities
-5. Custom services (`services.py`) provide API endpoints for integrations like ApexCharts
+5. Custom services (`services/` package) provide API endpoints for chart data export, ApexCharts YAML generation, and user data refresh
**Key Patterns:**
-- **Dual translation system**: Standard HA translations in `/translations/` (config flow, UI strings per HA schema), supplemental in `/custom_translations/` (entity descriptions not supported by HA schema). Both must stay in sync. Use `async_load_translations()` and `async_load_standard_translations()` from `const.py`. When to use which: `/translations/` is bound to official HA schema requirements; anything else goes in `/custom_translations/` (requires manual translation loading). **Schema reference**: `/scripts/json_schemas/translation_schema.json` provides the structure for `/translations/*.json` files based on [HA's translation documentation](https://developers.home-assistant.io/docs/internationalization/core).
+- **Dual translation system**: Standard HA translations in `/translations/` (config flow, UI strings per HA schema), supplemental in `/custom_translations/` (entity descriptions not supported by HA schema). Both must stay in sync. Use `async_load_translations()` and `async_load_standard_translations()` from `const.py`. When to use which: `/translations/` is bound to official HA schema requirements; anything else goes in `/custom_translations/` (requires manual translation loading). **Schema reference**: `/schemas/json/translation_schema.json` provides the structure for `/translations/*.json` files based on [HA's translation documentation](https://developers.home-assistant.io/docs/internationalization/core).
- **Select selector translations**: Use `selector.{translation_key}.options.{value}` structure (NOT `selector.select.{translation_key}`). Translation keys map to JSON in `/translations/*.json` following the HA schema structure.
@@ -418,7 +422,7 @@ After successful refactoring:
- **Architecture Benefits**: 42% line reduction in core.py (2,170 → 1,268 lines), clear separation of concerns, improved testability, reusable components
- **See "Common Tasks" section** for detailed patterns and examples
- **Quarter-hour precision**: Entities update on 00/15/30/45-minute boundaries via `schedule_quarter_hour_refresh()` in `coordinator/listeners.py`, not just on data fetch intervals. Uses `async_track_utc_time_change(minute=[0, 15, 30, 45], second=0)` for absolute-time scheduling. Smart boundary tolerance (±2 seconds) in `sensor/helpers.py`'s `round_to_nearest_quarter_hour()` handles HA scheduling jitter: if HA triggers at 14:59:58 → rounds to 15:00:00 (next interval), if HA restarts at 14:59:30 → stays at 14:45:00 (current interval). This ensures current price sensors update without waiting for the next API poll, while preventing premature data display during normal operation.
-- **Currency handling**: Multi-currency support with major/minor units (e.g., EUR/ct, NOK/øre) via `get_currency_info()` and `format_price_unit_*()` in `const.py`.
+- **Currency handling**: Multi-currency support with base/sub units (e.g., EUR/ct, NOK/øre) via `get_currency_info()` and `format_price_unit_*()` in `const.py`.
- **Intelligent caching strategy**: Minimizes API calls while ensuring data freshness:
- User data cached for 24h (rarely changes)
- Price data validated against calendar day - cleared on midnight turnover to force fresh fetch
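The boundary-tolerance behavior described under "Quarter-hour precision" can be pictured in isolation. This is a sketch of the assumed logic (plain `datetime` signature and a 2-second tolerance constant are assumptions; the real `round_to_nearest_quarter_hour()` in `sensor/helpers.py` may differ):

```python
from datetime import datetime, timedelta

BOUNDARY_TOLERANCE_S = 2  # assumed value, per the "±2 seconds" description


def round_to_nearest_quarter_hour(now: datetime) -> datetime:
    """Snap to a 00/15/30/45 boundary, tolerating HA scheduling jitter."""
    # Floor to the current quarter-hour boundary.
    floored = now.replace(minute=(now.minute // 15) * 15, second=0, microsecond=0)
    next_boundary = floored + timedelta(minutes=15)
    # If HA fired just before the boundary (e.g. 14:59:58), treat it
    # as the upcoming interval; otherwise stay on the current one.
    if (next_boundary - now).total_seconds() <= BOUNDARY_TOLERANCE_S:
        return next_boundary
    return floored


print(round_to_nearest_quarter_hour(datetime(2026, 3, 29, 14, 59, 58)))  # → 2026-03-29 15:00:00
print(round_to_nearest_quarter_hour(datetime(2026, 3, 29, 14, 59, 30)))  # → 2026-03-29 14:45:00
```

The asymmetry is deliberate: a trigger 2 seconds early belongs to the next interval, while a restart mid-interval stays on the current one.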
@@ -473,7 +477,13 @@ custom_components/tibber_prices/
│ ├── __init__.py # Package exports
│ ├── average.py # Trailing/leading average utilities
│ └── price.py # Price enrichment, level/rating calculations
-├── services.py # Custom services (get_price, ApexCharts, etc.)
+├── services/ # Custom services package
+│ ├── __init__.py # Service registration
+│ ├── chartdata.py # Chart data export service
+│ ├── apexcharts.py # ApexCharts YAML generator
+│ ├── refresh_user_data.py # User data refresh
+│ ├── formatters.py # Data transformation utilities
+│ └── helpers.py # Common service helpers
├── sensor/ # Sensor platform (package)
│ ├── __init__.py # Platform setup (async_setup_entry)
│ ├── core.py # TibberPricesSensor class (1,268 lines)
@@ -523,6 +533,110 @@ custom_components/tibber_prices/
└── services.yaml # Service definitions
```
## Import Architecture and Dependency Management
**CRITICAL: Import architecture follows strict layering to prevent circular dependencies.**
### Dependency Flow (Calculator Pattern)
**Clean Separation:**
```
sensor/calculators/ → sensor/attributes/ (Volatility only - Hybrid Pattern)
sensor/calculators/ → sensor/helpers/ (DailyStat, RollingHour - Pure functions)
sensor/calculators/ → entity_utils/ (Pure utility functions)
sensor/calculators/ → const.py (Constants only)
sensor/attributes/ ✗ (NO imports from calculators/)
sensor/helpers/ ✗ (NO imports from calculators/)
```
**Why this works:**
- **One-way dependencies**: Calculators can import from attributes/helpers, but NOT vice versa
- **No circular imports**: The reverse direction is empty, i.e. no module in attributes/ or helpers/ imports from calculators/ (verified Jan 2025)
- **Clean testing**: Each layer can be tested independently
### Hybrid Pattern (Trend/Volatility Calculators)
**Background:** During Nov 2025 refactoring, Trend and Volatility calculators retained attribute-building logic to avoid duplicating complex calculations. This creates a **backwards dependency** (calculator → attributes) but is INTENTIONAL.
**Pattern:**
1. **Calculator** computes value AND builds attribute dict
2. **Core** stores attributes in `cached_data` dict
3. **Attributes package** retrieves cached attributes via:
- `_add_cached_trend_attributes()` for trend sensors
- `_add_timing_or_volatility_attributes()` for volatility sensors
**Example (Volatility):**
```python
# sensor/calculators/volatility.py
from custom_components.tibber_prices.sensor.attributes import (
add_volatility_type_attributes, # ← Backwards dependency (calculator → attributes)
get_prices_for_volatility,
)
def get_volatility_value(self, *, volatility_type: str) -> str | None:
# Calculate volatility level
volatility = calculate_volatility_level(prices, ...)
# Build attribute dict (stored for later)
self._last_volatility_attributes = {"volatility": volatility, ...}
add_volatility_type_attributes(self._last_volatility_attributes, ...)
return volatility
def get_volatility_attributes(self) -> dict | None:
return self._last_volatility_attributes # ← Retrieved by attributes package
```
**Trade-offs:**
- ✅ **Pro**: Complex logic stays in ONE place (no duplication)
- ✅ **Pro**: Calculator has full context for attribute decisions
- ❌ **Con**: Violates strict separation (calculator builds attributes)
- ❌ **Con**: Creates backwards dependency (testability impact)
**Decision:** Pattern is **acceptable** for complex calculators (Trend, Volatility) where attribute logic is tightly coupled to calculation. Simple calculators (Interval, DailyStat, etc.) DO NOT follow this pattern.
### TYPE_CHECKING Best Practices
All calculator modules use `TYPE_CHECKING` correctly:
**Pattern:**
```python
# Runtime imports (used in function bodies)
from custom_components.tibber_prices.const import CONF_PRICE_RATING_THRESHOLD_HIGH
from custom_components.tibber_prices.entity_utils import get_price_value
from .base import TibberPricesBaseCalculator
# Type-only imports (only for type hints)
if TYPE_CHECKING:
from collections.abc import Callable
from typing import Any
```
**Rules:**
- ✅ **Runtime imports**: Functions, classes, constants used in code → OUTSIDE TYPE_CHECKING
- ✅ **Type-only imports**: Only used in type hints → INSIDE TYPE_CHECKING
- ✅ **Coordinator import**: Always in base.py, inherited by all calculators
**Verified Status (Jan 2025):**
- All calculator modules (base plus the 8 calculators: interval, rolling_hour, daily_stat, window_24h, volatility, trend, timing, metadata) use TYPE_CHECKING correctly
- No optimization needed - imports are already categorized optimally
### Import Anti-Patterns to Avoid
❌ **DON'T:**
- Import from higher layers (attributes/helpers importing from calculators)
- Use runtime imports for type-only dependencies
- Create circular dependencies between packages
- Import entire modules when only needing one function
✅ **DO:**
- Follow one-way dependency flow (calculators → attributes/helpers)
- Use TYPE_CHECKING for type-only imports
- Import specific items: `from .helpers import aggregate_price_data`
- Document intentional backwards dependencies (Hybrid Pattern)
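The one-way rule can also be checked mechanically. A rough sketch of such a check (a hypothetical helper, not part of the repository; it only inspects absolute imports):

```python
import ast

# Layer that attributes/ and helpers/ must never import from (one-way rule).
FORBIDDEN_PREFIX = "custom_components.tibber_prices.sensor.calculators"


def forbidden_imports(source: str) -> list[str]:
    """Return absolute imports of the calculators layer found in source.

    Note: a real checker would also resolve relative imports; this
    sketch only looks at absolute module paths.
    """
    hits: list[str] = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            if node.module and node.module.startswith(FORBIDDEN_PREFIX):
                hits.append(node.module)
        elif isinstance(node, ast.Import):
            hits.extend(a.name for a in node.names if a.name.startswith(FORBIDDEN_PREFIX))
    return hits


# An attributes module importing from calculators/ would be flagged:
bad = "from custom_components.tibber_prices.sensor.calculators.volatility import x"
print(forbidden_imports(bad))
print(forbidden_imports("from collections.abc import Callable"))  # → []
```

Running something like this over `sensor/attributes/` and `sensor/helpers/` would catch accidental backwards dependencies, while the documented Hybrid Pattern exceptions live in calculators/ and are unaffected.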
## Period Calculation System (Best/Peak Price Periods)
**CRITICAL:** Period calculation uses multi-criteria filtering that can create **mathematical conflicts** at high flexibility values. Understanding these interactions is essential for reliable period detection.
@@ -627,9 +741,9 @@ When debugging period calculation issues:
4. Check relaxation warnings: INFO at 25%, WARNING at 30% indicate suboptimal config
**See:**
-- **Theory documentation**: `docs/development/period-calculation-theory.md` (comprehensive mathematical analysis, conflict conditions, configuration pitfalls)
+- **Theory documentation**: `docs/developer/docs/period-calculation-theory.md` (comprehensive mathematical analysis, conflict conditions, configuration pitfalls)
- **Implementation**: `coordinator/period_handlers/` package (core.py, relaxation.py, level_filtering.py, period_building.py)
-- **User guide**: `docs/user/period-calculation.md` (simplified user-facing explanations)
+- **User guide**: `docs/user/docs/period-calculation.md` (simplified user-facing explanations)
## Development Environment Setup
@@ -659,12 +773,12 @@ When debugging period calculation issues:
- All scripts in `./scripts/` automatically use the correct `.venv`
- No need to manually activate venv or specify Python path
- Examples: `./scripts/lint`, `./scripts/develop`, `./scripts/lint-check`
-- Release management: `./scripts/prepare-release`, `./scripts/generate-release-notes`
+- Release management: `./scripts/release/prepare`, `./scripts/release/generate-notes`
**Release Note Backends (auto-installed in DevContainer):**
- **Rust toolchain**: Minimal Rust installation via DevContainer feature
-- **git-cliff**: Template-based release notes (fast, reliable, installed via cargo in `scripts/setup`)
+- **git-cliff**: Template-based release notes (fast, reliable, installed via cargo in `scripts/setup/setup`)
- Manual grep/awk parsing as fallback (always available)
**When generating shell commands:**
@@ -733,7 +847,7 @@ If you notice commands failing or missing dependencies:
**Local validation:**
```bash
-./scripts/hassfest # Lightweight local integration validation
+./scripts/release/hassfest # Lightweight local integration validation
```
Note: The local `hassfest` script performs basic validation checks (JSON syntax, required files, Python syntax). Full hassfest validation runs in GitHub Actions.
@@ -741,9 +855,14 @@ Note: The local `hassfest` script performs basic validation checks (JSON syntax,
**Testing:**
```bash
-pytest tests/ # Unit tests exist (test_*.py) but no framework enforced
+./scripts/test # Run all tests (pytest with project configuration)
+./scripts/test -v # Verbose output
+./scripts/test -k test_midnight # Run specific test by name
+./scripts/test tests/test_midnight_periods.py # Run specific file
```
+Test framework: pytest with Home Assistant custom component support. Tests live in `/tests/` directory. Use `@pytest.mark.unit` for fast tests, `@pytest.mark.integration` for tests that use real coordinator/time services.
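A minimal test-file sketch illustrating the two markers (hypothetical test names; how markers are selected on the command line depends on the project's pytest configuration):

```python
import pytest


@pytest.mark.unit
def test_quarter_hour_flooring():
    # Fast, pure-function check - no coordinator or time services needed.
    assert (37 // 15) * 15 == 30


@pytest.mark.integration
def test_coordinator_refresh():
    # Would exercise the real coordinator/time services (placeholder here).
    ...
```

With standard pytest, `-m unit` or `-m integration` filters on these marks; custom marks are typically registered in `pyproject.toml` to avoid unknown-mark warnings.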
## Testing Changes
**IMPORTANT: Never start `./scripts/develop` automatically.**
@@ -908,42 +1027,82 @@ Combine into single commit when:
### Conventional Commits Format
+**Reference:** Follow [Conventional Commits v1.0.0](https://www.conventionalcommits.org/en/v1.0.0/) specification.
**Structure:**
```
-<type>(<scope>): <short summary (max 50-72 chars)>
-<detailed description, wrapped at 72 chars>
-Impact: <user-visible effects or context for future release notes>
+<type>[optional scope]: <description>
+[optional body]
+[optional footer(s)]
```
-**Best Practices:**
-- **Subject line**: Max 50 chars (hard limit 72), imperative mood ("Add" not "Added")
-- **Body**: Wrap at 72 chars, explain WHAT and WHY (not HOW - code shows that)
-- **Blank line**: Required between subject and body
-- **Impact section**: Our addition for release note generation (optional but recommended)
-**Types:**
+**Required Elements:**
+- **type**: Lowercase, communicates intent (feat, fix, docs, etc.)
+- **description**: Short summary (max 50-72 chars), imperative mood ("add" not "added"), lowercase start, no period
+**Optional Elements:**
+- **scope**: Parentheses after type, e.g., `feat(sensors):` - lowercase, specific area of change
+- **body**: Detailed explanation, wrap at 72 chars, explain WHAT and WHY (not HOW - code shows that)
+- **footer**: Breaking changes, issue references, or custom fields
+**Breaking Changes:**
+Use `BREAKING CHANGE:` footer or `!` after type/scope:
+```
+feat(api)!: drop support for legacy endpoint
+BREAKING CHANGE: The /v1/prices endpoint has been removed. Use /v2/prices instead.
+```
+**Types (Conventional Commits standard):**
- `feat`: New feature (appears in release notes as "New Features")
- `fix`: Bug fix (appears in release notes as "Bug Fixes")
- `docs`: Documentation only (appears in release notes as "Documentation")
-- `style`: Code style/formatting (no behavior change, omitted from release notes)
- `refactor`: Code restructure without behavior change (may or may not appear in release notes)
-- `chore`: Maintenance tasks (usually omitted from release notes)
+- `perf`: Performance improvement (appears in release notes)
- `test`: Test changes only (omitted from release notes)
-- `style`: Formatting changes (omitted from release notes)
+- `build`: Build system/dependencies (omitted from release notes)
+- `ci`: CI configuration (omitted from release notes)
+- `chore`: Maintenance tasks (usually omitted from release notes)
-**Scope (optional but recommended):**
+**Scope (project-specific, optional but recommended):**
- `translations`: Translation system changes
- `config_flow`: Configuration flow changes
- `sensors`: Sensor implementation
+- `binary_sensors`: Binary sensor implementation
- `api`: API client changes
- `coordinator`: Data coordinator changes
+- `services`: Service implementations
- `docs`: Documentation files
**Custom Footer - Impact Section:**
Add `Impact:` footer for release note generation context (project-specific addition):
```
feat(services): add rolling window support
Implement dynamic 48h window that adapts to data availability.
Impact: Users can create auto-adapting price charts without manual
day selection. Requires config-template-card for ApexCharts mode.
```
**Best Practices:**
- **Subject line**: Max 50 chars (hard limit 72), lowercase, imperative mood
- **Body**: Wrap at 72 chars, optional but useful for complex changes
- **Blank line**: Required between subject and body
- **Impact footer**: Optional but recommended for user-facing changes
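The subject-line rules above can be approximated with a small validator. A hedged sketch using the project's type list (not an official Conventional Commits parser, and deliberately loose about the total-length limit):

```python
import re

# Types recognized in this project's release-notes pipeline (from the list above).
TYPES = "feat|fix|docs|style|refactor|perf|test|build|ci|chore"

SUBJECT_RE = re.compile(
    rf"^({TYPES})"     # type, lowercase
    r"(\([a-z_]+\))?"  # optional scope in parentheses
    r"!?"              # optional breaking-change marker
    r": "              # colon + space separator
    r"[a-z].{0,71}$"   # description: lowercase start, kept short
)


def is_valid_subject(subject: str) -> bool:
    return SUBJECT_RE.match(subject) is not None


print(is_valid_subject("feat(sensors): add rolling window support"))   # → True
print(is_valid_subject("feat(api)!: drop support for legacy endpoint"))  # → True
print(is_valid_subject("Fixed stuff"))                                  # → False
```

Wiring something like this into a `commit-msg` hook would catch malformed subjects before they reach the release-notes pipeline.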
### Technical Commit Message Examples ### Technical Commit Message Examples
**Example 1: Bug Fix** **Example 1: Bug Fix**
@ -1112,10 +1271,10 @@ The "Impact:" section bridges technical commits and future release notes:
1. **Helper Script** (recommended, foolproof) 1. **Helper Script** (recommended, foolproof)
- Script: `./scripts/prepare-release VERSION` - Script: `./scripts/release/prepare VERSION`
- Bumps manifest.json version → commits → creates tag locally - Bumps manifest.json version → commits → creates tag locally
- You review and push when ready - You review and push when ready
- Example: `./scripts/prepare-release 0.3.0` - Example: `./scripts/release/prepare 0.3.0`
2. **Auto-Tag Workflow** (safety net) 2. **Auto-Tag Workflow** (safety net)
@ -1126,7 +1285,7 @@ The "Impact:" section bridges technical commits and future release notes:
3. **Local Script** (testing, preview, and updating releases) 3. **Local Script** (testing, preview, and updating releases)
- Script: `./scripts/generate-release-notes [FROM_TAG] [TO_TAG]` - Script: `./scripts/release/generate-notes [FROM_TAG] [TO_TAG]`
- Parses Conventional Commits between tags - Parses Conventional Commits between tags
- Supports multiple backends (auto-detected): - Supports multiple backends (auto-detected):
- **AI-powered**: GitHub Copilot CLI (best, context-aware) - **AI-powered**: GitHub Copilot CLI (best, context-aware)
@ -1138,7 +1297,7 @@ The "Impact:" section bridges technical commits and future release notes:
```bash ```bash
# Generate and preview notes # Generate and preview notes
./scripts/generate-release-notes v0.2.0 v0.3.0 ./scripts/release/generate-notes v0.2.0 v0.3.0
# If release exists, you'll see: # If release exists, you'll see:
# → Generated release notes # → Generated release notes
@ -1147,7 +1306,7 @@ The "Impact:" section bridges technical commits and future release notes:
# → Answer 'y' to auto-update, 'n' to skip # → Answer 'y' to auto-update, 'n' to skip
# Force specific backend # Force specific backend
RELEASE_NOTES_BACKEND=copilot ./scripts/generate-release-notes v0.2.0 v0.3.0 RELEASE_NOTES_BACKEND=copilot ./scripts/release/generate-notes v0.2.0 v0.3.0
``` ```
4. **GitHub UI Button** (manual, PR-based) 4. **GitHub UI Button** (manual, PR-based)
```bash
# Step 1: Get version suggestion (analyzes commits since last release)
./scripts/release/suggest-version

# Output shows:
# - Commit analysis (features, fixes, breaking changes)
# - Preview and release commands

# Step 2: Preview release notes (with AI if available)
./scripts/release/generate-notes v0.2.0 HEAD

# Step 3: Prepare release (bumps manifest.json + creates tag)
./scripts/release/prepare 0.3.0

# Or without argument to show suggestion first:
./scripts/release/prepare

# Step 4: Review changes
git log -1 --stat
```bash
# Generate AI-powered notes and update existing release
./scripts/release/generate-notes v0.2.0 v0.3.0

# Script will:
# 1. Generate notes (uses AI if available locally)
```bash
# Generate from latest tag to HEAD
./scripts/release/generate-notes

# Generate between specific tags
./scripts/release/generate-notes v1.0.0 v1.1.0

# Force specific backend
RELEASE_NOTES_BACKEND=manual ./scripts/release/generate-notes

# Disable AI (use in CI/CD)
USE_AI=false ./scripts/release/generate-notes
```
**Backend Selection Logic:**
```bash
# git-cliff (fast, reliable, used in CI/CD)
# Auto-installed in DevContainer via scripts/setup/setup
# See: https://git-cliff.org/docs/installation
cargo install git-cliff  # or download binary from releases
```
```bash
# Run local validation (checks JSON syntax, Python syntax, required files)
./scripts/release/hassfest

# Or validate JSON files manually if needed:
python -m json.tool custom_components/tibber_prices/translations/de.json > /dev/null
```
**Why:** The `./scripts/release/hassfest` script validates JSON syntax (translations, manifest), Python syntax, and required files. This catches common errors before pushing to GitHub Actions. For quick JSON-only checks, you can still use `python -m json.tool` directly.
## Linting Best Practices
Always use `dt_util` from `homeassistant.util` instead of Python's `datetime` module for timezone-aware operations. **Critical:** Use `dt_util.as_local()` when comparing API timestamps to local time. Import datetime types only for type hints: `from datetime import date, datetime, timedelta`.
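The pitfall behind this rule can be sketched with only the standard library (`zoneinfo` stands in for what `dt_util.as_local()` does; the Home Assistant import is omitted so the snippet stays self-contained):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# API timestamps arrive as fixed-offset ISO strings. Converting to the local
# zone keeps the same instant but attaches DST-aware tzinfo, so comparisons
# behave correctly across spring-forward days.
api_ts = datetime.fromisoformat("2026-03-29T03:00:00+02:00")  # CEST, just after spring-forward
local_ts = api_ts.astimezone(ZoneInfo("Europe/Berlin"))

print(local_ts.hour)       # → 3
print(local_ts == api_ts)  # → True (same instant, different tzinfo)
```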
**4. Coordinator Data Structure**
Coordinator data follows the structure: `coordinator.data = {"user_data": {...}, "priceInfo": [...], "currency": "EUR"}`. `priceInfo` is a flat list containing all enriched interval dicts (yesterday + today + tomorrow). Currency is stored at top level for easy access. See `coordinator/core.py` for data management.
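A minimal sketch of that layout (the values are illustrative, not real API data):

```python
# Illustrative coordinator.data layout; keys match the structure described above.
coordinator_data = {
    "user_data": {"name": "Example Home"},
    "currency": "EUR",  # stored at top level for easy access
    "priceInfo": [
        # one flat list: yesterday + today + tomorrow, already enriched
        {"startsAt": "2026-03-29T00:00:00+01:00", "total": 0.2431, "level": "NORMAL"},
        {"startsAt": "2026-03-29T00:15:00+01:00", "total": 0.2389, "level": "CHEAP"},
    ],
}

# Consumers index the flat list directly:
first_total = coordinator_data["priceInfo"][0]["total"]
print(first_total)  # → 0.2431
```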
**5. Service Response Pattern**
Services returning data must declare `supports_response=SupportsResponse.ONLY` in registration. See `services.py` for implementation patterns.
**4. Service Response Declaration:**
Services returning data MUST declare the `supports_response` parameter. Use `SupportsResponse.ONLY` for data-only services, `OPTIONAL` for dual-purpose, `NONE` for action-only. See `services.py` for examples.
**5. Entity Lifecycle & State Management:**
All entities MUST implement these patterns for proper HA integration:
- **`available` property**: Indicates if entity can be read/controlled. Return `False` when coordinator has no data yet or last update failed. See `entity.py` for base implementation. Special cases (e.g., `connection` binary_sensor) override to always return `True`.
- **State Restore**: Inherit from `RestoreSensor` (sensors) or `RestoreEntity` (binary_sensors) to restore state after HA restart. Eliminates "unavailable" gaps in history. Restore logic in `async_added_to_hass()` using `async_get_last_state()` and `async_get_last_sensor_data()`. See `sensor/core.py` and `binary_sensor/core.py` for implementation.
- **`force_update` property**: Set to `True` for entities where every state change should be recorded, even if value unchanged (e.g., `connection` sensor tracking connectivity issues). Default is `False`. See `binary_sensor/core.py` for example.
**Why this matters**: Without `available`, entities show stale data during errors. Without state restore, history has gaps after HA restart. Without `force_update`, repeated state changes aren't visible in history.
## Code Quality Rules

**CRITICAL: See "Linting Best Practices" section for comprehensive type checking (Pyright) and linting (Ruff) guidelines.**
# ✅ CORRECT - Integration prefix + semantic purpose
class TibberPricesApiClient:              # Integration + semantic role
class TibberPricesDataUpdateCoordinator:  # Integration + semantic role
class TibberPricesPriceDataManager:       # Integration + semantic role
class TibberPricesSensor:                 # Integration + entity type
class TibberPricesEntity:                 # Integration + entity type

# ❌ INCORRECT - Missing integration prefix
class PriceDataManager:   # Should be: TibberPricesPriceDataManager
class TimeService:        # Should be: TibberPricesTimeService
class PeriodCalculator:   # Should be: TibberPricesPeriodCalculator
**IMPORTANT:** Do NOT include package hierarchy in class names. Python's import system provides the namespace:

```python
# The import path IS the full namespace:
from custom_components.tibber_prices.coordinator.price_data_manager import TibberPricesPriceDataManager
from custom_components.tibber_prices.sensor.calculators.trend import TibberPricesTrendCalculator

# Adding package names to the class would be redundant:
# TibberPricesCoordinatorPriceDataManager       ❌ NO - unnecessarily verbose
# TibberPricesSensorCalculatorsTrendCalculator  ❌ NO - ridiculously long
```
- 🟡 Type aliases and callbacks (e.g., `TimeServiceCallback` is acceptable)
- 🟡 Small NamedTuples used only for internal function returns (e.g., within calculators)
- 🟡 Enums that are clearly namespaced (e.g., `QueryType` in `api.queries`)
- 🟡 **TypedDict classes**: Documentation-only constructs (never instantiated), used solely for IDE autocomplete - shorter names improve readability (e.g., `IntervalPriceAttributes`, `PeriodAttributes`)
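For example, a documentation-only TypedDict might look like this (field names here are illustrative; see the real definitions in the sensor modules):

```python
from typing import TypedDict


class PeriodAttributes(TypedDict, total=False):
    """Documents the attribute dict shape for IDE autocomplete; never instantiated directly."""

    start: str
    end: str
    rating_level: str
    price_mean: float
    price_median: float
    interval_count: int


# Plain dicts still satisfy the shape - the class exists purely for tooling:
attrs: PeriodAttributes = {"start": "2025-11-08T14:00:00+01:00", "interval_count": 4}
print(attrs["interval_count"])  # → 4
```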
**Private Classes (Module-Internal):**
result = _InternalHelper().process()
```
**When to use private classes:**

- ❌ **DON'T** use for code organization alone - if it deserves a class, it's usually public
- ✅ **DO** use for internal implementation details (e.g., state machines, internal builders)
**Example of genuine private class use case:**

```python
# In coordinator/price_data_manager.py
class _ApiRetryStateMachine:
    """Internal state machine for retry logic. Never used outside this file."""

    def __init__(self, max_retries: int) -> None:
        self._attempts = 0
        self._max_retries = max_retries

    # Only used by PriceDataManager methods in this file
```

In practice, most "helper" logic should be **functions**, not classes. Reserve classes for stateful components.
### Ruff Code Style Guidelines
**Ruff config (`pyproject.toml` under `[tool.ruff]`):**
**Legacy/Backwards compatibility:**

- **Do NOT add legacy migration code** unless the change was already released in a version tag
- **Check if released**: Use `./scripts/release/check-if-released <commit-hash>` to verify if code is in any `v*.*.*` tag
- **Example**: If introducing a breaking config change in commit `abc123`, run `./scripts/release/check-if-released abc123`:
  - ✓ NOT RELEASED → No migration needed, just use new code
  - ✗ ALREADY RELEASED → Migration may be needed for users upgrading from that version
- **Rule**: Only add backwards compatibility for changes that shipped to users via HACS/GitHub releases
**Documentation language:**

- **CRITICAL**: All user-facing documentation (`README.md`, `docs/user/docs/`, `docs/developer/docs/`) MUST be written in **English**
- **Code comments**: Always use English for code comments and docstrings
- **UI translations**: Multi-language support exists in `/translations/` and `/custom_translations/` (de, en, nb, nl, sv) for UI strings only
- **Why English-only docs**: Ensures maintainability, accessibility to the global community, and consistency with the Home Assistant ecosystem
**User Documentation Quality:**

When writing or updating user-facing documentation (`docs/user/docs/` or `docs/developer/docs/`), follow these principles learned from real user feedback:

- **Clarity over completeness**: Users want to understand concepts, not read technical specifications
  - ✅ Good: "Relaxation automatically loosens filters until enough periods are found"
# ✅ Annotate function signatures (public functions)
def get_current_interval_price(coordinator: DataUpdateCoordinator) -> float:
    """Get current price from coordinator."""
    return coordinator.data["priceInfo"][0]["total"]


# ✅ Use modern type syntax (Python 3.13)
def process_prices(prices: list[dict[str, Any]]) -> dict[str, float]:
    "rating_level": ...,  # Price rating (LOW, NORMAL, HIGH)

    # 3. Price statistics (how much does it cost?)
    "price_mean": ...,
    "price_median": ...,
    "price_min": ...,
    "price_max": ...,
    "interval_count": ...,

    # 6. Meta information (technical details)
    "pricePeriods": [...],  # Nested structures last
    "priceInfo": [...],

    # 7. Extended descriptions (always last)
    "description": "...",  # Short description from custom_translations (always shown)
    "start": "2025-11-08T14:00:00+01:00",
    "end": "2025-11-08T15:00:00+01:00",
    "rating_level": "LOW",
    "price_mean": 18.5,
    "price_median": 18.3,
    "interval_count": 4,
    "intervals": [...]
}
    "interval_count": 4,
    "rating_level": "LOW",
    "start": "2025-11-08T14:00:00+01:00",
    "price_mean": 18.5,
    "end": "2025-11-08T15:00:00+01:00"
}
```
**Price-Related Attributes:**

- Period statistics: `price_mean` (arithmetic mean), `price_median` (median value)
- Reference comparisons: `period_price_diff_from_daily_min` (period mean vs daily min)
- Interval-specific: `interval_price_diff_from_daily_max` (current interval vs daily max)
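The mean/median distinction matters because the two diverge on skewed price days; with the stdlib (prices are illustrative subunit values):

```python
import statistics

# Illustrative subunit prices for one period (ct/kWh)
interval_prices = [17.5, 18.0, 18.5, 20.0]

price_mean = statistics.mean(interval_prices)      # arithmetic mean, pulled up by the spike
price_median = statistics.median(interval_prices)  # robust to the outlier

print(price_mean)    # → 18.5
print(price_median)  # → 18.25
```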
### Before Adding New Attributes
  - Replaces: `_get_statistics_value()` (calendar day portion)
  - Handles: Min/max/avg for calendar days (today/tomorrow)
  - Returns: Price in currency subunits (cents/øre)
- **`_get_24h_window_value(stat_func)`**
  - Replaces: `_get_average_value()`, `_get_minmax_value()`
  - Handles: Trailing/leading 24h window statistics
  - Returns: Price in currency subunits (cents/øre)

Legacy wrapper methods still exist for backward compatibility but will be removed in a future cleanup phase.
- Translations: `sensor/definitions.py` (translation_key usage)
- Test fixtures: `tests/conftest.py`
- Time handling: Any file importing `dt_util`
## Recorder History Optimization
**CRITICAL: Always exclude non-essential attributes from Recorder to prevent database bloat.**
**Implementation:**
- Use `_unrecorded_attributes = frozenset({...})` as a **class attribute** in entity classes
- See `sensor/core.py` and `binary_sensor/core.py` for current implementation
**What to exclude:**
1. **Descriptions/help text** - `description`, `usage_tips` (static, large)
2. **Large nested structures** - `periods`, `data`, `*_attributes` dicts (>1KB)
3. **Frequently changing diagnostics** - `icon_color`, `cache_age`, status strings
4. **Static/rarely changing config** - `currency`, `resolution`, `*_id` mappings
5. **Temporary/time-bound data** - `next_api_poll`, `last_*` timestamps
6. **Redundant/derived data** - `price_spread`, `diff_%` (calculable from other attrs)
**What to keep:**
- `timestamp` (always), all price values, `cache_age_minutes`, `updates_today`
- Period timing (`start`, `end`, `duration_minutes`), price statistics
- Boolean status flags, `relaxation_active`
**When adding new attributes:**
- Will this be useful in history 1 week from now? No → Exclude
- Can this be calculated from other attributes? Yes → Exclude
- Is this >100 bytes and not essential? Yes → Exclude
**See:** `docs/developer/docs/recorder-optimization.md` for detailed categories and impact analysis
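A minimal sketch of the class-attribute pattern described above (the attribute names are illustrative; the real sets live in `sensor/core.py` and `binary_sensor/core.py`):

```python
# frozenset as a CLASS attribute - HA's Recorder reads it once per entity class
# and drops the listed keys before writing state attributes to the database.
class ExampleSensor:
    _unrecorded_attributes = frozenset({
        "description",   # static help text
        "usage_tips",    # static help text
        "priceInfo",     # large nested structure (>1KB)
        "icon_color",    # frequently changing diagnostic
    })


print("priceInfo" in ExampleSensor._unrecorded_attributes)   # → True (excluded)
print("timestamp" in ExampleSensor._unrecorded_attributes)   # → False (always kept)
```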
- Enrich price data before exposing to entities
- Follow Home Assistant entity naming conventions

See [Coding Guidelines](docs/developer/docs/coding-guidelines.md) for complete details.
## Documentation
Documentation is organized in two Docusaurus sites:

- **User docs** (`docs/user/`): Installation, configuration, usage guides
  - Markdown files in `docs/user/docs/*.md`
  - Navigation via `docs/user/sidebars.ts`
- **Developer docs** (`docs/developer/`): Architecture, patterns, contribution guides
  - Markdown files in `docs/developer/docs/*.md`
  - Navigation via `docs/developer/sidebars.ts`
**When adding new documentation:**
1. Place file in appropriate `docs/*/docs/` directory
2. Add to corresponding `sidebars.ts` for navigation
3. Update translations when changing `translations/en.json` (update ALL language files)
## Reporting Bugs
README.md
# Tibber Prices - Custom Home Assistant Integration
<p align="center">
<img src="images/header.svg" alt="Tibber Prices Custom Integration for Tibber" width="600">
</p>
[![GitHub Release][releases-shield]][releases]
[![GitHub Activity][commits-shield]][commits]
[![hacs][hacsbadge]][hacs]
[![Project Maintenance][maintenance-shield]][user_profile]
<a href="https://www.buymeacoffee.com/jpawlowski" target="_blank"><img src="images/bmc-button.svg" alt="Buy Me A Coffee" height="41" width="174"></a>

> **⚠️ Not affiliated with Tibber**
> This is an independent, community-maintained custom integration for Home Assistant. It is **not** an official Tibber product and is **not** affiliated with or endorsed by Tibber AS.
A custom Home Assistant integration that provides advanced electricity price information and ratings from Tibber. This integration fetches **quarter-hourly** electricity prices, enriches them with statistical analysis, and provides smart indicators to help you optimize your energy consumption and save money.
## 📖 Documentation
**[📚 Complete Documentation](https://jpawlowski.github.io/hass.tibber_prices/)** - Two comprehensive documentation sites:

- **[👤 User Documentation](https://jpawlowski.github.io/hass.tibber_prices/user/)** - Installation, configuration, usage guides, and examples
- **[🔧 Developer Documentation](https://jpawlowski.github.io/hass.tibber_prices/developer/)** - Architecture, contributing guidelines, and development setup
**Quick Links:**
- [Installation Guide](https://jpawlowski.github.io/hass.tibber_prices/user/installation) - Step-by-step setup instructions
- [Sensor Reference](https://jpawlowski.github.io/hass.tibber_prices/user/sensors) - Complete list of all sensors and attributes
- [Chart Examples](https://jpawlowski.github.io/hass.tibber_prices/user/chart-examples) - ApexCharts visualizations
- [Automation Examples](https://jpawlowski.github.io/hass.tibber_prices/user/automation-examples) - Real-world automation scenarios
- [Changelog](https://github.com/jpawlowski/hass.tibber_prices/releases) - Release history and notes
## ✨ Features
- **Quarter-Hourly Price Data**: Access detailed 15-minute interval pricing (384 data points across 4 days: day before yesterday/yesterday/today/tomorrow)
- **Flexible Currency Display**: Choose between base currency (€, kr) or subunit (ct, øre) display - configurable per your preference with smart defaults
- **Multi-Currency Support**: Automatic detection and formatting for EUR, NOK, SEK, DKK, USD, and GBP
- **Price Level Indicators**: Know when you're in a VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, or VERY_EXPENSIVE period
- **Statistical Sensors**: Track lowest, highest, and average prices for the day
- **Price Ratings**: Quarter-hourly ratings comparing current prices to 24-hour trailing averages
- **Smart Indicators**: Binary sensors to detect peak hours and best price hours for automations
- **Beautiful ApexCharts**: Auto-generated chart configurations with dynamic Y-axis scaling ([see examples](https://jpawlowski.github.io/hass.tibber_prices/user/chart-examples))
- **Chart Metadata Sensor**: Dynamic chart configuration for optimal visualization
- **Intelligent Caching**: Minimizes API calls while ensuring data freshness across Home Assistant restarts
- **Custom Actions** (backend services): API endpoints for advanced integrations (ApexCharts support included)
- **Diagnostic Sensors**: Monitor data freshness and availability
- **Reliable API Usage**: Uses only official Tibber [`priceInfo`](https://developer.tibber.com/docs/reference#priceinfo) and [`priceInfoRange`](https://developer.tibber.com/docs/reference#subscription) endpoints - no legacy APIs. Price ratings and statistics are calculated locally for maximum reliability and future-proofing.
- Configure additional sensors in **Settings****Devices & Services****Tibber Price Information & Ratings** → **Entities** - Configure additional sensors in **Settings****Devices & Services****Tibber Price Information & Ratings** → **Entities**
- Use sensors in automations, dashboards, and scripts - Use sensors in automations, dashboards, and scripts
📖 **[Full Installation Guide →](docs/user/installation.md)** 📖 **[Full Installation Guide →](https://jpawlowski.github.io/hass.tibber_prices/user/installation)**
## 📊 Available Entities

@@ -87,6 +102,8 @@ The integration provides **30+ sensors** across different categories. Key sensor
> **Rich Sensor Attributes**: All sensors include extensive attributes with timestamps, context data, and detailed explanations. Enable **Extended Descriptions** in the integration options to add `long_description` and `usage_tips` attributes to every sensor, providing in-context documentation directly in Home Assistant's UI.

**[📋 Complete Sensor Reference](https://jpawlowski.github.io/hass.tibber_prices/user/sensors)** - Full list with descriptions and attributes

### Core Price Sensors (Enabled by Default)

| Entity | Description |
@@ -129,8 +146,8 @@ The integration provides **30+ sensors** across different categories. Key sensor
| Entity | Description |
| ------------------------- | ----------------------------------------------------------------------------------------- |
| Peak Price Period | ON when in a detected peak price period ([how it works](https://jpawlowski.github.io/hass.tibber_prices/user/period-calculation)) |
| Best Price Period | ON when in a detected best price period ([how it works](https://jpawlowski.github.io/hass.tibber_prices/user/period-calculation)) |
| Tibber API Connection | Connection status to Tibber API |
| Tomorrow's Data Available | Whether tomorrow's price data is available |
@@ -148,13 +165,15 @@ The following sensors are available but disabled by default. Enable them in `Set
- **Previous Interval Price** & **Previous Interval Price Level**: Historical data for the last 15-minute interval
- **Previous Interval Price Rating**: Rating for the previous interval
- **Trailing 24h Average Price**: Average of the past 24 hours from now
- **Trailing 24h Minimum/Maximum Price**: Min/max in the past 24 hours
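The trailing 24h sensors are plain aggregations over the quarter-hourly price intervals of the past day. A minimal sketch of the idea (the helper name and data shape are illustrative, not the integration's actual implementation):

```python
from datetime import datetime, timedelta, timezone


def trailing_24h_stats(intervals: list[tuple[datetime, float]], now: datetime) -> dict:
    """Aggregate quarter-hourly (starts_at, price) pairs over the past 24 hours."""
    cutoff = now - timedelta(hours=24)
    window = [price for starts_at, price in intervals if cutoff <= starts_at < now]
    if not window:
        return {}
    return {
        "average": sum(window) / len(window),
        "minimum": min(window),
        "maximum": max(window),
    }


now = datetime(2026, 3, 29, 12, 0, tzinfo=timezone.utc)
# 96 quarter-hourly intervals covering the past 24 hours (dummy prices)
intervals = [(now - timedelta(minutes=15 * i), 20.0 + (i % 4)) for i in range(1, 97)]
stats = trailing_24h_stats(intervals, now)  # average 21.5, min 20.0, max 23.0
```

With 15-minute resolution the window always holds up to 96 samples, so the aggregation stays cheap enough to recompute on every coordinator update.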
> **Note**: Currency display is configurable during setup. Choose between:
> - **Base currency** (€/kWh, kr/kWh) - decimal values, differences visible from 3rd-4th decimal
> - **Subunit** (ct/kWh, øre/kWh) - larger values, differences visible from 1st decimal
>
> Smart defaults: EUR → subunit (German/Dutch preference), NOK/SEK/DKK → base (Scandinavian preference). Supported currencies: EUR, NOK, SEK, DKK, USD, GBP.
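The difference between the two display modes is simply a factor of 100. A quick illustration (the helper name is hypothetical; it assumes the standard 1:100 major/minor unit ratio, which holds for all supported currencies):

```python
# Display-mode conversion: base currency (EUR or kr per kWh) vs. subunit (ct or øre per kWh).
# Assumes a 1:100 major/minor ratio - true for EUR, NOK, SEK, DKK, USD, and GBP.


def to_subunit(price_base: float) -> float:
    """Convert a base-currency price (e.g. 0.2834 EUR/kWh) to subunit (28.34 ct/kWh)."""
    return round(price_base * 100, 2)


print(to_subunit(0.2834))  # 28.34 - differences now visible from the 1st decimal
```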
## Automation Examples

> **Note:** See the [full automation examples guide](https://jpawlowski.github.io/hass.tibber_prices/user/automation-examples) for more advanced recipes.
### Run Appliances During Cheap Hours
@@ -177,7 +196,7 @@ automation:
          entity_id: switch.dishwasher
```

> **Learn more:** The [period calculation guide](https://jpawlowski.github.io/hass.tibber_prices/user/period-calculation) explains how Best/Peak Price periods are identified and how you can configure filters (flexibility, minimum distance from average, price level filters with gap tolerance).
### Notify on Extremely High Prices
@@ -265,8 +284,9 @@ automation:
### Currency or units showing incorrectly

- Currency is automatically detected from your Tibber account
- Display mode (base currency vs. subunit) can be configured in integration options: `Settings > Devices & Services > Tibber Price Information & Ratings > Configure`
- Supported currencies: EUR, NOK, SEK, DKK, USD, and GBP
- Smart defaults apply: EUR users get subunit (ct), Scandinavian users get base currency (kr)
## Advanced Features

@@ -306,34 +326,47 @@ template:
        Price at {{ timestamp }}: {{ price }} ct/kWh
```

📖 **[View all sensors and attributes →](https://jpawlowski.github.io/hass.tibber_prices/user/sensors)**
### Dynamic Icons & Visual Indicators

All sensors feature dynamic icons that change based on price levels, providing instant visual feedback in your dashboards.

<img src="docs/user/static/img/entities-overview.jpg" width="400" alt="Entity list showing dynamic icons for different price states">
_Dynamic icons adapt to price levels, trends, and period states - showing CHEAP prices, FALLING trend, and active Best Price Period_
📖 **[Dynamic Icons Guide →](https://jpawlowski.github.io/hass.tibber_prices/user/dynamic-icons)** | **[Icon Colors Guide →](https://jpawlowski.github.io/hass.tibber_prices/user/icon-colors)**
### Custom Actions
The integration provides custom actions (they still appear as services under the hood) for advanced use cases. These actions show up in Home Assistant under **Developer Tools → Actions**.
- `tibber_prices.get_chartdata` - Get price data in chart-friendly formats for any visualization card
- `tibber_prices.get_apexcharts_yaml` - Generate complete ApexCharts configurations
- `tibber_prices.refresh_user_data` - Manually refresh account information

📖 **[Action documentation and examples →](https://jpawlowski.github.io/hass.tibber_prices/user/actions)**

### Chart Visualizations (Optional)

The integration includes built-in support for creating price visualization cards with automatic Y-axis scaling and color-coded series.

<img src="docs/user/static/img/charts/rolling-window.jpg" width="600" alt="Example: Dynamic 48h rolling window chart">
_Optional: Dynamic 48h chart with automatic Y-axis scaling - generated via `get_apexcharts_yaml` action_
📖 **[Chart examples and setup guide →](https://jpawlowski.github.io/hass.tibber_prices/user/chart-examples)**
## 🤝 Contributing

Contributions are welcome! Please read the [Contributing Guidelines](CONTRIBUTING.md) and [Developer Documentation](https://jpawlowski.github.io/hass.tibber_prices/developer/) before submitting pull requests.

### For Contributors

- **[Developer Setup](https://jpawlowski.github.io/hass.tibber_prices/developer/setup)** - Get started with DevContainer
- **[Architecture Guide](https://jpawlowski.github.io/hass.tibber_prices/developer/architecture)** - Understand the codebase
- **[Release Management](https://jpawlowski.github.io/hass.tibber_prices/developer/release-management)** - Release process and versioning

## 🤖 Development Note
@@ -25,8 +25,49 @@ script:
scene:

energy:

# https://www.home-assistant.io/integrations/logger/
logger:
  default: info
  logs:
    # Main integration logger - applies to ALL sub-loggers by default
    custom_components.tibber_prices: debug

    # Reduce verbosity for details loggers (change to 'debug' for deep debugging)
    # API client details (raw requests/responses - very verbose!)
    custom_components.tibber_prices.api.client.details: info

    # Period calculation details (all set to 'info' by default, change to 'debug' as needed):
    # Relaxation strategy details (flex levels, per-day results)
    custom_components.tibber_prices.coordinator.period_handlers.relaxation.details: info
    # Filter statistics and criteria checks
    custom_components.tibber_prices.coordinator.period_handlers.period_building.details: info
    # Outlier/spike detection details
    custom_components.tibber_prices.coordinator.period_handlers.outlier_filtering.details: info
    # Period overlap resolution details
    custom_components.tibber_prices.coordinator.period_handlers.period_overlap.details: info
    # Outlier flex capping
    custom_components.tibber_prices.coordinator.period_handlers.core.details: info
    # Level filtering details (min_distance scaling)
    custom_components.tibber_prices.coordinator.period_handlers.level_filtering.details: info

    # Interval pool details (cache operations, GC):
    # Cache lookup/miss, gap detection, fetch group additions
    custom_components.tibber_prices.interval_pool.manager.details: info
    # Garbage collection details (eviction, dead interval cleanup)
    custom_components.tibber_prices.interval_pool.garbage_collector.details: info
    # Gap detection and API fetching details
    custom_components.tibber_prices.interval_pool.fetcher.details: info
    # API endpoint routing decisions
    custom_components.tibber_prices.interval_pool.routing.details: info
    # Cache fetch group operations
    custom_components.tibber_prices.interval_pool.cache.details: info
    # Index rebuild operations
    custom_components.tibber_prices.interval_pool.index.details: info
    # Storage save operations
    custom_components.tibber_prices.interval_pool.storage.details: info

    # API helpers details (response validation):
    # Data emptiness checks, structure validation
    custom_components.tibber_prices.api.helpers.details: info
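This layout works because Python loggers are hierarchical: the `custom_components.tibber_prices: debug` entry applies to every `.details` sub-logger until a sub-logger is given its own level. A standalone demonstration of that inheritance:

```python
import logging

# Parent logger set to DEBUG - children inherit this as their effective level
parent = logging.getLogger("custom_components.tibber_prices")
parent.setLevel(logging.DEBUG)

details = logging.getLogger("custom_components.tibber_prices.api.client.details")
assert details.getEffectiveLevel() == logging.DEBUG  # inherited from parent

# Giving the child its own level overrides the inherited one,
# which is exactly what the 'info' entries above do for the noisy loggers
details.setLevel(logging.INFO)
assert not details.isEnabledFor(logging.DEBUG)  # 'details' noise is now filtered
```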
@@ -11,15 +11,19 @@ from typing import TYPE_CHECKING, Any

import voluptuous as vol
from homeassistant.config_entries import ConfigEntry, ConfigEntryState
from homeassistant.const import CONF_ACCESS_TOKEN, Platform
from homeassistant.exceptions import ConfigEntryAuthFailed
from homeassistant.helpers.aiohttp_client import async_get_clientsession
from homeassistant.helpers.storage import Store
from homeassistant.loader import async_get_loaded_integration

from .api import TibberPricesApiClient
from .const import (
    CONF_CURRENCY_DISPLAY_MODE,
    DATA_CHART_CONFIG,
    DATA_CHART_METADATA_CONFIG,
    DISPLAY_MODE_SUBUNIT,
    DOMAIN,
    LOGGER,
    async_load_standard_translations,
@@ -27,6 +31,12 @@ from .const import (
)
from .coordinator import STORAGE_VERSION, TibberPricesDataUpdateCoordinator
from .data import TibberPricesData
from .interval_pool import (
    TibberPricesIntervalPool,
    async_load_pool_state,
    async_remove_pool_storage,
    async_save_pool_state,
)
from .services import async_setup_services

if TYPE_CHECKING:
@@ -37,6 +47,8 @@ if TYPE_CHECKING:

PLATFORMS: list[Platform] = [
    Platform.SENSOR,
    Platform.BINARY_SENSOR,
    Platform.NUMBER,
    Platform.SWITCH,
]

# Configuration schema for configuration.yaml
@@ -49,7 +61,7 @@ CONFIG_SCHEMA = vol.Schema(
                vol.Optional("day"): vol.All(vol.Any(str, list), vol.Coerce(list)),
                vol.Optional("resolution"): str,
                vol.Optional("output_format"): str,
                vol.Optional("subunit_currency"): bool,
                vol.Optional("round_decimals"): vol.All(int, vol.Range(min=0, max=10)),
                vol.Optional("include_level"): bool,
                vol.Optional("include_rating_level"): bool,
@@ -93,9 +105,85 @@ async def async_setup(hass: HomeAssistant, config: dict[str, Any]) -> bool:
        LOGGER.debug("No chart_export configuration found in configuration.yaml")
        hass.data[DOMAIN][DATA_CHART_CONFIG] = {}

    # Extract chart_metadata config if present
    chart_metadata_config = domain_config.get("chart_metadata", {})
    if chart_metadata_config:
        LOGGER.debug("Loaded chart_metadata configuration from configuration.yaml")
        hass.data[DOMAIN][DATA_CHART_METADATA_CONFIG] = chart_metadata_config
    else:
        LOGGER.debug("No chart_metadata configuration found in configuration.yaml")
        hass.data[DOMAIN][DATA_CHART_METADATA_CONFIG] = {}

    return True
async def _migrate_config_options(hass: HomeAssistant, entry: ConfigEntry) -> None:
    """
    Migrate config options for backward compatibility.

    This ensures existing configs get sensible defaults when new options are added.
    Runs automatically on integration startup.
    """
    migration_performed = False
    migrated = dict(entry.options)

    # Migration: Set currency_display_mode to subunit for legacy configs
    # New configs (created after v1.1.0) get currency-appropriate defaults via get_default_options().
    # This migration preserves legacy behavior where all prices were in subunit currency (cents/øre).
    # Only runs for old config entries that don't have this option explicitly set.
    if CONF_CURRENCY_DISPLAY_MODE not in migrated:
        migrated[CONF_CURRENCY_DISPLAY_MODE] = DISPLAY_MODE_SUBUNIT
        migration_performed = True
        LOGGER.info(
            "[%s] Migrated legacy config: Set currency_display_mode=%s (preserves pre-v1.1.0 behavior)",
            entry.title,
            DISPLAY_MODE_SUBUNIT,
        )

    # Save migrated options if any changes were made
    if migration_performed:
        hass.config_entries.async_update_entry(entry, options=migrated)
        LOGGER.debug("[%s] Config migration completed", entry.title)
def _get_access_token(hass: HomeAssistant, entry: ConfigEntry) -> str:
    """
    Get access token from entry or parent entry.

    For parent entries, the token is stored in entry.data.
    For subentries, we need to find the parent entry and get its token.

    Args:
        hass: HomeAssistant instance
        entry: Config entry (parent or subentry)

    Returns:
        Access token string

    Raises:
        ConfigEntryAuthFailed: If no access token found

    """
    # Try to get token from this entry (works for parent)
    if CONF_ACCESS_TOKEN in entry.data:
        return entry.data[CONF_ACCESS_TOKEN]

    # This is a subentry, find parent entry
    # Parent entry is the one without subentries in its data structure
    # and has the same domain
    for potential_parent in hass.config_entries.async_entries(DOMAIN):
        # Parent has ACCESS_TOKEN and is not the current entry
        if potential_parent.entry_id != entry.entry_id and CONF_ACCESS_TOKEN in potential_parent.data:
            # Check if this entry is actually a subentry of this parent
            # (HA Core manages this relationship internally)
            return potential_parent.data[CONF_ACCESS_TOKEN]

    # No token found anywhere
    msg = f"No access token found for entry {entry.entry_id}"
    raise ConfigEntryAuthFailed(msg)
# https://developers.home-assistant.io/docs/config_entries_index/#setting-up-an-entry
async def async_setup_entry(
    hass: HomeAssistant,
@@ -103,6 +191,10 @@ async def async_setup_entry(
) -> bool:
    """Set up this integration using UI."""
    LOGGER.debug(f"[tibber_prices] async_setup_entry called for entry_id={entry.entry_id}")

    # Migrate config options if needed (e.g., set default currency display mode for existing configs)
    await _migrate_config_options(hass, entry)

    # Preload translations to populate the cache
    await async_load_translations(hass, "en")
    await async_load_standard_translations(hass, "en")
@@ -117,10 +209,59 @@ async def async_setup_entry(

    integration = async_get_loaded_integration(hass, entry.domain)
    # Get access token (from this entry if parent, from parent if subentry)
    access_token = _get_access_token(hass, entry)

    # Create API client
    api_client = TibberPricesApiClient(
        access_token=access_token,
        session=async_get_clientsession(hass),
        version=str(integration.version) if integration.version else "unknown",
    )

    # Get home_id from config entry (required for single-home pool architecture)
    home_id = entry.data.get("home_id")
    if not home_id:
        msg = f"[{entry.title}] Config entry missing home_id (required for interval pool)"
        raise ConfigEntryAuthFailed(msg)

    # Create or load interval pool for this config entry (single-home architecture)
    pool_state = await async_load_pool_state(hass, entry.entry_id)
    if pool_state:
        interval_pool = TibberPricesIntervalPool.from_dict(
            pool_state,
            api=api_client,
            hass=hass,
            entry_id=entry.entry_id,
        )
        if interval_pool is None:
            # Old multi-home format or corrupted → create new pool
            LOGGER.info(
                "[%s] Interval pool storage invalid/corrupted, creating new pool (will rebuild from API)",
                entry.title,
            )
            interval_pool = TibberPricesIntervalPool(
                home_id=home_id,
                api=api_client,
                hass=hass,
                entry_id=entry.entry_id,
            )
        else:
            LOGGER.debug("[%s] Interval pool restored from storage (auto-save enabled)", entry.title)
    else:
        interval_pool = TibberPricesIntervalPool(
            home_id=home_id,
            api=api_client,
            hass=hass,
            entry_id=entry.entry_id,
        )
        LOGGER.debug("[%s] Created new interval pool (auto-save enabled)", entry.title)
    coordinator = TibberPricesDataUpdateCoordinator(
        hass=hass,
        config_entry=entry,
        api_client=api_client,
        interval_pool=interval_pool,
    )

    # CRITICAL: Load cache BEFORE first refresh to ensure user_data is available
@@ -129,19 +270,17 @@ async def async_setup_entry(
    await coordinator.load_cache()

    entry.runtime_data = TibberPricesData(
        client=api_client,
        integration=integration,
        coordinator=coordinator,
        interval_pool=interval_pool,
    )

    # https://developers.home-assistant.io/docs/integration_fetching_data#coordinated-single-api-poll-for-data-for-all-entities
    if entry.state == ConfigEntryState.SETUP_IN_PROGRESS:
        await coordinator.async_config_entry_first_refresh()
        # Note: Options update listener is registered in coordinator.__init__
        # (handles cache invalidation + refresh without full reload)
    else:
        await coordinator.async_refresh()
@@ -155,6 +294,15 @@ async def async_unload_entry(
    entry: TibberPricesConfigEntry,
) -> bool:
    """Unload a config entry."""
    # Save interval pool state before unloading
    if entry.runtime_data is not None and entry.runtime_data.interval_pool is not None:
        pool_state = entry.runtime_data.interval_pool.to_dict()
        await async_save_pool_state(hass, entry.entry_id, pool_state)
        LOGGER.debug("[%s] Interval pool state saved on unload", entry.title)

        # Shutdown interval pool (cancels background tasks)
        await entry.runtime_data.interval_pool.async_shutdown()

    unload_ok = await hass.config_entries.async_unload_platforms(entry, PLATFORMS)

    if unload_ok and entry.runtime_data is not None:
@@ -163,8 +311,7 @@ async def async_unload_entry(
    # Unregister services if this was the last config entry
    if not hass.config_entries.async_entries(DOMAIN):
        for service in [
            "get_chartdata",
            "get_apexcharts_yaml",
            "refresh_user_data",
        ]:
@@ -179,10 +326,15 @@ async def async_remove_entry(
    entry: TibberPricesConfigEntry,
) -> None:
    """Handle removal of an entry."""
    # Remove coordinator cache storage
    if storage := Store(hass, STORAGE_VERSION, f"{DOMAIN}.{entry.entry_id}"):
        LOGGER.debug(f"[tibber_prices] async_remove_entry removing cache store for entry_id={entry.entry_id}")
        await storage.async_remove()

    # Remove interval pool storage
    await async_remove_pool_storage(hass, entry.entry_id)
    LOGGER.debug(f"[tibber_prices] async_remove_entry removed interval pool storage for entry_id={entry.entry_id}")


async def async_reload_entry(
    hass: HomeAssistant,
@@ -3,14 +3,18 @@

from __future__ import annotations

import asyncio
import base64
import logging
import re
import socket
from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
from zoneinfo import ZoneInfo

import aiohttp
from homeassistant.util import dt as dt_utils

from .exceptions import (
    TibberPricesApiClientAuthenticationError,
    TibberPricesApiClientCommunicationError,
@@ -19,7 +23,6 @@ from .exceptions import (
)
from .helpers import (
    flatten_price_info,
    prepare_headers,
    verify_graphql_response,
    verify_response_or_raise,
@@ -30,6 +33,7 @@ if TYPE_CHECKING:
    from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService

_LOGGER = logging.getLogger(__name__)
_LOGGER_API_DETAILS = logging.getLogger(__name__ + ".details")


class TibberPricesApiClient:
@@ -46,7 +50,7 @@ class TibberPricesApiClient:
        self._session = session
        self._version = version
        self._request_semaphore = asyncio.Semaphore(2)  # Max 2 concurrent requests
        self.time: TibberPricesTimeService | None = None  # Set externally by coordinator (optional during config flow)
        self._last_request_time = None  # Set on first request
        self._min_request_interval = timedelta(seconds=1)  # Min 1 second between requests
        self._max_retries = 5
@@ -133,190 +137,528 @@ class TibberPricesApiClient:
            query_type=TibberPricesQueryType.USER,
        )
    async def async_get_price_info_for_range(
        self,
        home_id: str,
        user_data: dict[str, Any],
        start_time: datetime,
        end_time: datetime,
    ) -> dict:
        """
        Get price info for a specific time range with automatic routing.

        This is a convenience wrapper around interval_pool.get_price_intervals_for_range().

        Args:
            home_id: Home ID to fetch price data for.
            user_data: User data dict containing home metadata (including timezone).
            start_time: Start of the range (inclusive, timezone-aware).
            end_time: End of the range (exclusive, timezone-aware).

        Returns:
            Dict with "home_id" and "price_info" (list of intervals).

        Raises:
            TibberPricesApiClientError: If arguments invalid or requests fail.

        """
        # Import here to avoid circular dependency (interval_pool imports TibberPricesApiClient)
        from custom_components.tibber_prices.interval_pool import (  # noqa: PLC0415
            get_price_intervals_for_range,
        )

        price_info = await get_price_intervals_for_range(
            api_client=self,
            home_id=home_id,
            user_data=user_data,
            start_time=start_time,
            end_time=end_time,
        )

        return {
            "home_id": home_id,
            "price_info": price_info,
        }
    async def async_get_price_info(self, home_id: str, user_data: dict[str, Any]) -> dict:
        """
        Get price info for a single home.

        Uses timezone-aware cursor calculation based on the home's actual timezone
        from Tibber API (not HA system timezone). This ensures correct "day before yesterday
        midnight" calculation for homes in different timezones.

        Args:
            home_id: Home ID to fetch price data for.
            user_data: User data dict containing home metadata (including timezone).
                REQUIRED - must be fetched before calling this method.

        Returns:
            Dict with "home_id", "price_info", and other home data.

        Raises:
            TibberPricesApiClientError: If TimeService not initialized or user_data missing.

        """
        if not self.time:
            msg = "TimeService not initialized - required for price info processing"
            raise TibberPricesApiClientError(msg)
        if not user_data:
            msg = "User data required for timezone-aware price fetching - fetch user data first"
            raise TibberPricesApiClientError(msg)
        if not home_id:
            msg = "Home ID is required"
            raise TibberPricesApiClientError(msg)

        # Build home_id -> timezone mapping from user_data
        home_timezones = self._extract_home_timezones(user_data)

        # Get timezone for this home (fallback to HA system timezone)
        home_tz = home_timezones.get(home_id)

        # Calculate cursor: day before yesterday midnight in home's timezone
        cursor = self._calculate_cursor_for_home(home_tz)

        # Simple single-home query (no alias needed)
        query = f"""
        {{viewer{{
            home(id: "{home_id}") {{
                id
                currentSubscription {{
                    priceInfoRange(resolution:QUARTER_HOURLY, first:192, after: "{cursor}") {{
                        pageInfo{{ count }}
                        edges{{node{{
                            startsAt total level
                        }}}}
                    }}
                    priceInfo(resolution:QUARTER_HOURLY) {{
                        today{{startsAt total level}}
                        tomorrow{{startsAt total level}}
                    }}
                }}
            }}
        }}}}
        """

        _LOGGER.debug("Fetching price info for home %s", home_id)

        data = await self._api_wrapper(
            data={"query": query},
            query_type=TibberPricesQueryType.PRICE_INFO,
        )

        # Parse response
        viewer = data.get("viewer", {})
        home = viewer.get("home")
        if not home:
            msg = f"Home {home_id} not found in API response"
            _LOGGER.warning(msg)
            return {"home_id": home_id, "price_info": []}

        if "currentSubscription" in home and home["currentSubscription"] is not None:
            price_info = flatten_price_info(home["currentSubscription"])
        else:
            _LOGGER.warning(
                "Home %s has no active subscription - price data will be unavailable",
                home_id,
            )
            price_info = []

        return {
            "home_id": home_id,
            "price_info": price_info,
        }
page_info = home["consumption"].get("pageInfo")
if page_info:
currency = page_info.get("currency")
homes_data[home_id] = flatten_price_info( async def async_get_price_info_range(
home["currentSubscription"], self,
currency, home_id: str,
time=self.time, user_data: dict[str, Any],
start_time: datetime,
end_time: datetime,
) -> dict:
"""
Get historical price info for a specific time range using priceInfoRange endpoint.
Uses the priceInfoRange GraphQL endpoint for flexible historical data queries.
Intended for intervals BEFORE "day before yesterday midnight" (outside PRICE_INFO scope).
Automatically handles API pagination if Tibber limits batch size.
Args:
home_id: Home ID to fetch price data for.
user_data: User data dict containing home metadata (including timezone).
start_time: Start of the range (inclusive, timezone-aware).
end_time: End of the range (exclusive, timezone-aware).
Returns:
Dict with "home_id" and "price_info" (list of intervals).
Raises:
TibberPricesApiClientError: If arguments invalid or request fails.
"""
if not user_data:
msg = "User data required for timezone-aware price fetching - fetch user data first"
raise TibberPricesApiClientError(msg)
if not home_id:
msg = "Home ID is required"
raise TibberPricesApiClientError(msg)
if start_time >= end_time:
msg = f"Invalid time range: start_time ({start_time}) must be before end_time ({end_time})"
raise TibberPricesApiClientError(msg)
_LOGGER_API_DETAILS.debug(
"fetch_price_info_range called with: start_time=%s (type=%s, tzinfo=%s), end_time=%s (type=%s, tzinfo=%s)",
start_time,
type(start_time),
start_time.tzinfo,
end_time,
type(end_time),
end_time.tzinfo,
)
# Calculate cursor and interval count
start_cursor = self._encode_cursor(start_time)
interval_count = self._calculate_interval_count(start_time, end_time)
_LOGGER_API_DETAILS.debug(
"Calculated cursor for range: start_time=%s, cursor_time=%s, encoded=%s",
start_time,
start_time,
start_cursor,
)
# Fetch all intervals with automatic paging
price_info = await self._fetch_price_info_with_paging(
home_id=home_id,
start_cursor=start_cursor,
interval_count=interval_count,
)
return {
"home_id": home_id,
"price_info": price_info,
}
def _calculate_interval_count(self, start_time: datetime, end_time: datetime) -> int:
"""Calculate number of intervals needed based on date range."""
time_diff = end_time - start_time
resolution_change_date = datetime(2025, 10, 1, tzinfo=start_time.tzinfo)
if start_time < resolution_change_date:
# Pre-resolution-change: hourly intervals only
interval_count = int(time_diff.total_seconds() / 3600) # 3600s = 1h
_LOGGER_API_DETAILS.debug(
"Time range is pre-2025-10-01: expecting hourly intervals (count: %d)",
interval_count,
)
else:
# Post-resolution-change: quarter-hourly intervals
interval_count = int(time_diff.total_seconds() / 900) # 900s = 15min
_LOGGER_API_DETAILS.debug(
"Time range is post-2025-10-01: expecting quarter-hourly intervals (count: %d)",
interval_count,
)
return interval_count
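The counting rule above can be sanity-checked in isolation. A minimal standalone sketch (the `interval_count` helper name is illustrative, not part of the integration):

```python
from datetime import datetime, timedelta, timezone

def interval_count(start: datetime, end: datetime) -> int:
    """Mirror of the counting logic: hourly before the 2025-10-01
    resolution change, quarter-hourly afterwards."""
    cutoff = datetime(2025, 10, 1, tzinfo=start.tzinfo)
    seconds_per_interval = 3600 if start < cutoff else 900
    return int((end - start).total_seconds() / seconds_per_interval)

# Two full days of quarter-hourly data: 2 * 24 * 4 = 192 intervals
start = datetime(2025, 11, 1, tzinfo=timezone.utc)
assert interval_count(start, start + timedelta(days=2)) == 192

# One day before the change yields hourly counts: 24 intervals
old_start = datetime(2025, 9, 1, tzinfo=timezone.utc)
assert interval_count(old_start, old_start + timedelta(days=1)) == 24
```

192 is exactly the batch size the PRICE_INFO query requests via `first:192`, i.e. two days of quarter-hourly history.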
async def _fetch_price_info_with_paging(
self,
home_id: str,
start_cursor: str,
interval_count: int,
) -> list[dict[str, Any]]:
"""
Fetch price info with automatic pagination if API limits batch size.
GraphQL Cursor Pagination:
- endCursor points to the last returned element (inclusive)
- Use 'after: endCursor' to get elements AFTER that cursor
- If count < requested, more pages available
Args:
home_id: Home ID to fetch price data for.
start_cursor: Base64-encoded start cursor.
interval_count: Total number of intervals to fetch.
Returns:
List of all price interval dicts across all pages.
"""
price_info = []
remaining_intervals = interval_count
cursor = start_cursor
page = 0
while remaining_intervals > 0:
page += 1
# Fetch one page
page_data = await self._fetch_single_page(
home_id=home_id,
cursor=cursor,
requested_count=remaining_intervals,
page=page,
)
if not page_data:
break
# Extract intervals and pagination info
page_intervals = page_data["intervals"]
returned_count = page_data["count"]
end_cursor = page_data["end_cursor"]
has_next_page = page_data.get("has_next_page", False)
price_info.extend(page_intervals)
_LOGGER_API_DETAILS.debug(
"Page %d: Received %d intervals for home %s (total so far: %d/%d, endCursor=%s, hasNextPage=%s)",
page,
returned_count,
home_id,
len(price_info),
interval_count,
end_cursor,
has_next_page,
)
# Update remaining count
remaining_intervals -= returned_count
# Check if we need more pages
# Continue if: (1) we still need more intervals AND (2) API has more data
if remaining_intervals > 0 and end_cursor:
cursor = end_cursor
_LOGGER_API_DETAILS.debug(
"Still need %d more intervals - fetching next page with cursor %s",
remaining_intervals,
cursor,
            )
        else:
            # Done: Either we have all intervals we need, or API has no more data
            if remaining_intervals > 0:
                _LOGGER.warning(
                    "API has no more data - received %d out of %d requested intervals (missing %d)",
                    len(price_info),
                    interval_count,
                    remaining_intervals,
                )
            else:
                _LOGGER.debug(
                    "Pagination complete - received all %d requested intervals",
                    interval_count,
                )
            break
-           )
-       else:
-           _LOGGER.debug(
-               "Home %s has no active subscription - price data will be unavailable",
-               home_id,
break
_LOGGER_API_DETAILS.debug(
"Fetched %d total historical intervals for home %s across %d page(s)",
len(price_info),
home_id,
page,
)
return price_info
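The paging loop reduces to: request the remainder, accept whatever the server returns, advance the cursor past the last returned element, stop when nothing is left. A toy model of that control flow, with a hypothetical server-side page cap standing in for the API limit:

```python
def fetch_all(total_needed: int, page_size: int) -> list[int]:
    """Toy model of the paging loop: request `remaining` items each
    round, accept what comes back, advance the cursor."""
    results: list[int] = []
    cursor = 0
    remaining = total_needed
    while remaining > 0:
        # Server caps each page at `page_size` items (hypothetical limit).
        batch = list(range(cursor, cursor + min(remaining, page_size)))
        if not batch:
            break
        results.extend(batch)
        remaining -= len(batch)
        cursor = batch[-1] + 1  # endCursor points at the last element (inclusive)
    return results

# 192 intervals against a 100-item page cap takes two pages: 100 + 92
assert fetch_all(192, 100) == list(range(192))
assert fetch_all(5, 10) == [0, 1, 2, 3, 4]
```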
async def _fetch_single_page(
self,
home_id: str,
cursor: str,
requested_count: int,
page: int,
) -> dict[str, Any] | None:
"""
Fetch a single page of price intervals.
Args:
home_id: Home ID to fetch price data for.
cursor: Base64-encoded cursor for this page.
requested_count: Number of intervals to request.
page: Page number (for logging).
Returns:
Dict with "intervals", "count", and "end_cursor" keys, or None if no data.
"""
query = f"""
{{viewer{{
home(id: "{home_id}") {{
id
currentSubscription {{
priceInfoRange(resolution:QUARTER_HOURLY, first:{requested_count}, after: "{cursor}") {{
pageInfo{{
count
hasNextPage
startCursor
endCursor
}}
edges{{
cursor
node{{
startsAt total level
}}
}}
}}
}}
}}
}}}}
"""
_LOGGER_API_DETAILS.debug(
"Fetching historical price info for home %s (page %d): %d intervals from cursor %s",
home_id,
page,
requested_count,
cursor,
)
data = await self._api_wrapper(
data={"query": query},
query_type=TibberPricesQueryType.PRICE_INFO_RANGE,
)
# Parse response
viewer = data.get("viewer", {})
home = viewer.get("home")
if not home:
_LOGGER.warning("Home %s not found in API response", home_id)
return None
if "currentSubscription" not in home or home["currentSubscription"] is None:
_LOGGER.warning("Home %s has no active subscription - price data will be unavailable", home_id)
return None
# Extract priceInfoRange data
subscription = home["currentSubscription"]
price_info_range = subscription.get("priceInfoRange", {})
page_info = price_info_range.get("pageInfo", {})
edges = price_info_range.get("edges", [])
# Flatten edges to interval list
intervals = [edge["node"] for edge in edges if "node" in edge]
return {
"intervals": intervals,
"count": page_info.get("count", len(intervals)),
"end_cursor": page_info.get("endCursor"),
"has_next_page": page_info.get("hasNextPage", False),
}
def _extract_home_timezones(self, user_data: dict[str, Any]) -> dict[str, str]:
"""
Extract home_id -> timezone mapping from user_data.
Args:
user_data: User data dict from async_get_viewer_details() (required).
Returns:
Dict mapping home_id to timezone string (e.g., "Europe/Oslo").
"""
home_timezones = {}
viewer = user_data.get("viewer", {})
homes = viewer.get("homes", [])
for home in homes:
home_id = home.get("id")
timezone = home.get("timeZone")
if home_id and timezone:
home_timezones[home_id] = timezone
_LOGGER_API_DETAILS.debug("Extracted timezone %s for home %s", timezone, home_id)
elif home_id:
_LOGGER.warning("Home %s has no timezone in user data, will use fallback", home_id)
return home_timezones
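The mapping logic above is essentially a filtered dict comprehension over the viewer payload. A standalone sketch against a hand-built sample (payload shape assumed from this file, not the full API schema):

```python
def extract_home_timezones(user_data: dict) -> dict[str, str]:
    """Standalone version of the mapping: keep only homes that
    carry both an id and a timeZone."""
    homes = user_data.get("viewer", {}).get("homes", [])
    return {h["id"]: h["timeZone"] for h in homes if h.get("id") and h.get("timeZone")}

sample = {"viewer": {"homes": [
    {"id": "abc", "timeZone": "Europe/Oslo"},
    {"id": "def"},  # no timezone: falls back to HA system tz later, not mapped here
]}}
assert extract_home_timezones(sample) == {"abc": "Europe/Oslo"}
```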
def _calculate_day_before_yesterday_midnight(self, home_timezone: str | None) -> datetime:
"""
Calculate day before yesterday midnight in home's timezone.
CRITICAL: Uses REAL TIME (dt_utils.now()), NOT TimeService.now().
This ensures API boundary calculations are based on actual current time,
not simulated time from TimeService.
Args:
home_timezone: Timezone string (e.g., "Europe/Oslo").
If None, falls back to HA system timezone.
Returns:
Timezone-aware datetime for day before yesterday midnight.
"""
# Get current REAL time (not TimeService)
now = dt_utils.now()
# Convert to home's timezone or fallback to HA system timezone
if home_timezone:
try:
tz = ZoneInfo(home_timezone)
now_in_home_tz = now.astimezone(tz)
except (KeyError, ValueError, OSError) as error:
_LOGGER.warning(
"Invalid timezone %s (%s), falling back to HA system timezone",
home_timezone,
error,
                )
                now_in_home_tz = dt_utils.as_local(now)
        else:
            # Fallback to HA system timezone
            now_in_home_tz = dt_utils.as_local(now)
        # Calculate day before yesterday midnight
        return (now_in_home_tz - timedelta(days=2)).replace(hour=0, minute=0, second=0, microsecond=0)
-           homes_data[home_id] = {}
-       data["homes"] = homes_data
-       return data
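The boundary rule (convert into the home zone first, then truncate to midnight) can be exercised with stdlib `zoneinfo`; the helper name below is illustrative:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def day_before_yesterday_midnight(now: datetime, tz: ZoneInfo) -> datetime:
    """Replicates the boundary rule: shift into the home zone first,
    then truncate, so the result is a local midnight even around DST."""
    local = now.astimezone(tz)
    return (local - timedelta(days=2)).replace(hour=0, minute=0, second=0, microsecond=0)

oslo = ZoneInfo("Europe/Oslo")
now = datetime(2026, 3, 30, 12, 0, tzinfo=oslo)  # day after EU spring-forward
cursor_dt = day_before_yesterday_midnight(now, oslo)
assert (cursor_dt.hour, cursor_dt.minute) == (0, 0)
assert cursor_dt.date().isoformat() == "2026-03-28"
```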
-   async def async_get_daily_price_rating(self) -> dict:
-       """Get daily price rating data in flat format for all homes."""
-       data = await self._api_wrapper(
-           data={
-               "query": """
-               {viewer{homes{id,currentSubscription{priceRating{
-                   daily{
-                       currency
-                       entries{time total energy tax difference level}
-                   }
-               }}}}}"""
-           },
-           query_type=TibberPricesQueryType.DAILY_RATING,
-       )
-       homes = data.get("viewer", {}).get("homes", [])
-       homes_data = {}
-       for home in homes:
-           home_id = home.get("id")
-           if home_id:
-               if "currentSubscription" in home and home["currentSubscription"] is not None:
-                   homes_data[home_id] = flatten_price_rating(home["currentSubscription"])
-               else:
-                   _LOGGER.debug(
-                       "Home %s has no active subscription - daily rating data will be unavailable",
-                       home_id,
-                   )
-                   homes_data[home_id] = {}
-       data["homes"] = homes_data
-       return data
-
-   async def async_get_hourly_price_rating(self) -> dict:
-       """Get hourly price rating data in flat format for all homes."""
-       data = await self._api_wrapper(
-           data={
-               "query": """
-               {viewer{homes{id,currentSubscription{priceRating{
-                   hourly{
-                       currency
-                       entries{time total energy tax difference level}
-                   }
-               }}}}}"""
-           },
-           query_type=TibberPricesQueryType.HOURLY_RATING,
-       )
-       homes = data.get("viewer", {}).get("homes", [])
-       homes_data = {}
-       for home in homes:
-           home_id = home.get("id")
-           if home_id:
-               if "currentSubscription" in home and home["currentSubscription"] is not None:
-                   homes_data[home_id] = flatten_price_rating(home["currentSubscription"])
-               else:
-                   _LOGGER.debug(
-                       "Home %s has no active subscription - hourly rating data will be unavailable",
-                       home_id,
-                   )
-                   homes_data[home_id] = {}
-       data["homes"] = homes_data
-       return data
-
-   async def async_get_monthly_price_rating(self) -> dict:
-       """Get monthly price rating data in flat format for all homes."""
-       data = await self._api_wrapper(
-           data={
-               "query": """
-               {viewer{homes{id,currentSubscription{priceRating{
-                   monthly{
-                       currency
-                       entries{time total energy tax difference level}
-                   }
-               }}}}}"""
-           },
-           query_type=TibberPricesQueryType.MONTHLY_RATING,
-       )
-       homes = data.get("viewer", {}).get("homes", [])
-       homes_data = {}
-       for home in homes:
-           home_id = home.get("id")
-           if home_id:
-               if "currentSubscription" in home and home["currentSubscription"] is not None:
-                   homes_data[home_id] = flatten_price_rating(home["currentSubscription"])
-               else:
-                   _LOGGER.debug(
-                       "Home %s has no active subscription - monthly rating data will be unavailable",
-                       home_id,
-                   )
-                   homes_data[home_id] = {}
-       data["homes"] = homes_data
-       return data

    def _encode_cursor(self, timestamp: datetime) -> str:
        """
        Encode a timestamp as base64 cursor for GraphQL API.

        Args:
            timestamp: Timezone-aware datetime to encode.

        Returns:
            Base64-encoded ISO timestamp string.
        """
        iso_string = timestamp.isoformat()
        return base64.b64encode(iso_string.encode()).decode()

    def _parse_timestamp(self, timestamp_str: str) -> datetime:
        """
        Parse ISO timestamp string to timezone-aware datetime.

        Args:
            timestamp_str: ISO format timestamp string.

        Returns:
            Timezone-aware datetime object.
        """
        return dt_utils.parse_datetime(timestamp_str) or dt_utils.now()

    def _calculate_cursor_for_home(self, home_timezone: str | None) -> str:
        """
        Calculate cursor (day before yesterday midnight) for a home's timezone.

        Convenience wrapper around _calculate_day_before_yesterday_midnight()
        and _encode_cursor() for backward compatibility with existing code.

        Args:
            home_timezone: Timezone string (e.g., "Europe/Oslo", "America/New_York").
                If None, falls back to HA system timezone.

        Returns:
            Base64-encoded ISO timestamp string for use as GraphQL cursor.
        """
        day_before_yesterday_midnight = self._calculate_day_before_yesterday_midnight(home_timezone)
        return self._encode_cursor(day_before_yesterday_midnight)
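Since the cursor is just a base64-encoded ISO timestamp, it round-trips trivially; `encode_cursor`/`decode_cursor` below are illustrative stand-ins for the methods above:

```python
import base64
from datetime import datetime, timezone

def encode_cursor(ts: datetime) -> str:
    """Base64-encode the ISO representation, as the API client does."""
    return base64.b64encode(ts.isoformat().encode()).decode()

def decode_cursor(cursor: str) -> datetime:
    """Inverse operation, useful for debugging opaque cursors in logs."""
    return datetime.fromisoformat(base64.b64decode(cursor).decode())

ts = datetime(2026, 3, 27, 0, 0, tzinfo=timezone.utc)
cursor = encode_cursor(ts)
assert decode_cursor(cursor) == ts
assert base64.b64decode(cursor) == b"2026-03-27T00:00:00+00:00"
```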
    async def _make_request(
        self,
@@ -325,7 +667,7 @@ class TibberPricesApiClient:
        query_type: TibberPricesQueryType,
    ) -> dict[str, Any]:
        """Make an API request with comprehensive error handling for network issues."""
-       _LOGGER.debug("Making API request with data: %s", data)
        _LOGGER_API_DETAILS.debug("Making API request with data: %s", data)
        try:
            # More granular timeout configuration for better network failure handling
@@ -345,7 +687,7 @@ class TibberPricesApiClient:
            verify_response_or_raise(response)
            response_json = await response.json()
-           _LOGGER.debug("Received API response: %s", response_json)
            _LOGGER_API_DETAILS.debug("Received API response: %s", response_json)
            await verify_graphql_response(response_json, query_type)
@@ -453,7 +795,7 @@ class TibberPricesApiClient:
        time_since_last_request = now - self._last_request_time
        if time_since_last_request < self._min_request_interval:
            sleep_time = (self._min_request_interval - time_since_last_request).total_seconds()
-           _LOGGER.debug(
            _LOGGER_API_DETAILS.debug(
                "Rate limiting: waiting %s seconds before next request",
                sleep_time,
            )
@@ -499,23 +841,18 @@ class TibberPricesApiClient:
        """Handle retry logic for API-specific errors."""
        error_msg = str(error)
-       # Non-retryable: Invalid queries
-       if "Invalid GraphQL query" in error_msg or "Bad request" in error_msg:
        # Non-retryable: Invalid queries, bad requests, empty data
        # Empty data means API has no data for the requested range - retrying won't help
        if "Invalid GraphQL query" in error_msg or "Bad request" in error_msg or "Empty data received" in error_msg:
            return False, 0
-       # Rate limits - special handling with extracted delay
        # Rate limits - only retry if server explicitly says so
        if "Rate limit exceeded" in error_msg or "rate limited" in error_msg.lower():
            delay = self._extract_retry_delay(error, retry)
            return True, delay
-       # Empty data - retryable with capped exponential backoff
-       if "Empty data received" in error_msg:
-           delay = min(self._retry_delay * (2**retry), 60)  # Cap at 60 seconds
-           return True, delay
-       # Other API errors - retryable with capped exponential backoff
-       delay = min(self._retry_delay * (2**retry), 30)  # Cap at 30 seconds
-       return True, delay
        # Other API errors - not retryable (assume permanent issue)
        return False, 0

    def _extract_retry_delay(self, error: Exception, retry: int) -> int:
        """Extract retry delay from rate limit error or use exponential backoff."""
@@ -549,7 +886,24 @@ class TibberPricesApiClient:
        headers: dict | None = None,
        query_type: TibberPricesQueryType = TibberPricesQueryType.USER,
    ) -> Any:
-       """Get information from the API with rate limiting and retry logic."""
        """
        Get information from the API with rate limiting and retry logic.

        Exception Handling Strategy:
        - AuthenticationError: Immediate raise, triggers reauth flow
        - PermissionError: Immediate raise, non-retryable
        - CommunicationError: Retry with exponential backoff
        - ApiClientError (Rate Limit): Retry with Retry-After delay
        - ApiClientError (Other): Retry only if explicitly retryable
        - Network errors (aiohttp.ClientError, socket.gaierror, TimeoutError):
          Converted to CommunicationError and retried

        Retry Logic:
        - Max retries: 5 (configurable via _max_retries)
        - Base delay: 2 seconds (exponential backoff: 2s, 4s, 8s, 16s, 32s)
        - Rate limit delay: Uses Retry-After header or falls back to exponential
        - Caps: 30s for network errors, 120s for rate limits, 300s for Retry-After
        """
        headers = headers or prepare_headers(self._access_token, self._version)
        last_error: Exception | None = None


@@ -3,7 +3,6 @@
from __future__ import annotations

import logging
-from datetime import timedelta
from typing import TYPE_CHECKING

from homeassistant.const import __version__ as ha_version
@@ -11,8 +10,6 @@ from homeassistant.const import __version__ as ha_version
if TYPE_CHECKING:
    import aiohttp

-   from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService

from .queries import TibberPricesQueryType

from .exceptions import (
@@ -22,36 +19,85 @@ from .exceptions import (
)

_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")

HTTP_BAD_REQUEST = 400
HTTP_UNAUTHORIZED = 401
HTTP_FORBIDDEN = 403
HTTP_TOO_MANY_REQUESTS = 429
HTTP_INTERNAL_SERVER_ERROR = 500
HTTP_BAD_GATEWAY = 502
HTTP_SERVICE_UNAVAILABLE = 503
HTTP_GATEWAY_TIMEOUT = 504
def verify_response_or_raise(response: aiohttp.ClientResponse) -> None:
-   """Verify that the response is valid."""
    """
    Verify HTTP response and map to appropriate exceptions.

    Error Mapping:
    - 401 Unauthorized → AuthenticationError (non-retryable)
    - 403 Forbidden → PermissionError (non-retryable)
    - 429 Rate Limit → ApiClientError with retry support
    - 400 Bad Request → ApiClientError (non-retryable, invalid query)
    - 5xx Server Errors → CommunicationError (retryable)
    - Other errors → Let aiohttp.raise_for_status() handle
    """
    # Authentication failures - non-retryable
    if response.status == HTTP_UNAUTHORIZED:
        _LOGGER.error("Tibber API authentication failed - check access token")
        raise TibberPricesApiClientAuthenticationError(TibberPricesApiClientAuthenticationError.INVALID_CREDENTIALS)

    # Permission denied - non-retryable
    if response.status == HTTP_FORBIDDEN:
        _LOGGER.error("Tibber API access forbidden - insufficient permissions")
        raise TibberPricesApiClientPermissionError(TibberPricesApiClientPermissionError.INSUFFICIENT_PERMISSIONS)

    # Rate limiting - retryable with explicit delay
    if response.status == HTTP_TOO_MANY_REQUESTS:
        # Check for Retry-After header that Tibber might send
        retry_after = response.headers.get("Retry-After", "unknown")
        _LOGGER.warning("Tibber API rate limit exceeded - retry after %s seconds", retry_after)
        raise TibberPricesApiClientError(TibberPricesApiClientError.RATE_LIMIT_ERROR.format(retry_after=retry_after))

    # Bad request - non-retryable (invalid query)
    if response.status == HTTP_BAD_REQUEST:
        _LOGGER.error("Tibber API rejected request - likely invalid GraphQL query")
        raise TibberPricesApiClientError(
            TibberPricesApiClientError.INVALID_QUERY_ERROR.format(message="Bad request - likely invalid GraphQL query")
        )

    # Server errors 5xx - retryable (temporary server issues)
    if response.status in (
        HTTP_INTERNAL_SERVER_ERROR,
        HTTP_BAD_GATEWAY,
        HTTP_SERVICE_UNAVAILABLE,
        HTTP_GATEWAY_TIMEOUT,
    ):
        _LOGGER.warning(
            "Tibber API server error %d - temporary issue, will retry",
            response.status,
        )
        # Let this be caught as aiohttp.ClientResponseError in _api_wrapper
        # where it's converted to CommunicationError with retry logic
        response.raise_for_status()

    # All other HTTP errors - let aiohttp handle
    response.raise_for_status()


async def verify_graphql_response(response_json: dict, query_type: TibberPricesQueryType) -> None:
-   """Verify the GraphQL response for errors and data completeness, including empty data."""
    """
    Verify GraphQL response and map error codes to appropriate exceptions.

    GraphQL Error Code Mapping:
    - UNAUTHENTICATED → AuthenticationError (triggers reauth flow)
    - FORBIDDEN → PermissionError (non-retryable)
    - RATE_LIMITED/TOO_MANY_REQUESTS → ApiClientError (retryable)
    - VALIDATION_ERROR/GRAPHQL_VALIDATION_FAILED → ApiClientError (non-retryable)
    - Other codes → Generic ApiClientError (with code in message)
    - Empty data → ApiClientError (non-retryable, API has no data)
    """
    if "errors" in response_json:
        errors = response_json["errors"]
        if not errors:
@@ -98,14 +144,137 @@ async def verify_graphql_response(response_json: dict, query_type: TibberPricesQ
            TibberPricesApiClientError.GRAPHQL_ERROR.format(message="Response missing data object")
        )

-   # Empty data check (for retry logic) - always check, regardless of query_type
    # Empty data check - validate response completeness
    # This is NOT a retryable error - API simply has no data for the requested range
    if is_data_empty(response_json["data"], query_type.value):
-       _LOGGER.debug("Empty data detected for query_type: %s", query_type)
        _LOGGER_DETAILS.debug("Empty data detected for query_type: %s - API has no data available", query_type)
        raise TibberPricesApiClientError(
            TibberPricesApiClientError.EMPTY_DATA_ERROR.format(query_type=query_type.value)
        )
def _check_user_data_empty(data: dict) -> bool:
"""Check if user data is empty or incomplete."""
has_user_id = (
"viewer" in data
and isinstance(data["viewer"], dict)
and "userId" in data["viewer"]
and data["viewer"]["userId"] is not None
)
has_homes = (
"viewer" in data
and isinstance(data["viewer"], dict)
and "homes" in data["viewer"]
and isinstance(data["viewer"]["homes"], list)
and len(data["viewer"]["homes"]) > 0
)
is_empty = not has_user_id or not has_homes
_LOGGER_DETAILS.debug(
"Viewer check - has_user_id: %s, has_homes: %s, is_empty: %s",
has_user_id,
has_homes,
is_empty,
)
return is_empty
def _check_price_info_empty(data: dict) -> bool:
"""
Check if price_info data is empty or incomplete.
Note: Missing currentSubscription is VALID (home without active contract).
Only check for structural issues, not missing data that legitimately might not exist.
"""
viewer = data.get("viewer", {})
home_data = viewer.get("home")
if not home_data:
_LOGGER_DETAILS.debug("No home data found in price_info response")
return True
_LOGGER_DETAILS.debug("Checking price_info for single home")
# Missing currentSubscription is VALID - home has no active contract
# This is not an "empty data" error, it's a legitimate state
if "currentSubscription" not in home_data or home_data["currentSubscription"] is None:
_LOGGER_DETAILS.debug("No currentSubscription - home has no active contract (valid state)")
return False # NOT empty - this is expected for homes without subscription
subscription = home_data["currentSubscription"]
# Check priceInfoRange (yesterday data - optional, may not be available)
has_yesterday = (
"priceInfoRange" in subscription
and subscription["priceInfoRange"] is not None
and "edges" in subscription["priceInfoRange"]
and subscription["priceInfoRange"]["edges"]
)
# Check priceInfo for today's data (required if subscription exists)
has_price_info = "priceInfo" in subscription and subscription["priceInfo"] is not None
has_today = (
has_price_info
and "today" in subscription["priceInfo"]
and subscription["priceInfo"]["today"] is not None
and len(subscription["priceInfo"]["today"]) > 0
)
# Only require today's data - yesterday is optional
# If subscription exists but no today data, that's a structural problem
is_empty = not has_today
_LOGGER_DETAILS.debug(
"Price info check - priceInfoRange: %s, today: %s, is_empty: %s",
bool(has_yesterday),
bool(has_today),
is_empty,
)
return is_empty
def _check_price_info_range_empty(data: dict) -> bool:
"""
Check if price_info_range data is empty or incomplete.
For historical range queries, empty edges array is VALID (no data available
for that time range, e.g., too old). Only structural problems are errors.
"""
viewer = data.get("viewer", {})
home_data = viewer.get("home")
if not home_data:
_LOGGER_DETAILS.debug("No home data found in price_info_range response")
return True
subscription = home_data.get("currentSubscription")
if not subscription:
_LOGGER_DETAILS.debug("Missing currentSubscription in home")
return True
# For price_info_range, check if the structure exists
# Empty edges array is VALID (no data for that time range)
price_info_range = subscription.get("priceInfoRange")
if price_info_range is None:
_LOGGER_DETAILS.debug("Missing priceInfoRange in subscription")
return True
if "edges" not in price_info_range:
_LOGGER_DETAILS.debug("Missing edges key in priceInfoRange")
return True
edges = price_info_range["edges"]
if not isinstance(edges, list):
_LOGGER_DETAILS.debug("priceInfoRange edges is not a list")
return True
# Empty edges is VALID for historical queries (data not available)
_LOGGER_DETAILS.debug(
"Price info range check - structure valid, edge_count: %s (empty is OK for old data)",
len(edges),
)
return False # Structure is valid, even if edges is empty
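The structural check reduces to a few guards; this condensed sketch (function name hypothetical) shows why an empty edges list passes while a missing key fails:

```python
def range_result_is_empty(data: dict) -> bool:
    """Condensed version of the structural check: missing pieces are
    errors, but an empty edges list is a valid 'no data' answer."""
    sub = (data.get("viewer", {}).get("home") or {}).get("currentSubscription")
    if not sub:
        return True
    rng = sub.get("priceInfoRange")
    if rng is None or not isinstance(rng.get("edges"), list):
        return True
    return False  # an empty edges list still counts as a valid response

valid_but_empty = {"viewer": {"home": {"currentSubscription": {"priceInfoRange": {"edges": []}}}}}
assert range_result_is_empty(valid_but_empty) is False
assert range_result_is_empty({"viewer": {}}) is True
```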
def is_data_empty(data: dict, query_type: str) -> bool: def is_data_empty(data: dict, query_type: str) -> bool:
""" """
Check if the response data is empty or incomplete. Check if the response data is empty or incomplete.
@ -121,126 +290,27 @@ def is_data_empty(data: dict, query_type: str) -> bool:
- Must have today data - Must have today data
- tomorrow can be empty if we have valid historical and today data - tomorrow can be empty if we have valid historical and today data
For rating data: For price info range:
- Must have thresholdPercentages - Must have priceInfoRange with edges
- Must have non-empty entries for the specific rating type Used by interval pool for historical data fetching
""" """
_LOGGER.debug("Checking if data is empty for query_type %s", query_type) _LOGGER_DETAILS.debug("Checking if data is empty for query_type %s", query_type)
is_empty = False
try: try:
if query_type == "user": if query_type == "user":
has_user_id = ( return _check_user_data_empty(data)
"viewer" in data if query_type == "price_info":
and isinstance(data["viewer"], dict) return _check_price_info_empty(data)
and "userId" in data["viewer"] if query_type == "price_info_range":
and data["viewer"]["userId"] is not None return _check_price_info_range_empty(data)
)
has_homes = (
"viewer" in data
and isinstance(data["viewer"], dict)
and "homes" in data["viewer"]
and isinstance(data["viewer"]["homes"], list)
and len(data["viewer"]["homes"]) > 0
)
is_empty = not has_user_id or not has_homes
_LOGGER.debug(
"Viewer check - has_user_id: %s, has_homes: %s, is_empty: %s",
has_user_id,
has_homes,
is_empty,
)
elif query_type == "price_info": # Unknown query type
# Check for home aliases (home0, home1, etc.) _LOGGER_DETAILS.debug("Unknown query type %s, treating as non-empty", query_type)
viewer = data.get("viewer", {})
home_aliases = [key for key in viewer if key.startswith("home") and key[4:].isdigit()]
if not home_aliases:
_LOGGER.debug("No home aliases found in price_info response")
is_empty = True
else:
# Check first home for valid data
_LOGGER.debug("Checking price_info with %d home(s)", len(home_aliases))
first_home = viewer.get(home_aliases[0])
if (
not first_home
or "currentSubscription" not in first_home
or first_home["currentSubscription"] is None
):
_LOGGER.debug("Missing currentSubscription in first home")
is_empty = True
else:
subscription = first_home["currentSubscription"]
# Check priceInfoRange (192 quarter-hourly intervals)
has_historical = (
"priceInfoRange" in subscription
and subscription["priceInfoRange"] is not None
and "edges" in subscription["priceInfoRange"]
and subscription["priceInfoRange"]["edges"]
)
# Check priceInfo for today's data
has_price_info = "priceInfo" in subscription and subscription["priceInfo"] is not None
has_today = (
has_price_info
and "today" in subscription["priceInfo"]
and subscription["priceInfo"]["today"] is not None
and len(subscription["priceInfo"]["today"]) > 0
)
# Data is empty if we don't have historical data or today's data
is_empty = not has_historical or not has_today
_LOGGER.debug(
"Price info check - priceInfoRange: %s, today: %s, is_empty: %s",
bool(has_historical),
bool(has_today),
is_empty,
)
elif query_type in ["daily", "hourly", "monthly"]:
# Check for homes existence and non-emptiness before accessing
if (
"viewer" not in data
or "homes" not in data["viewer"]
or not isinstance(data["viewer"]["homes"], list)
or len(data["viewer"]["homes"]) == 0
or "currentSubscription" not in data["viewer"]["homes"][0]
or data["viewer"]["homes"][0]["currentSubscription"] is None
or "priceRating" not in data["viewer"]["homes"][0]["currentSubscription"]
):
_LOGGER.debug("Missing homes/currentSubscription/priceRating in rating check")
is_empty = True
else:
rating = data["viewer"]["homes"][0]["currentSubscription"]["priceRating"]
# Check rating entries
has_entries = (
query_type in rating
and rating[query_type] is not None
and "entries" in rating[query_type]
and rating[query_type]["entries"] is not None
and len(rating[query_type]["entries"]) > 0
)
is_empty = not has_entries
_LOGGER.debug(
"%s rating check - entries count: %d, is_empty: %s",
query_type,
len(rating[query_type]["entries"]) if has_entries else 0,
is_empty,
)
else:
_LOGGER.debug("Unknown query type %s, treating as non-empty", query_type)
is_empty = False
except (KeyError, IndexError, TypeError) as error:
-_LOGGER.debug("Error checking data emptiness: %s", error)
-is_empty = True
-else:
-return is_empty
+_LOGGER_DETAILS.debug("Error checking data emptiness: %s", error)
+return True
+return False
def prepare_headers(access_token: str, version: str) -> dict[str, str]:
@@ -252,23 +322,30 @@ def prepare_headers(access_token: str, version: str) -> dict[str, str]:
}
-def flatten_price_info(subscription: dict, currency: str | None = None, *, time: TibberPricesTimeService) -> dict:
+def flatten_price_info(subscription: dict) -> list[dict]:
"""
Transform and flatten priceInfo from full API data structure.
-Now handles priceInfoRange (192 quarter-hourly intervals) separately from
-priceInfo (today and tomorrow data). Currency is stored as a separate attribute.
+Returns a flat list of all price intervals ordered as:
+[day_before_yesterday_prices, yesterday_prices, today_prices, tomorrow_prices]
+priceInfoRange fetches 192 quarter-hourly intervals starting from the day before
+yesterday midnight (2 days of historical data), which provides sufficient data
+for calculating trailing 24h averages for all intervals including yesterday.
+Args:
+subscription: The currentSubscription dictionary from API response.
+Returns:
+A flat list containing all price dictionaries (startsAt, total, level).
"""
-price_info = subscription.get("priceInfo", {})
-price_info_range = subscription.get("priceInfoRange", {})
-# Get today and yesterday dates using TimeService
-today_local = time.now().date()
-yesterday_local = today_local - timedelta(days=1)
-_LOGGER.debug("Processing data for yesterday's date: %s", yesterday_local)
-# Transform priceInfoRange edges data (extract yesterday's quarter-hourly prices)
-yesterday_prices = []
+# Use 'or {}' to handle None values (API may return None during maintenance)
+price_info_range = subscription.get("priceInfoRange") or {}
+# Transform priceInfoRange edges data (extract historical quarter-hourly prices)
+# This contains 192 intervals (2 days) starting from day before yesterday midnight
+historical_prices = []
if "edges" in price_info_range:
edges = price_info_range["edges"]
@@ -276,47 +353,9 @@ def flatten_price_info(subscription: dict, currency: str | None = None, *, time:
if "node" not in edge:
_LOGGER.debug("Skipping edge without node: %s", edge)
continue
+historical_prices.append(edge["node"])
-price_data = edge["node"]
-# Parse timestamp using TimeService for proper timezone handling
-starts_at = time.get_interval_time(price_data)
-if starts_at is None:
-_LOGGER.debug("Could not parse timestamp: %s", price_data["startsAt"])
-continue
-price_date = starts_at.date()
-# Only include prices from yesterday
-if price_date == yesterday_local:
-yesterday_prices.append(price_data)
-_LOGGER.debug("Found %d price entries for yesterday", len(yesterday_prices))
-return {
-"yesterday": yesterday_prices,
-"today": price_info.get("today", []),
-"tomorrow": price_info.get("tomorrow", []),
-"currency": currency,
-}
+# Return all intervals as a single flattened array
+# Use 'or {}' to handle None values (API may return None during maintenance)
+price_info = subscription.get("priceInfo") or {}
+return historical_prices + (price_info.get("today") or []) + (price_info.get("tomorrow") or [])
-def flatten_price_rating(subscription: dict) -> dict:
-"""Extract and flatten priceRating from subscription, including currency."""
-price_rating = subscription.get("priceRating", {})
-def extract_entries_and_currency(rating: dict) -> tuple[list, str | None]:
-if rating is None:
-return [], None
-return rating.get("entries", []), rating.get("currency")
-hourly_entries, hourly_currency = extract_entries_and_currency(price_rating.get("hourly"))
-daily_entries, daily_currency = extract_entries_and_currency(price_rating.get("daily"))
-monthly_entries, monthly_currency = extract_entries_and_currency(price_rating.get("monthly"))
-# Prefer hourly, then daily, then monthly for top-level currency
-currency = hourly_currency or daily_currency or monthly_currency
-return {
-"hourly": hourly_entries,
-"daily": daily_entries,
-"monthly": monthly_entries,
-"currency": currency,
-}
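For reference, the new flattened return shape can be sketched standalone. The stub subscription dict below is hand-made (field names follow the GraphQL shapes in this diff; values are illustrative, not real API output):

```python
# Standalone sketch of the new flatten_price_info() behavior: historical
# intervals first, then today, then tomorrow. The stub dict is illustrative.

def flatten_price_info(subscription: dict) -> list[dict]:
    """Historical intervals first, then today, then tomorrow."""
    price_info_range = subscription.get("priceInfoRange") or {}
    historical_prices = [
        edge["node"] for edge in (price_info_range.get("edges") or []) if "node" in edge
    ]
    # 'or []' also covers None values the API may return during maintenance
    price_info = subscription.get("priceInfo") or {}
    return historical_prices + (price_info.get("today") or []) + (price_info.get("tomorrow") or [])


stub = {
    "priceInfoRange": {"edges": [{"node": {"startsAt": "2026-03-27T00:00:00+01:00", "total": 0.21}}]},
    "priceInfo": {
        "today": [{"startsAt": "2026-03-29T00:00:00+01:00", "total": 0.25}],
        "tomorrow": None,  # e.g. not yet published
    },
}
print([p["total"] for p in flatten_price_info(stub)])  # [0.21, 0.25]
```

Note that a completely empty subscription (`{}`) degrades to an empty list rather than raising, which is what the `or {}` / `or []` guards are for.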


@@ -6,10 +6,43 @@ from enum import Enum
class TibberPricesQueryType(Enum):
-"""Types of queries that can be made to the API."""
+"""
+Types of queries that can be made to the API.
+CRITICAL: Query type selection is dictated by Tibber's API design and caching strategy.
+PRICE_INFO:
+- Used for current day-relative data (day before yesterday/yesterday/today/tomorrow)
+- API automatically determines "today" and "tomorrow" based on current time
+- MUST be used when querying any data from these 4 days, even if you only need
+specific intervals, because Tibber's API requires this endpoint for current data
+- Provides the core dataset needed for live data, recent historical context
+(important until tomorrow's data arrives), and tomorrow's forecast
+- Tibber likely has optimized caching for this frequently-accessed data range
+- Boundary: FROM "day before yesterday midnight" (real time) onwards
+PRICE_INFO_RANGE:
+- Used for historical data older than day before yesterday
+- Allows flexible date range queries with cursor-based pagination
+- Required for any intervals beyond the 4-day window of PRICE_INFO
+- Use this for historical analysis, comparisons, or trend calculations
+- Boundary: BEFORE "day before yesterday midnight" (real time)
+ROUTING:
+- Use async_get_price_info_for_range() wrapper for automatic routing
+- Wrapper intelligently splits requests spanning the boundary:
+* Fully historical range (end < boundary) → PRICE_INFO_RANGE only
+* Fully recent range (start >= boundary) → PRICE_INFO only
+* Spanning range → Both queries, merged results
+- Boundary calculated using REAL TIME (dt_utils.now()), not TimeService
+to ensure predictable API responses
+USER:
+- Fetches user account data and home metadata
+- Separate from price data queries
+"""
PRICE_INFO = "price_info"
-DAILY_RATING = "daily"
-HOURLY_RATING = "hourly"
-MONTHLY_RATING = "monthly"
+PRICE_INFO_RANGE = "price_info_range"
USER = "user"
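The routing rule described in the docstring can be sketched as a pure function. `route_query` and `boundary` are hypothetical names for illustration; the real wrapper is `async_get_price_info_for_range()`, and the boundary is "day before yesterday midnight" computed from real time:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the boundary routing described in the docstring.
# Names here are illustrative; the real logic lives in
# async_get_price_info_for_range().

def route_query(start: datetime, end: datetime, boundary: datetime) -> list[str]:
    """Return the query type(s) needed for the requested [start, end] range."""
    if end < boundary:
        return ["PRICE_INFO_RANGE"]  # fully historical
    if start >= boundary:
        return ["PRICE_INFO"]  # fully recent
    return ["PRICE_INFO_RANGE", "PRICE_INFO"]  # spans the boundary: query both, merge


boundary = datetime(2026, 3, 27, tzinfo=timezone.utc)  # "day before yesterday midnight"
print(route_query(boundary - timedelta(days=7), boundary - timedelta(days=3), boundary))
# ['PRICE_INFO_RANGE']
print(route_query(boundary, boundary + timedelta(days=1), boundary))
# ['PRICE_INFO']
print(route_query(boundary - timedelta(days=3), boundary + timedelta(days=1), boundary))
# ['PRICE_INFO_RANGE', 'PRICE_INFO']
```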


@@ -4,8 +4,17 @@ from __future__ import annotations
from typing import TYPE_CHECKING
+from custom_components.tibber_prices.const import get_display_unit_factor
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
from custom_components.tibber_prices.entity_utils import add_icon_color_attribute
+# Constants for price display conversion
+_SUBUNIT_FACTOR = 100  # Conversion factor for subunit currency (ct/øre)
+_SUBUNIT_PRECISION = 2  # Decimal places for subunit currency
+_BASE_PRECISION = 4  # Decimal places for base currency
+# Import TypedDict definitions for documentation (not used in signatures)
if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
@@ -24,6 +33,8 @@ def get_tomorrow_data_available_attributes(
""" """
Build attributes for tomorrow_data_available sensor. Build attributes for tomorrow_data_available sensor.
Returns TomorrowDataAvailableAttributes structure.
Args: Args:
coordinator_data: Coordinator data dict coordinator_data: Coordinator data dict
time: TibberPricesTimeService instance time: TibberPricesTimeService instance
@ -35,12 +46,12 @@ def get_tomorrow_data_available_attributes(
if not coordinator_data: if not coordinator_data:
return None return None
price_info = coordinator_data.get("priceInfo", {}) # Use helper to get tomorrow's intervals
tomorrow_prices = price_info.get("tomorrow", []) tomorrow_prices = get_intervals_for_day_offsets(coordinator_data, [1])
tomorrow_date = time.get_local_date(offset_days=1)
interval_count = len(tomorrow_prices) interval_count = len(tomorrow_prices)
# Get expected intervals for tomorrow (handles DST) # Get expected intervals for tomorrow (handles DST)
tomorrow_date = time.get_local_date(offset_days=1)
expected_intervals = time.get_expected_intervals_for_day(tomorrow_date) expected_intervals = time.get_expected_intervals_for_day(tomorrow_date)
if interval_count == 0: if interval_count == 0:
@@ -61,19 +72,24 @@ def get_price_intervals_attributes(
*,
time: TibberPricesTimeService,
reverse_sort: bool,
+config_entry: TibberPricesConfigEntry,
) -> dict | None:
"""
Build attributes for period-based sensors (best/peak price).
+Returns PeriodAttributes structure.
All data is already calculated in the coordinator - we just need to:
1. Get period summaries from coordinator (already filtered and fully calculated)
2. Add the current timestamp
3. Find current or next period based on time
+4. Convert prices to display units based on user configuration
Args:
coordinator_data: Coordinator data dict
time: TibberPricesTimeService instance (required)
reverse_sort: True for peak_price (highest first), False for best_price (lowest first)
+config_entry: Config entry for display unit configuration
Returns:
Attributes dict with current/next period and all periods list
@@ -83,7 +99,7 @@ def get_price_intervals_attributes(
return build_no_periods_result(time=time)
# Get precomputed period summaries from coordinator
-periods_data = coordinator_data.get("periods", {})
+periods_data = coordinator_data.get("pricePeriods", {})
period_type = "peak_price" if reverse_sort else "best_price"
period_data = periods_data.get(period_type)
@@ -94,11 +110,20 @@ def get_price_intervals_attributes(
if not period_summaries:
return build_no_periods_result(time=time)
+# Filter periods for today+tomorrow (sensors don't show yesterday's periods)
+# Coordinator cache contains yesterday/today/tomorrow, but sensors only need today+tomorrow
+now = time.now()
+today_start = time.start_of_local_day(now)
+filtered_periods = [period for period in period_summaries if period.get("end") and period["end"] >= today_start]
+if not filtered_periods:
+return build_no_periods_result(time=time)
# Find current or next period based on current time
current_period = None
# First pass: find currently active period
-for period in period_summaries:
+for period in filtered_periods:
start = period.get("start")
end = period.get("end")
if start and end and time.is_current_interval(start, end):
@@ -107,14 +132,14 @@ def get_price_intervals_attributes(
# Second pass: find next future period if none is active
if not current_period:
-for period in period_summaries:
+for period in filtered_periods:
start = period.get("start")
if start and time.is_in_future(start):
current_period = period
break
-# Build final attributes
-return build_final_attributes_simple(current_period, period_summaries, time=time)
+# Build final attributes (use filtered_periods for display)
+return build_final_attributes_simple(current_period, filtered_periods, time=time, config_entry=config_entry)
def build_no_periods_result(*, time: TibberPricesTimeService) -> dict:
@@ -159,26 +184,60 @@ def add_decision_attributes(attributes: dict, current_period: dict) -> None:
attributes["rating_difference_%"] = current_period["rating_difference_%"] attributes["rating_difference_%"] = current_period["rating_difference_%"]
def add_price_attributes(attributes: dict, current_period: dict) -> None: def add_price_attributes(attributes: dict, current_period: dict, factor: int) -> None:
"""Add price statistics attributes (priority 3).""" """
if "price_avg" in current_period: Add price statistics attributes (priority 3).
attributes["price_avg"] = current_period["price_avg"]
Args:
attributes: Target dict to add attributes to
current_period: Period dict with price data (in base currency)
factor: Display unit conversion factor (100 for subunit, 1 for base)
"""
# Convert prices from base currency to display units
precision = _SUBUNIT_PRECISION if factor == _SUBUNIT_FACTOR else _BASE_PRECISION
if "price_mean" in current_period:
attributes["price_mean"] = round(current_period["price_mean"] * factor, precision)
if "price_median" in current_period:
attributes["price_median"] = round(current_period["price_median"] * factor, precision)
if "price_min" in current_period: if "price_min" in current_period:
attributes["price_min"] = current_period["price_min"] attributes["price_min"] = round(current_period["price_min"] * factor, precision)
if "price_max" in current_period: if "price_max" in current_period:
attributes["price_max"] = current_period["price_max"] attributes["price_max"] = round(current_period["price_max"] * factor, precision)
if "price_spread" in current_period: if "price_spread" in current_period:
attributes["price_spread"] = current_period["price_spread"] attributes["price_spread"] = round(current_period["price_spread"] * factor, precision)
if "price_coefficient_variation_%" in current_period:
attributes["price_coefficient_variation_%"] = current_period["price_coefficient_variation_%"]
if "volatility" in current_period: if "volatility" in current_period:
attributes["volatility"] = current_period["volatility"] attributes["volatility"] = current_period["volatility"] # Volatility is not a price, keep as-is
def add_comparison_attributes(attributes: dict, current_period: dict) -> None: def add_comparison_attributes(attributes: dict, current_period: dict, factor: int) -> None:
"""Add price comparison attributes (priority 4).""" """
Add price comparison attributes (priority 4).
Args:
attributes: Target dict to add attributes to
current_period: Period dict with price diff data (in base currency)
factor: Display unit conversion factor (100 for subunit, 1 for base)
"""
# Convert price differences from base currency to display units
precision = _SUBUNIT_PRECISION if factor == _SUBUNIT_FACTOR else _BASE_PRECISION
if "period_price_diff_from_daily_min" in current_period: if "period_price_diff_from_daily_min" in current_period:
attributes["period_price_diff_from_daily_min"] = current_period["period_price_diff_from_daily_min"] attributes["period_price_diff_from_daily_min"] = round(
current_period["period_price_diff_from_daily_min"] * factor, precision
)
if "period_price_diff_from_daily_min_%" in current_period: if "period_price_diff_from_daily_min_%" in current_period:
attributes["period_price_diff_from_daily_min_%"] = current_period["period_price_diff_from_daily_min_%"] attributes["period_price_diff_from_daily_min_%"] = current_period["period_price_diff_from_daily_min_%"]
if "period_price_diff_from_daily_max" in current_period:
attributes["period_price_diff_from_daily_max"] = round(
current_period["period_price_diff_from_daily_max"] * factor, precision
)
if "period_price_diff_from_daily_max_%" in current_period:
attributes["period_price_diff_from_daily_max_%"] = current_period["period_price_diff_from_daily_max_%"]
def add_detail_attributes(attributes: dict, current_period: dict) -> None:
@@ -210,11 +269,51 @@ def add_relaxation_attributes(attributes: dict, current_period: dict) -> None:
attributes["relaxation_threshold_applied_%"] = current_period["relaxation_threshold_applied_%"]
def _convert_periods_to_display_units(period_summaries: list[dict], factor: int) -> list[dict]:
"""
Convert price values in periods array to display units.
Args:
period_summaries: List of period dicts with price data (in base currency)
factor: Display unit conversion factor (100 for subunit, 1 for base)
Returns:
New list with converted period dicts
"""
precision = _SUBUNIT_PRECISION if factor == _SUBUNIT_FACTOR else _BASE_PRECISION
converted_periods = []
for period in period_summaries:
converted = period.copy()
# Convert all price fields
price_fields = ["price_mean", "price_median", "price_min", "price_max", "price_spread"]
for field in price_fields:
if field in converted:
converted[field] = round(converted[field] * factor, precision)
# Convert price differences (not percentages)
if "period_price_diff_from_daily_min" in converted:
converted["period_price_diff_from_daily_min"] = round(
converted["period_price_diff_from_daily_min"] * factor, precision
)
if "period_price_diff_from_daily_max" in converted:
converted["period_price_diff_from_daily_max"] = round(
converted["period_price_diff_from_daily_max"] * factor, precision
)
converted_periods.append(converted)
return converted_periods
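The subunit conversion used by `_convert_periods_to_display_units()` can be exercised standalone. This sketch reuses the module's constants and a reduced `convert_period` helper (a hypothetical name for one period's worth of the loop above):

```python
# Minimal standalone sketch of the display-unit conversion: prices are
# scaled by the factor and rounded; non-price fields pass through untouched.

_SUBUNIT_FACTOR = 100   # ct/øre per base currency unit
_SUBUNIT_PRECISION = 2  # decimal places for subunit display
_BASE_PRECISION = 4     # decimal places for base-currency display

PRICE_FIELDS = ("price_mean", "price_median", "price_min", "price_max", "price_spread")


def convert_period(period: dict, factor: int) -> dict:
    """Return a copy of one period with price fields in display units."""
    precision = _SUBUNIT_PRECISION if factor == _SUBUNIT_FACTOR else _BASE_PRECISION
    converted = period.copy()
    for field in PRICE_FIELDS:
        if field in converted:
            converted[field] = round(converted[field] * factor, precision)
    return converted


period = {"price_min": 0.2345, "price_max": 0.31, "volatility": 0.12}
print(convert_period(period, 100))  # prices in ct/øre, volatility untouched
print(convert_period(period, 1))    # base currency: values effectively unchanged
```

The `round()` after multiplication matters: `0.2345 * 100` is not exactly `23.45` in binary floating point, so rounding to the display precision keeps attributes clean.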
def build_final_attributes_simple(
current_period: dict | None,
period_summaries: list[dict],
*,
time: TibberPricesTimeService,
+config_entry: TibberPricesConfigEntry,
) -> dict:
"""
Build the final attributes dictionary from coordinator's period summaries.
@@ -223,11 +322,12 @@ def build_final_attributes_simple(
1. Adds the current timestamp (only thing calculated every 15min)
2. Uses the current/next period from summaries
3. Adds nested period summaries
+4. Converts prices to display units based on user configuration
Attributes are ordered following the documented priority:
1. Time information (timestamp, start, end, duration)
2. Core decision attributes (level, rating_level, rating_difference_%)
-3. Price statistics (price_avg, price_min, price_max, price_spread, volatility)
+3. Price statistics (price_mean, price_median, price_min, price_max, price_spread, volatility)
4. Price differences (period_price_diff_from_daily_min, period_price_diff_from_daily_min_%)
5. Detail information (period_interval_count, period_position, periods_total, periods_remaining)
6. Relaxation information (relaxation_active, relaxation_level, relaxation_threshold_original_%,
@@ -238,6 +338,7 @@ def build_final_attributes_simple(
current_period: The current or next period (already complete from coordinator)
period_summaries: All period summaries from coordinator
time: TibberPricesTimeService instance (required)
+config_entry: Config entry for display unit configuration
Returns:
Complete attributes dict with all fields
@@ -247,6 +348,9 @@ def build_final_attributes_simple(
current_minute = (now.minute // 15) * 15
timestamp = now.replace(minute=current_minute, second=0, microsecond=0)
+# Get display unit factor (100 for subunit, 1 for base currency)
+factor = get_display_unit_factor(config_entry)
if current_period:
# Build attributes in priority order using helper methods
attributes = {}
@@ -257,11 +361,11 @@ def build_final_attributes_simple(
# 2. Core decision attributes
add_decision_attributes(attributes, current_period)
-# 3. Price statistics
-add_price_attributes(attributes, current_period)
+# 3. Price statistics (converted to display units)
+add_price_attributes(attributes, current_period, factor)
-# 4. Price differences
-add_comparison_attributes(attributes, current_period)
+# 4. Price differences (converted to display units)
+add_comparison_attributes(attributes, current_period, factor)
# 5. Detail information
add_detail_attributes(attributes, current_period)
@@ -269,15 +373,15 @@ def build_final_attributes_simple(
# 6. Relaxation information (only if period was relaxed)
add_relaxation_attributes(attributes, current_period)
-# 7. Meta information (periods array)
-attributes["periods"] = period_summaries
+# 7. Meta information (periods array - prices converted to display units)
+attributes["periods"] = _convert_periods_to_display_units(period_summaries, factor)
return attributes
-# No current/next period found - return all periods with timestamp
+# No current/next period found - return all periods with timestamp (prices converted)
return {
"timestamp": timestamp,
-"periods": period_summaries,
+"periods": _convert_periods_to_display_units(period_summaries, factor),
}


@@ -5,6 +5,8 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from custom_components.tibber_prices.coordinator import TIME_SENSITIVE_ENTITY_KEYS
+from custom_components.tibber_prices.coordinator.core import get_connection_state
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
from custom_components.tibber_prices.entity import TibberPricesEntity
from custom_components.tibber_prices.entity_utils import get_binary_sensor_icon
from homeassistant.components.binary_sensor import (
@@ -12,6 +14,8 @@ from homeassistant.components.binary_sensor import (
BinarySensorEntityDescription,
)
from homeassistant.core import callback
+from homeassistant.exceptions import ConfigEntryAuthFailed
+from homeassistant.helpers.restore_state import RestoreEntity
from .attributes import (
build_async_extra_state_attributes,
@@ -19,7 +23,6 @@ from .attributes import (
get_price_intervals_attributes,
get_tomorrow_data_available_attributes,
)
-from .definitions import PERIOD_LOOKAHEAD_HOURS
if TYPE_CHECKING:
from collections.abc import Callable
@@ -30,8 +33,41 @@ if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
-class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
-"""tibber_prices binary_sensor class."""
+class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity, RestoreEntity):
+"""tibber_prices binary_sensor class with state restoration."""
# Attributes excluded from recorder history
# See: https://developers.home-assistant.io/docs/core/entity/#excluding-state-attributes-from-recorder-history
_unrecorded_attributes = frozenset(
{
"timestamp",
# Descriptions/Help Text (static, large)
"description",
"usage_tips",
# Large Nested Structures
"periods", # Array of all period summaries
# Frequently Changing Diagnostics
"icon_color",
"data_status",
# Static/Rarely Changing
"level_value",
"rating_value",
"level_id",
"rating_id",
# Relaxation Details
"relaxation_level",
"relaxation_threshold_original_%",
"relaxation_threshold_applied_%",
# Redundant/Derived
"price_spread",
"volatility",
"rating_difference_%",
"period_price_diff_from_daily_min",
"period_price_diff_from_daily_min_%",
"periods_total",
"periods_remaining",
}
)
def __init__(
self,
@@ -49,6 +85,11 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
"""When entity is added to hass.""" """When entity is added to hass."""
await super().async_added_to_hass() await super().async_added_to_hass()
# Restore last state if available
if (last_state := await self.async_get_last_state()) is not None and last_state.state in ("on", "off"):
# Restore binary state (on/off) - will be used until first coordinator update
self._attr_is_on = last_state.state == "on"
# Register with coordinator for time-sensitive updates if applicable
if self.entity_description.key in TIME_SENSITIVE_ENTITY_KEYS:
self._time_sensitive_remove_listener = self.coordinator.async_add_time_sensitive_listener(
@@ -85,7 +126,7 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
state_getters = {
"peak_price_period": self._peak_price_state,
"best_price_period": self._best_price_state,
-"connection": lambda: True if self.coordinator.data else None,
+"connection": lambda: get_connection_state(self.coordinator),
"tomorrow_data_available": self._tomorrow_data_available_state,
"has_ventilation_system": self._has_ventilation_system_state,
"realtime_consumption_enabled": self._realtime_consumption_enabled_state,
@@ -97,7 +138,12 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
"""Return True if the current time is within a best price period."""
if not self.coordinator.data:
return None
-attrs = get_price_intervals_attributes(self.coordinator.data, reverse_sort=False, time=self.coordinator.time)
+attrs = get_price_intervals_attributes(
+self.coordinator.data,
+reverse_sort=False,
+time=self.coordinator.time,
+config_entry=self.coordinator.config_entry,
+)
if not attrs:
return False  # Should not happen, but safety fallback
start = attrs.get("start")
@@ -111,7 +157,12 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
"""Return True if the current time is within a peak price period."""
if not self.coordinator.data:
return None
-attrs = get_price_intervals_attributes(self.coordinator.data, reverse_sort=True, time=self.coordinator.time)
+attrs = get_price_intervals_attributes(
+self.coordinator.data,
+reverse_sort=True,
+time=self.coordinator.time,
+config_entry=self.coordinator.config_entry,
+)
if not attrs:
return False  # Should not happen, but safety fallback
start = attrs.get("start")
@@ -123,14 +174,21 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
def _tomorrow_data_available_state(self) -> bool | None:
"""Return True if tomorrow's data is fully available, False if not, None if unknown."""
+# Auth errors: Cannot reliably check - return unknown
+# User must fix auth via reauth flow before we can determine tomorrow data availability
+if isinstance(self.coordinator.last_exception, ConfigEntryAuthFailed):
+return None
+# No data: unknown state (initializing or error)
if not self.coordinator.data:
return None
-price_info = self.coordinator.data.get("priceInfo", {})
-tomorrow_prices = price_info.get("tomorrow", [])
+# Check tomorrow data availability (normal operation)
+tomorrow_prices = get_intervals_for_day_offsets(self.coordinator.data, [1])
+tomorrow_date = self.coordinator.time.get_local_date(offset_days=1)
interval_count = len(tomorrow_prices)
# Get expected intervals for tomorrow (handles DST)
-tomorrow_date = self.coordinator.time.get_local_date(offset_days=1)
expected_intervals = self.coordinator.time.get_expected_intervals_for_day(tomorrow_date)
if interval_count == expected_intervals:
@@ -139,6 +197,31 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
return False
return False
@property
def available(self) -> bool:
"""
Return if entity is available.
Override base implementation for connection sensor which should
always be available to show connection state.
"""
# Connection sensor is always available (shows connection state)
if self.entity_description.key == "connection":
return True
# All other binary sensors use base availability logic
return super().available
@property
def force_update(self) -> bool:
"""
Force update for connection sensor to record all state changes.
Connection sensor should write every state change to history,
even if the state (on/off) is the same, to track connectivity issues.
"""
return self.entity_description.key == "connection"
def _has_ventilation_system_state(self) -> bool | None:
"""Return True if the home has a ventilation system."""
if not self.coordinator.data:
@ -197,9 +280,19 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
key = self.entity_description.key key = self.entity_description.key
if key == "peak_price_period": if key == "peak_price_period":
return get_price_intervals_attributes(self.coordinator.data, reverse_sort=True, time=self.coordinator.time) return get_price_intervals_attributes(
self.coordinator.data,
reverse_sort=True,
time=self.coordinator.time,
config_entry=self.coordinator.config_entry,
)
if key == "best_price_period": if key == "best_price_period":
return get_price_intervals_attributes(self.coordinator.data, reverse_sort=False, time=self.coordinator.time) return get_price_intervals_attributes(
self.coordinator.data,
reverse_sort=False,
time=self.coordinator.time,
config_entry=self.coordinator.config_entry,
)
if key == "tomorrow_data_available": if key == "tomorrow_data_available":
return self._get_tomorrow_data_available_attributes() return self._get_tomorrow_data_available_attributes()
@@ -208,11 +301,13 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
     @callback
     def _handle_coordinator_update(self) -> None:
         """Handle updated data from the coordinator."""
-        # Chart data export: No automatic refresh needed.
-        # Data only refreshes on:
-        # 1. Initial sensor activation (async_added_to_hass)
-        # 2. Config changes via Options Flow (triggers re-add)
-        # Hourly coordinator updates don't change the chart data content.
+        # All binary sensors get push updates when coordinator has new data:
+        # - tomorrow_data_available: Reflects new data availability immediately after API fetch
+        # - connection: Reflects connection state changes immediately
+        # - chart_data_export: Updates chart data when price data changes
+        # - peak_price_period, best_price_period: Update when periods change (also get Timer #2 updates)
+        # - data_lifecycle_status: Gets both push and Timer #2 updates
+        self.async_write_ha_state()

     @property
     def is_on(self) -> bool | None:
@@ -250,10 +345,10 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
     def _has_future_periods(self) -> bool:
         """
-        Check if there are periods starting within the next 6 hours.
+        Check if there are any future periods.

-        Returns True if any period starts between now and PERIOD_LOOKAHEAD_HOURS from now.
-        This provides a practical planning horizon instead of hard midnight cutoff.
+        Returns True if any period starts in the future (no time limit).
+        This ensures icons show "waiting" state whenever periods are scheduled.
         """
         attrs = self._get_sensor_attributes()
         if not attrs or "periods" not in attrs:

@@ -262,15 +357,15 @@ class TibberPricesBinarySensor(TibberPricesEntity, BinarySensorEntity):
         time = self.coordinator.time
         periods = attrs.get("periods", [])

-        # Check if any period starts within the look-ahead window
+        # Check if any period starts in the future (no time limit)
         for period in periods:
             start_str = period.get("start")
             if start_str:
                 # Already datetime object (periods come from coordinator.data)
                 start_time = start_str if not isinstance(start_str, str) else time.parse_datetime(start_str)
-                # Period starts in the future but within our horizon
-                if start_time and time.is_time_within_horizon(start_time, hours=PERIOD_LOOKAHEAD_HOURS):
+                # Period starts in the future
+                if start_time and time.is_in_future(start_time):
                     return True

         return False
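The replacement check reduces to "does any period start after now". A framework-free sketch (function name and period shape assumed from the attributes above; the real method reads periods via the coordinator's time helper):

```python
from datetime import datetime, timezone


def has_future_periods(periods: list[dict], now: datetime) -> bool:
    """Return True if any period starts after `now` - no look-ahead cap.

    The previous implementation additionally required the start to fall
    within a 6-hour horizon; that second condition is simply gone.
    """
    for period in periods:
        start = period.get("start")
        if isinstance(start, str):
            start = datetime.fromisoformat(start)  # ISO 8601 timestamp
        if start is not None and start > now:
            return True
    return False
```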


@@ -8,9 +8,8 @@ from homeassistant.components.binary_sensor import (
 )
 from homeassistant.const import EntityCategory

-# Look-ahead window for future period detection (hours)
-# Icons will show "waiting" state if a period starts within this window
-PERIOD_LOOKAHEAD_HOURS = 6
+# Period lookahead removed - icons show "waiting" state if ANY future periods exist
+# No artificial time limit - show all periods until midnight

 ENTITY_DESCRIPTIONS = (
     BinarySensorEntityDescription(

@@ -39,6 +38,7 @@ ENTITY_DESCRIPTIONS = (
         icon="mdi:calendar-check",
         device_class=None,  # No specific device_class = shows generic "On/Off"
         entity_category=EntityCategory.DIAGNOSTIC,
+        entity_registry_enabled_default=True,  # Critical for automations
     ),
     BinarySensorEntityDescription(
         key="has_ventilation_system",


@@ -0,0 +1,173 @@
"""
Type definitions for Tibber Prices binary sensor attributes.
These TypedDict definitions serve as **documentation** of the attribute structure
for each binary sensor type. They enable IDE autocomplete and type checking when
working with attribute dictionaries.
NOTE: In function signatures, we still use dict[str, Any] for flexibility,
but these TypedDict definitions document what keys and types are expected.
IMPORTANT: PriceLevel and PriceRating types are duplicated here to avoid
cross-platform dependencies. Keep in sync with sensor/types.py.
"""
from __future__ import annotations
from typing import Literal, TypedDict
# ============================================================================
# Literal Type Definitions (Duplicated from sensor/types.py)
# ============================================================================
# SYNC: Keep these in sync with:
# 1. sensor/types.py (Literal type definitions)
# 2. const.py (runtime string constants - single source of truth)
#
# const.py defines:
# - PRICE_LEVEL_VERY_CHEAP, PRICE_LEVEL_CHEAP, etc.
# - PRICE_RATING_LOW, PRICE_RATING_NORMAL, etc.
#
# These types are intentionally duplicated here to avoid cross-platform imports.
# Binary sensor attributes need these types for type safety without importing
# from sensor/ package (maintains platform separation).
# Price level literals (shared with sensor platform - keep in sync!)
PriceLevel = Literal[
"VERY_CHEAP",
"CHEAP",
"NORMAL",
"EXPENSIVE",
"VERY_EXPENSIVE",
]
# Price rating literals (shared with sensor platform - keep in sync!)
PriceRating = Literal[
"LOW",
"NORMAL",
"HIGH",
]
class BaseAttributes(TypedDict, total=False):
    """
    Base attributes common to all binary sensors.

    All binary sensor attributes include at minimum:
    - timestamp: ISO 8601 string indicating when the state/attributes are valid
    - error: Optional error message if something went wrong
    """

    timestamp: str
    error: str
class TomorrowDataAvailableAttributes(BaseAttributes, total=False):
    """
    Attributes for tomorrow_data_available binary sensor.

    Indicates whether tomorrow's price data is available from Tibber API.
    """

    intervals_available: int  # Number of intervals available for tomorrow
    data_status: Literal["none", "partial", "full"]  # Data completeness status
class PeriodSummary(TypedDict, total=False):
    """
    Structure for period summary nested in period attributes.

    Each period summary contains all calculated information about one period.
    """

    # Time information (priority 1)
    start: str  # ISO 8601 timestamp of period start
    end: str  # ISO 8601 timestamp of period end
    duration_minutes: int  # Duration in minutes
    # Core decision attributes (priority 2)
    level: PriceLevel  # Price level classification
    rating_level: PriceRating  # Price rating classification
    rating_difference_pct: float  # Difference from daily average (%)
    # Price statistics (priority 3)
    price_mean: float  # Arithmetic mean price in period
    price_median: float  # Median price in period
    price_min: float  # Minimum price in period
    price_max: float  # Maximum price in period
    price_spread: float  # Price spread (max - min)
    volatility: float  # Price volatility within period
    # Price comparison (priority 4)
    period_price_diff_from_daily_min: float  # Difference from daily min
    period_price_diff_from_daily_min_pct: float  # Difference from daily min (%)
    # Detail information (priority 5)
    period_interval_count: int  # Number of intervals in period
    period_position: int  # Period position (1-based)
    periods_total: int  # Total number of periods
    periods_remaining: int  # Remaining periods after this one
    # Relaxation information (priority 6 - only if period was relaxed)
    relaxation_active: bool  # Whether this period was found via relaxation
    relaxation_level: int  # Relaxation level used (1-based)
    relaxation_threshold_original_pct: float  # Original flex threshold (%)
    relaxation_threshold_applied_pct: float  # Applied flex threshold after relaxation (%)
class PeriodAttributes(BaseAttributes, total=False):
    """
    Attributes for period-based binary sensors (best_price_period, peak_price_period).

    These sensors indicate whether the current/next cheap/expensive period is active.

    Attributes follow priority ordering:
    1. Time information (timestamp, start, end, duration_minutes)
    2. Core decision attributes (level, rating_level, rating_difference_%)
    3. Price statistics (price_mean, price_median, price_min, price_max, price_spread, volatility)
    4. Price comparison (period_price_diff_from_daily_min, period_price_diff_from_daily_min_%)
    5. Detail information (period_interval_count, period_position, periods_total, periods_remaining)
    6. Relaxation information (only if period was relaxed)
    7. Meta information (periods list)
    """

    # Time information (priority 1) - start/end refer to current/next period
    start: str | None  # ISO 8601 timestamp of current/next period start
    end: str | None  # ISO 8601 timestamp of current/next period end
    duration_minutes: int  # Duration of current/next period in minutes
    # Core decision attributes (priority 2)
    level: PriceLevel  # Price level of current/next period
    rating_level: PriceRating  # Price rating of current/next period
    rating_difference_pct: float  # Difference from daily average (%)
    # Price statistics (priority 3)
    price_mean: float  # Arithmetic mean price in current/next period
    price_median: float  # Median price in current/next period
    price_min: float  # Minimum price in current/next period
    price_max: float  # Maximum price in current/next period
    price_spread: float  # Price spread (max - min) in current/next period
    volatility: float  # Price volatility within current/next period
    # Price comparison (priority 4)
    period_price_diff_from_daily_min: float  # Difference from daily min
    period_price_diff_from_daily_min_pct: float  # Difference from daily min (%)
    # Detail information (priority 5)
    period_interval_count: int  # Number of intervals in current/next period
    period_position: int  # Period position (1-based)
    periods_total: int  # Total number of periods found
    periods_remaining: int  # Remaining periods after current/next one
    # Relaxation information (priority 6 - only if period was relaxed)
    relaxation_active: bool  # Whether current/next period was found via relaxation
    relaxation_level: int  # Relaxation level used (1-based)
    relaxation_threshold_original_pct: float  # Original flex threshold (%)
    relaxation_threshold_applied_pct: float  # Applied flex threshold after relaxation (%)
    # Meta information (priority 7)
    periods: list[PeriodSummary]  # All periods found (sorted by start time)
# Union type for all binary sensor attributes (for documentation purposes)
# In actual code, use dict[str, Any] for flexibility
BinarySensorAttributes = TomorrowDataAvailableAttributes | PeriodAttributes
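Because these classes use `total=False`, every key is optional: attribute dicts can be built incrementally while a type checker still catches misspelled keys and wrong value types. A minimal illustration using a trimmed copy of `TomorrowDataAvailableAttributes` (a standalone re-declaration for the example, not an import of the module above):

```python
from typing import Literal, TypedDict


class TomorrowAttrs(TypedDict, total=False):
    """Trimmed copy of TomorrowDataAvailableAttributes for illustration."""

    timestamp: str
    intervals_available: int
    data_status: Literal["none", "partial", "full"]


# total=False: a partial dict type-checks, and keys can be added as data arrives
attrs: TomorrowAttrs = {"intervals_available": 92}
attrs["data_status"] = "full"
```

At runtime these are plain dicts; the benefit is entirely static, which is why the module above still uses `dict[str, Any]` in signatures.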


@@ -14,6 +14,7 @@ from .config_flow_handlers.schemas import (
     get_best_price_schema,
     get_options_init_schema,
     get_peak_price_schema,
+    get_price_level_schema,
     get_price_rating_schema,
     get_price_trend_schema,
     get_reauth_confirm_schema,

@@ -25,7 +26,7 @@ from .config_flow_handlers.schemas import (
 from .config_flow_handlers.subentry_flow import (
     TibberPricesSubentryFlowHandler as SubentryFlowHandler,
 )
-from .config_flow_handlers.user_flow import TibberPricesFlowHandler as ConfigFlow
+from .config_flow_handlers.user_flow import TibberPricesConfigFlowHandler as ConfigFlow
 from .config_flow_handlers.validators import (
     TibberPricesCannotConnectError,
     TibberPricesInvalidAuthError,

@@ -41,6 +42,7 @@ __all__ = [
     "get_best_price_schema",
     "get_options_init_schema",
     "get_peak_price_schema",
+    "get_price_level_schema",
     "get_price_rating_schema",
     "get_price_trend_schema",
     "get_reauth_confirm_schema",


@@ -27,6 +27,7 @@ from custom_components.tibber_prices.config_flow_handlers.schemas import (
     get_best_price_schema,
     get_options_init_schema,
     get_peak_price_schema,
+    get_price_level_schema,
     get_price_rating_schema,
     get_price_trend_schema,
     get_reauth_confirm_schema,

@@ -39,7 +40,7 @@ from custom_components.tibber_prices.config_flow_handlers.subentry_flow import (
     TibberPricesSubentryFlowHandler,
 )
 from custom_components.tibber_prices.config_flow_handlers.user_flow import (
-    TibberPricesFlowHandler,
+    TibberPricesConfigFlowHandler,
 )
 from custom_components.tibber_prices.config_flow_handlers.validators import (
     TibberPricesCannotConnectError,

@@ -49,13 +50,14 @@ from custom_components.tibber_prices.config_flow_handlers.validators import (
 __all__ = [
     "TibberPricesCannotConnectError",
-    "TibberPricesFlowHandler",
+    "TibberPricesConfigFlowHandler",
     "TibberPricesInvalidAuthError",
     "TibberPricesOptionsFlowHandler",
     "TibberPricesSubentryFlowHandler",
     "get_best_price_schema",
     "get_options_init_schema",
     "get_peak_price_schema",
+    "get_price_level_schema",
     "get_price_rating_schema",
     "get_price_trend_schema",
     "get_reauth_confirm_schema",


@@ -0,0 +1,243 @@
"""
Entity check utilities for options flow.
This module provides functions to check if relevant entities are enabled
for specific options flow steps. If no relevant entities are enabled,
a warning can be displayed to users.
"""
from __future__ import annotations
import logging
from typing import TYPE_CHECKING
from custom_components.tibber_prices.const import DOMAIN
from homeassistant.helpers.entity_registry import async_get as async_get_entity_registry
if TYPE_CHECKING:
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
_LOGGER = logging.getLogger(__name__)
# Maximum number of example sensors to show in warning message
MAX_EXAMPLE_SENSORS = 3
# Threshold for using "and" vs "," in formatted names
NAMES_SIMPLE_JOIN_THRESHOLD = 2
# Mapping of options flow steps to affected sensor keys
# These are the entity keys (from sensor/definitions.py and binary_sensor/definitions.py)
# that are affected by each settings page
STEP_TO_SENSOR_KEYS: dict[str, list[str]] = {
# Price Rating settings affect all rating sensors
"current_interval_price_rating": [
# Interval rating sensors
"current_interval_price_rating",
"next_interval_price_rating",
"previous_interval_price_rating",
# Rolling hour rating sensors
"current_hour_price_rating",
"next_hour_price_rating",
# Daily rating sensors
"yesterday_price_rating",
"today_price_rating",
"tomorrow_price_rating",
],
# Price Level settings affect level sensors and period binary sensors
"price_level": [
# Interval level sensors
"current_interval_price_level",
"next_interval_price_level",
"previous_interval_price_level",
# Rolling hour level sensors
"current_hour_price_level",
"next_hour_price_level",
# Daily level sensors
"yesterday_price_level",
"today_price_level",
"tomorrow_price_level",
# Binary sensors that use level filtering
"best_price_period",
"peak_price_period",
],
# Volatility settings affect volatility sensors
"volatility": [
"today_volatility",
"tomorrow_volatility",
"next_24h_volatility",
"today_tomorrow_volatility",
# Also affects trend sensors (adaptive thresholds)
"current_price_trend",
"next_price_trend_change",
"price_trend_1h",
"price_trend_2h",
"price_trend_3h",
"price_trend_4h",
"price_trend_5h",
"price_trend_6h",
"price_trend_8h",
"price_trend_12h",
],
# Best Price settings affect best price binary sensor and timing sensors
"best_price": [
# Binary sensor
"best_price_period",
# Timing sensors
"best_price_end_time",
"best_price_period_duration",
"best_price_remaining_minutes",
"best_price_progress",
"best_price_next_start_time",
"best_price_next_in_minutes",
],
# Peak Price settings affect peak price binary sensor and timing sensors
"peak_price": [
# Binary sensor
"peak_price_period",
# Timing sensors
"peak_price_end_time",
"peak_price_period_duration",
"peak_price_remaining_minutes",
"peak_price_progress",
"peak_price_next_start_time",
"peak_price_next_in_minutes",
],
# Price Trend settings affect trend sensors
"price_trend": [
"current_price_trend",
"next_price_trend_change",
"price_trend_1h",
"price_trend_2h",
"price_trend_3h",
"price_trend_4h",
"price_trend_5h",
"price_trend_6h",
"price_trend_8h",
"price_trend_12h",
],
}
def check_relevant_entities_enabled(
    hass: HomeAssistant,
    config_entry: ConfigEntry,
    step_id: str,
) -> tuple[bool, list[str]]:
    """
    Check if any relevant entities for a settings step are enabled.

    Args:
        hass: Home Assistant instance
        config_entry: Current config entry
        step_id: The options flow step ID

    Returns:
        Tuple of (has_enabled_entities, list_of_example_sensor_names)
        - has_enabled_entities: True if at least one relevant entity is enabled
        - list_of_example_sensor_names: List of example sensor keys for the warning message

    """
    sensor_keys = STEP_TO_SENSOR_KEYS.get(step_id)
    if not sensor_keys:
        # No mapping for this step - no check needed
        return True, []

    entity_registry = async_get_entity_registry(hass)
    entry_id = config_entry.entry_id

    enabled_count = 0
    example_sensors: list[str] = []

    for entity in entity_registry.entities.values():
        # Check if entity belongs to our integration and config entry
        if entity.config_entry_id != entry_id:
            continue
        if entity.platform != DOMAIN:
            continue

        # Extract the sensor key from unique_id
        # unique_id format: "{home_id}_{sensor_key}" or "{entry_id}_{sensor_key}"
        unique_id = entity.unique_id or ""

        # The sensor key is after the last underscore that separates the ID prefix
        # We check if any of our target keys is contained in the unique_id
        for sensor_key in sensor_keys:
            if unique_id.endswith(f"_{sensor_key}") or unique_id == sensor_key:
                # Found a matching entity
                if entity.disabled_by is None:
                    # Entity is enabled
                    enabled_count += 1
                    break
                # Entity is disabled - add to examples (max MAX_EXAMPLE_SENSORS)
                if len(example_sensors) < MAX_EXAMPLE_SENSORS and sensor_key not in example_sensors:
                    example_sensors.append(sensor_key)
                break

    # If we found enabled entities, return success
    if enabled_count > 0:
        return True, []

    # No enabled entities - return the example sensors for the warning
    # If we haven't collected any examples yet, use the first from the mapping
    if not example_sensors:
        example_sensors = sensor_keys[:MAX_EXAMPLE_SENSORS]

    return False, example_sensors
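The `unique_id` match above is deliberately anchored on the `_` separator (or an exact match); a bare substring test would also fire when one sensor key is a prefix of another, e.g. `best_price_period` inside `best_price_period_duration`. A standalone sketch of the predicate:

```python
def matches_sensor_key(unique_id: str, sensor_key: str) -> bool:
    """Anchored match: the key must be the whole id or a '_'-delimited suffix."""
    return unique_id == sensor_key or unique_id.endswith(f"_{sensor_key}")


# A plain `sensor_key in unique_id` substring test would wrongly match
# "home1_best_price_period_duration" when looking for "best_price_period";
# the anchored suffix check does not.
```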
def format_sensor_names_for_warning(sensor_keys: list[str]) -> str:
    """
    Format sensor keys into human-readable names for warning message.

    Args:
        sensor_keys: List of sensor keys

    Returns:
        Formatted string like "Best Price Period, Best Price End Time, ..."

    """
    # Convert snake_case keys to Title Case names
    names = []
    for key in sensor_keys:
        # Replace underscores with spaces and title case
        name = key.replace("_", " ").title()
        names.append(name)

    if len(names) <= NAMES_SIMPLE_JOIN_THRESHOLD:
        return " and ".join(names)
    return ", ".join(names[:-1]) + ", and " + names[-1]
def check_chart_data_export_enabled(
    hass: HomeAssistant,
    config_entry: ConfigEntry,
) -> bool:
    """
    Check if the Chart Data Export sensor is enabled.

    Args:
        hass: Home Assistant instance
        config_entry: Current config entry

    Returns:
        True if the Chart Data Export sensor is enabled, False otherwise

    """
    entity_registry = async_get_entity_registry(hass)
    entry_id = config_entry.entry_id

    for entity in entity_registry.entities.values():
        # Check if entity belongs to our integration and config entry
        if entity.config_entry_id != entry_id:
            continue
        if entity.platform != DOMAIN:
            continue

        # Check for chart_data_export sensor
        unique_id = entity.unique_id or ""
        if unique_id.endswith("_chart_data_export") or unique_id == "chart_data_export":
            # Found the entity - check if enabled
            return entity.disabled_by is None

    # Entity not found (shouldn't happen, but treat as disabled)
    return False


@@ -3,19 +3,81 @@
 from __future__ import annotations

 import logging
-from typing import Any, ClassVar
+from copy import deepcopy
+from typing import TYPE_CHECKING, Any
+
+if TYPE_CHECKING:
+    from collections.abc import Mapping
+
+from custom_components.tibber_prices.config_flow_handlers.entity_check import (
+    check_chart_data_export_enabled,
+    check_relevant_entities_enabled,
+    format_sensor_names_for_warning,
+)
 from custom_components.tibber_prices.config_flow_handlers.schemas import (
+    ConfigOverrides,
     get_best_price_schema,
     get_chart_data_export_schema,
+    get_display_settings_schema,
     get_options_init_schema,
     get_peak_price_schema,
+    get_price_level_schema,
     get_price_rating_schema,
     get_price_trend_schema,
+    get_reset_to_defaults_schema,
     get_volatility_schema,
 )
-from custom_components.tibber_prices.const import DOMAIN
+from custom_components.tibber_prices.config_flow_handlers.validators import (
+    validate_best_price_distance_percentage,
+    validate_distance_percentage,
+    validate_flex_percentage,
+    validate_gap_count,
+    validate_min_periods,
+    validate_period_length,
+    validate_price_rating_threshold_high,
+    validate_price_rating_threshold_low,
+    validate_price_rating_thresholds,
+    validate_price_trend_falling,
+    validate_price_trend_rising,
+    validate_price_trend_strongly_falling,
+    validate_price_trend_strongly_rising,
+    validate_relaxation_attempts,
+    validate_volatility_threshold_high,
+    validate_volatility_threshold_moderate,
+    validate_volatility_threshold_very_high,
+    validate_volatility_thresholds,
+)
+from custom_components.tibber_prices.const import (
+    CONF_BEST_PRICE_FLEX,
+    CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
+    CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG,
+    CONF_BEST_PRICE_MIN_PERIOD_LENGTH,
+    CONF_MIN_PERIODS_BEST,
+    CONF_MIN_PERIODS_PEAK,
+    CONF_PEAK_PRICE_FLEX,
+    CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
+    CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG,
+    CONF_PEAK_PRICE_MIN_PERIOD_LENGTH,
+    CONF_PRICE_RATING_THRESHOLD_HIGH,
+    CONF_PRICE_RATING_THRESHOLD_LOW,
+    CONF_PRICE_TREND_THRESHOLD_FALLING,
+    CONF_PRICE_TREND_THRESHOLD_RISING,
+    CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING,
+    CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING,
+    CONF_RELAXATION_ATTEMPTS_BEST,
+    CONF_RELAXATION_ATTEMPTS_PEAK,
+    CONF_VOLATILITY_THRESHOLD_HIGH,
+    CONF_VOLATILITY_THRESHOLD_MODERATE,
+    CONF_VOLATILITY_THRESHOLD_VERY_HIGH,
+    DEFAULT_VOLATILITY_THRESHOLD_HIGH,
+    DEFAULT_VOLATILITY_THRESHOLD_MODERATE,
+    DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH,
+    DOMAIN,
+    async_get_translation,
+    get_default_options,
+)
 from homeassistant.config_entries import ConfigFlowResult, OptionsFlow
+from homeassistant.helpers import entity_registry as er

 _LOGGER = logging.getLogger(__name__)
@@ -23,131 +85,811 @@ _LOGGER = logging.getLogger(__name__)
 class TibberPricesOptionsFlowHandler(OptionsFlow):
     """Handle options for tibber_prices entries."""

-    # Step progress tracking
-    _TOTAL_STEPS: ClassVar[int] = 7
-    _STEP_INFO: ClassVar[dict[str, int]] = {
-        "init": 1,
-        "current_interval_price_rating": 2,
-        "volatility": 3,
-        "best_price": 4,
-        "peak_price": 5,
-        "price_trend": 6,
-        "chart_data_export": 7,
-    }
-
     def __init__(self) -> None:
         """Initialize options flow."""
         self._options: dict[str, Any] = {}
-    def _get_step_description_placeholders(self, step_id: str) -> dict[str, str]:
-        """Get description placeholders with step progress."""
-        if step_id not in self._STEP_INFO:
-            return {}
-        step_num = self._STEP_INFO[step_id]
-
-        # Get translations loaded by Home Assistant
-        standard_translations_key = f"{DOMAIN}_standard_translations_{self.hass.config.language}"
-        translations = self.hass.data.get(standard_translations_key, {})
-
-        # Get step progress text from translations with placeholders
-        step_progress_template = translations.get("common", {}).get("step_progress", "Step {step_num} of {total_steps}")
-        step_progress = step_progress_template.format(step_num=step_num, total_steps=self._TOTAL_STEPS)
+    def _merge_section_data(self, user_input: dict[str, Any]) -> None:
+        """
+        Merge section data from form input into options.
+
+        Home Assistant forms with section() return nested dicts like:
+        {"section_name": {"setting1": value1, "setting2": value2}}
+        We need to preserve this structure in config_entry.options.
+
+        Args:
+            user_input: Nested user input from form with sections
+
+        """
+        for section_key, section_data in user_input.items():
+            if isinstance(section_data, dict):
+                # This is a section - ensure the section exists in options
+                if section_key not in self._options:
+                    self._options[section_key] = {}
+                # Update the section with new values
+                self._options[section_key].update(section_data)
+            else:
+                # This is a direct value - keep it as is
+                self._options[section_key] = section_data
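Stripped of the Home Assistant plumbing, `_merge_section_data` is a one-level-deep dict merge. A standalone sketch (hypothetical free function mirroring the method above):

```python
from typing import Any


def merge_section_data(options: dict[str, Any], user_input: dict[str, Any]) -> None:
    """Merge section() form input into options, preserving one level of nesting."""
    for key, value in user_input.items():
        if isinstance(value, dict):
            # Section: merge into the existing section dict (create if missing)
            options.setdefault(key, {}).update(value)
        else:
            # Direct (non-section) value: overwrite as-is
            options[key] = value
```

Keys inside an existing section that the form did not touch survive the merge, which is the whole point of updating per-section instead of replacing the nested dict.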
+    def _migrate_config_options(self, options: Mapping[str, Any]) -> dict[str, Any]:
+        """
+        Migrate deprecated config options to current format.
+
+        This removes obsolete keys and renames changed keys to maintain
+        compatibility with older config entries.
+
+        Args:
+            options: Original options dict from config_entry
+
+        Returns:
+            Migrated options dict with deprecated keys removed/renamed
+
+        """
+        # CRITICAL: Use deepcopy to avoid modifying the original config_entry.options
+        # If we use dict(options), nested dicts are still referenced, causing
+        # self._options modifications to leak into config_entry.options
+        migrated = deepcopy(dict(options))
+        migration_performed = False
+
+        # Migration 1: Rename relaxation_step_* to relaxation_attempts_*
+        # (Changed in v0.6.0 - commit 5a5c8ca)
+        if "relaxation_step_best" in migrated:
+            migrated["relaxation_attempts_best"] = migrated.pop("relaxation_step_best")
+            migration_performed = True
+            _LOGGER.info(
+                "Migrated config option: relaxation_step_best -> relaxation_attempts_best (value: %s)",
+                migrated["relaxation_attempts_best"],
+            )
+        if "relaxation_step_peak" in migrated:
+            migrated["relaxation_attempts_peak"] = migrated.pop("relaxation_step_peak")
+            migration_performed = True
+            _LOGGER.info(
+                "Migrated config option: relaxation_step_peak -> relaxation_attempts_peak (value: %s)",
+                migrated["relaxation_attempts_peak"],
+            )
+
+        # Migration 2: Remove obsolete volatility filter options
+        # (Removed in v0.9.0 - volatility filter feature removed)
+        obsolete_keys = [
+            "best_price_min_volatility",
+            "peak_price_min_volatility",
+            "min_volatility_for_periods",
+        ]
+        for key in obsolete_keys:
+            if key in migrated:
+                old_value = migrated.pop(key)
+                migration_performed = True
+                _LOGGER.info(
+                    "Removed obsolete config option: %s (was: %s)",
+                    key,
+                    old_value,
+                )
+
+        if migration_performed:
+            _LOGGER.info("Config migration completed - deprecated options cleaned up")
+
+        return migrated
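Without the logging and config-entry plumbing, the migration is a rename map plus an obsolete-key sweep over a deep copy, so the stored options are never mutated in place. A condensed sketch using the same key names:

```python
from copy import deepcopy
from typing import Any

# Renamed in v0.6.0 / removed in v0.9.0, per the comments above
_RENAMES = {
    "relaxation_step_best": "relaxation_attempts_best",
    "relaxation_step_peak": "relaxation_attempts_peak",
}
_OBSOLETE = (
    "best_price_min_volatility",
    "peak_price_min_volatility",
    "min_volatility_for_periods",
)


def migrate_options(options: dict[str, Any]) -> dict[str, Any]:
    """Return a migrated copy; the input dict is left untouched."""
    migrated = deepcopy(options)
    for old, new in _RENAMES.items():
        if old in migrated:
            migrated[new] = migrated.pop(old)
    for key in _OBSOLETE:
        migrated.pop(key, None)  # silently drop if present
    return migrated
```

The deepcopy matters for the same reason called out in the docstring above: `dict(options)` would still share nested section dicts with `config_entry.options`.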
+    def _save_options_if_changed(self) -> bool:
+        """
+        Save options only if they actually changed.
+
+        Returns:
+            True if options were updated, False if no changes detected
+
+        """
+        # Compare old and new options
+        if self.config_entry.options != self._options:
+            self.hass.config_entries.async_update_entry(
+                self.config_entry,
+                options=self._options,
+            )
+            return True
+        return False
+    def _get_entity_warning_placeholders(self, step_id: str) -> dict[str, str]:
+        """
+        Get description placeholders for entity availability warning.
+
+        Checks if any relevant entities for the step are enabled.
+        If not, adds a warning placeholder to display in the form description.
+
+        Args:
+            step_id: The options flow step ID
+
+        Returns:
+            Dictionary with placeholder keys for the form description
+
+        """
+        has_enabled, example_sensors = check_relevant_entities_enabled(self.hass, self.config_entry, step_id)
+        if has_enabled:
+            # No warning needed - return empty placeholder
+            return {"entity_warning": ""}
+
+        # Build warning message with example sensor names
+        sensor_names = format_sensor_names_for_warning(example_sensors)
+        return {
+            "entity_warning": f"\n\n⚠️ **Note:** No sensors affected by these settings are currently enabled. "
+            f"To use these settings, first enable relevant sensors like *{sensor_names}* "
+            f"in **Settings → Devices & Services → Tibber Prices → Entities**."
+        }
-        return {
-            "step_progress": step_progress,
-        }
async def async_step_init(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult: def _get_enabled_config_entities(self) -> set[str]:
"""Manage the options - General Settings.""" """
# Initialize options from config_entry on first call Get config keys that have their config entity enabled.
if not self._options:
self._options = dict(self.config_entry.options)
Checks the entity registry for number/switch entities that override
config values. Returns the config_key for each enabled entity.
Returns:
Set of config keys (e.g., "best_price_flex", "enable_min_periods_best")
"""
enabled_keys: set[str] = set()
ent_reg = er.async_get(self.hass)
_LOGGER.debug(
"Checking for enabled config override entities for entry %s",
self.config_entry.entry_id,
)
# Map entity keys to their config keys
# Entity keys are defined in number/definitions.py and switch/definitions.py
override_entities = {
# Number entities (best price)
"number.best_price_flex_override": "best_price_flex",
"number.best_price_min_distance_override": "best_price_min_distance_from_avg",
"number.best_price_min_period_length_override": "best_price_min_period_length",
"number.best_price_min_periods_override": "min_periods_best",
"number.best_price_relaxation_attempts_override": "relaxation_attempts_best",
"number.best_price_gap_count_override": "best_price_max_level_gap_count",
# Number entities (peak price)
"number.peak_price_flex_override": "peak_price_flex",
"number.peak_price_min_distance_override": "peak_price_min_distance_from_avg",
"number.peak_price_min_period_length_override": "peak_price_min_period_length",
"number.peak_price_min_periods_override": "min_periods_peak",
"number.peak_price_relaxation_attempts_override": "relaxation_attempts_peak",
"number.peak_price_gap_count_override": "peak_price_max_level_gap_count",
# Switch entities
"switch.best_price_enable_relaxation_override": "enable_min_periods_best",
"switch.peak_price_enable_relaxation_override": "enable_min_periods_peak",
}
# Check each possible override entity
for entity_id_suffix, config_key in override_entities.items():
# Entity IDs include device name, so we need to search by unique_id pattern
# The unique_id follows pattern: {config_entry_id}_{entity_key}
domain, entity_key = entity_id_suffix.split(".", 1)
# Find entity by iterating through registry
for entity_entry in ent_reg.entities.values():
if (
entity_entry.domain == domain
and entity_entry.config_entry_id == self.config_entry.entry_id
and entity_entry.unique_id
and entity_entry.unique_id.endswith(entity_key)
and not entity_entry.disabled
):
_LOGGER.debug(
"Found enabled config override entity: %s -> config_key=%s",
entity_entry.entity_id,
config_key,
)
enabled_keys.add(config_key)
break
_LOGGER.debug("Enabled config override keys: %s", enabled_keys)
return enabled_keys
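The registry scan above relies on the `{config_entry_id}_{entity_key}` unique_id convention and matches via `endswith`. A minimal sketch (all ids invented for illustration) of why an exact-tail comparison is the safer form of that check — `endswith` alone would also accept any key that happens to be a suffix of another:

```python
def matches_entity_key(unique_id: str, entry_id: str, entity_key: str) -> bool:
    """Return True if unique_id follows the {entry_id}_{entity_key} pattern exactly."""
    return unique_id == f"{entry_id}_{entity_key}"

# Invented example ids:
uid = "abc123_best_price_flex_override"
print(matches_entity_key(uid, "abc123", "best_price_flex_override"))  # True
# Plain suffix matching also accepts a partial key, which exact matching rejects:
print(uid.endswith("price_flex_override"))  # True
print(matches_entity_key(uid, "abc123", "price_flex_override"))  # False
```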
def _get_active_overrides(self) -> ConfigOverrides:
"""
Build override dict from enabled config entities.
Returns a dict structure compatible with schema functions.
"""
enabled_keys = self._get_enabled_config_entities()
if not enabled_keys:
_LOGGER.debug("No enabled config override entities found")
return {}
# Build structure expected by schema: {section: {key: True}}
# Section doesn't matter for the read_only check; we just need the key present
overrides: ConfigOverrides = {"_enabled": {}}
for key in enabled_keys:
overrides["_enabled"][key] = True
_LOGGER.debug("Active overrides structure: %s", overrides)
return overrides
def _get_override_warning_placeholder(self, step_id: str, overrides: ConfigOverrides) -> dict[str, str]:
"""
Get description placeholder for config override warning.
Args:
step_id: The options flow step ID (e.g., "best_price", "peak_price")
overrides: Active overrides dictionary
Returns:
Dictionary with 'override_warning' placeholder
"""
# Define which config keys belong to each step
step_keys: dict[str, set[str]] = {
"best_price": {
"best_price_flex",
"best_price_min_distance_from_avg",
"best_price_min_period_length",
"min_periods_best",
"relaxation_attempts_best",
"enable_min_periods_best",
},
"peak_price": {
"peak_price_flex",
"peak_price_min_distance_from_avg",
"peak_price_min_period_length",
"min_periods_peak",
"relaxation_attempts_peak",
"enable_min_periods_peak",
},
}
keys_to_check = step_keys.get(step_id, set())
enabled_keys = overrides.get("_enabled", {})
override_count = sum(1 for k in enabled_keys if k in keys_to_check)
if override_count > 0:
field_word = "field is" if override_count == 1 else "fields are"
return {
"override_warning": (
f"\n\n🔒 **{override_count} {field_word} managed by configuration entities** "
"(grayed out). Disable the config entity to edit here, "
"or change the value directly via the entity."
)
}
return {"override_warning": ""}
async def _get_override_translations(self) -> dict[str, Any]:
"""
Load override translations from common section.
Uses the system language setting from Home Assistant.
Note: HA Options Flow does not provide user_id in context,
so we cannot determine the individual user's language preference.
Returns:
Dictionary with override_warning_template, override_warning_and,
and override_field_label_* keys for each config field.
"""
# Use system language - HA Options Flow context doesn't include user_id
language = self.hass.config.language or "en"
_LOGGER.debug("Loading override translations for language: %s", language)
translations: dict[str, Any] = {}
# Load template and connector from common section
template = await async_get_translation(self.hass, ["common", "override_warning_template"], language)
_LOGGER.debug("Loaded template: %s", template)
if template:
translations["override_warning_template"] = template
and_connector = await async_get_translation(self.hass, ["common", "override_warning_and"], language)
if and_connector:
translations["override_warning_and"] = and_connector
# Load flat field label translations
field_keys = [
"best_price_min_period_length",
"best_price_max_level_gap_count",
"best_price_flex",
"best_price_min_distance_from_avg",
"enable_min_periods_best",
"min_periods_best",
"relaxation_attempts_best",
"peak_price_min_period_length",
"peak_price_max_level_gap_count",
"peak_price_flex",
"peak_price_min_distance_from_avg",
"enable_min_periods_peak",
"min_periods_peak",
"relaxation_attempts_peak",
]
for field_key in field_keys:
translation_key = f"override_field_label_{field_key}"
label = await async_get_translation(self.hass, ["common", translation_key], language)
if label:
translations[translation_key] = label
return translations
async def async_step_init(self, _user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Manage the options - show menu."""
# Always reload options from config_entry to get latest saved state
# This ensures changes from previous steps are visible
self._options = self._migrate_config_options(self.config_entry.options)
# Show menu with all configuration categories
return self.async_show_menu(
step_id="init",
menu_options=[
"general_settings",
"display_settings",
"current_interval_price_rating",
"price_level",
"volatility",
"best_price",
"peak_price",
"price_trend",
"chart_data_export",
"reset_to_defaults",
"finish",
],
)
async def async_step_reset_to_defaults(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Reset all settings to factory defaults."""
if user_input is not None:
# Check if user confirmed the reset
if user_input.get("confirm_reset", False):
# Get currency from config_entry.data (this is immutable and safe)
currency_code = self.config_entry.data.get("currency", None)
# Completely replace options with fresh defaults (factory reset)
# This discards ALL old data including legacy structures
self._options = get_default_options(currency_code)
# Force save the new options
self._save_options_if_changed()
_LOGGER.info(
"Factory reset performed for config entry '%s' - all settings restored to defaults",
self.config_entry.title,
)
# Show success message and return to menu
return self.async_abort(reason="reset_successful")
# User didn't check the box - they want to cancel
# Show info message (not error) and return to menu
return self.async_abort(reason="reset_cancelled")
# Show confirmation form with checkbox
return self.async_show_form(
step_id="reset_to_defaults",
data_schema=get_reset_to_defaults_schema(),
)
async def async_step_finish(self, _user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Close the options flow."""
# Use empty reason to close without any message
return self.async_abort(reason="finished")
async def async_step_general_settings(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure general settings."""
if user_input is not None:
# Update options with new values
self._options.update(user_input)
# Save options only if changed (triggers listeners automatically)
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="general_settings",
data_schema=get_options_init_schema(self.config_entry.options),
description_placeholders={
"user_login": self.config_entry.data.get("user_login", "N/A"),
},
)
async def async_step_display_settings(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure currency display settings."""
# Get currency from coordinator data (if available)
# During options flow setup, integration might not be fully loaded yet
currency_code = None
if DOMAIN in self.hass.data and self.config_entry.entry_id in self.hass.data[DOMAIN]:
tibber_data = self.hass.data[DOMAIN][self.config_entry.entry_id]
if tibber_data.coordinator.data:
currency_code = tibber_data.coordinator.data.get("currency")
if user_input is not None:
# Update options with new values
self._options.update(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="display_settings",
data_schema=get_display_settings_schema(self.config_entry.options, currency_code),
)
async def async_step_current_interval_price_rating(
self, user_input: dict[str, Any] | None = None
) -> ConfigFlowResult:
"""Configure price rating thresholds."""
errors: dict[str, str] = {}
if user_input is not None:
# Schema is now flattened - fields come directly in user_input
# But we still need to store them in nested structure for coordinator
# Validate low price rating threshold
if CONF_PRICE_RATING_THRESHOLD_LOW in user_input and not validate_price_rating_threshold_low(
user_input[CONF_PRICE_RATING_THRESHOLD_LOW]
):
errors[CONF_PRICE_RATING_THRESHOLD_LOW] = "invalid_price_rating_low"
# Validate high price rating threshold
if CONF_PRICE_RATING_THRESHOLD_HIGH in user_input and not validate_price_rating_threshold_high(
user_input[CONF_PRICE_RATING_THRESHOLD_HIGH]
):
errors[CONF_PRICE_RATING_THRESHOLD_HIGH] = "invalid_price_rating_high"
# Cross-validate both thresholds together (LOW must be < HIGH)
if not errors:
# Get current values directly from options (now flat)
low_val = user_input.get(
CONF_PRICE_RATING_THRESHOLD_LOW, self._options.get(CONF_PRICE_RATING_THRESHOLD_LOW, -10)
)
high_val = user_input.get(
CONF_PRICE_RATING_THRESHOLD_HIGH, self._options.get(CONF_PRICE_RATING_THRESHOLD_HIGH, 10)
)
if not validate_price_rating_thresholds(low_val, high_val):
# This should never happen given the range constraints, but add error for safety
errors["base"] = "invalid_price_rating_thresholds"
if not errors:
# Store flat data directly in options (no section wrapping)
self._options.update(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="current_interval_price_rating",
data_schema=get_price_rating_schema(self.config_entry.options),
errors=errors,
description_placeholders=self._get_entity_warning_placeholders("current_interval_price_rating"),
)
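The validators called in the step above are defined elsewhere in the integration and are not shown in this diff. A hypothetical sketch of the cross-check, taking only the contract stated in the comment (LOW must be strictly below HIGH) as given:

```python
def validate_price_rating_thresholds(low: float, high: float) -> bool:
    """Cross-check: the LOW rating threshold must be strictly below HIGH."""
    return low < high

# With the defaults used as fallbacks above (-10 and 10):
print(validate_price_rating_thresholds(-10, 10))  # True
print(validate_price_rating_thresholds(10, -10))  # False
```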
async def async_step_price_level(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure Tibber price level gap tolerance (smoothing for API 'level' field)."""
errors: dict[str, str] = {}
if user_input is not None:
# No validation needed - slider constraints ensure valid range
# Store flat data directly in options
self._options.update(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="price_level",
data_schema=get_price_level_schema(self.config_entry.options),
errors=errors,
description_placeholders=self._get_entity_warning_placeholders("price_level"),
) )
async def async_step_best_price(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure best price period settings."""
errors: dict[str, str] = {}
if user_input is not None:
# Extract settings from sections
period_settings = user_input.get("period_settings", {})
flexibility_settings = user_input.get("flexibility_settings", {})
relaxation_settings = user_input.get("relaxation_and_target_periods", {})
# Validate period length
if CONF_BEST_PRICE_MIN_PERIOD_LENGTH in period_settings and not validate_period_length(
period_settings[CONF_BEST_PRICE_MIN_PERIOD_LENGTH]
):
errors[CONF_BEST_PRICE_MIN_PERIOD_LENGTH] = "invalid_period_length"
# Validate flex percentage
if CONF_BEST_PRICE_FLEX in flexibility_settings and not validate_flex_percentage(
flexibility_settings[CONF_BEST_PRICE_FLEX]
):
errors[CONF_BEST_PRICE_FLEX] = "invalid_flex"
# Validate distance from average (Best Price uses negative values)
if (
CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG in flexibility_settings
and not validate_best_price_distance_percentage(
flexibility_settings[CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG]
)
):
errors[CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG] = "invalid_best_price_distance"
# Validate minimum periods count
if CONF_MIN_PERIODS_BEST in relaxation_settings and not validate_min_periods(
relaxation_settings[CONF_MIN_PERIODS_BEST]
):
errors[CONF_MIN_PERIODS_BEST] = "invalid_min_periods"
# Validate gap count
if CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT in period_settings and not validate_gap_count(
period_settings[CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT]
):
errors[CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT] = "invalid_gap_count"
# Validate relaxation attempts
if CONF_RELAXATION_ATTEMPTS_BEST in relaxation_settings and not validate_relaxation_attempts(
relaxation_settings[CONF_RELAXATION_ATTEMPTS_BEST]
):
errors[CONF_RELAXATION_ATTEMPTS_BEST] = "invalid_relaxation_attempts"
if not errors:
# Merge section data into options
self._merge_section_data(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
overrides = self._get_active_overrides()
placeholders = self._get_entity_warning_placeholders("best_price")
placeholders.update(self._get_override_warning_placeholder("best_price", overrides))
# Load translations for override warnings
override_translations = await self._get_override_translations()
return self.async_show_form(
step_id="best_price",
data_schema=get_best_price_schema(
self.config_entry.options,
overrides=overrides,
translations=override_translations,
),
errors=errors,
description_placeholders=placeholders,
)
async def async_step_peak_price(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure peak price period settings."""
errors: dict[str, str] = {}
if user_input is not None:
# Extract settings from sections
period_settings = user_input.get("period_settings", {})
flexibility_settings = user_input.get("flexibility_settings", {})
relaxation_settings = user_input.get("relaxation_and_target_periods", {})
# Validate period length
if CONF_PEAK_PRICE_MIN_PERIOD_LENGTH in period_settings and not validate_period_length(
period_settings[CONF_PEAK_PRICE_MIN_PERIOD_LENGTH]
):
errors[CONF_PEAK_PRICE_MIN_PERIOD_LENGTH] = "invalid_period_length"
# Validate flex percentage (peak uses negative values)
if CONF_PEAK_PRICE_FLEX in flexibility_settings and not validate_flex_percentage(
flexibility_settings[CONF_PEAK_PRICE_FLEX]
):
errors[CONF_PEAK_PRICE_FLEX] = "invalid_flex"
# Validate distance from average (Peak Price uses positive values)
if CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG in flexibility_settings and not validate_distance_percentage(
flexibility_settings[CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG]
):
errors[CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG] = "invalid_peak_price_distance"
# Validate minimum periods count
if CONF_MIN_PERIODS_PEAK in relaxation_settings and not validate_min_periods(
relaxation_settings[CONF_MIN_PERIODS_PEAK]
):
errors[CONF_MIN_PERIODS_PEAK] = "invalid_min_periods"
# Validate gap count
if CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT in period_settings and not validate_gap_count(
period_settings[CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT]
):
errors[CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT] = "invalid_gap_count"
# Validate relaxation attempts
if CONF_RELAXATION_ATTEMPTS_PEAK in relaxation_settings and not validate_relaxation_attempts(
relaxation_settings[CONF_RELAXATION_ATTEMPTS_PEAK]
):
errors[CONF_RELAXATION_ATTEMPTS_PEAK] = "invalid_relaxation_attempts"
if not errors:
# Merge section data into options
self._merge_section_data(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
overrides = self._get_active_overrides()
placeholders = self._get_entity_warning_placeholders("peak_price")
placeholders.update(self._get_override_warning_placeholder("peak_price", overrides))
# Load translations for override warnings
override_translations = await self._get_override_translations()
return self.async_show_form(
step_id="peak_price",
data_schema=get_peak_price_schema(
self.config_entry.options,
overrides=overrides,
translations=override_translations,
),
errors=errors,
description_placeholders=placeholders,
)
async def async_step_price_trend(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure price trend thresholds."""
errors: dict[str, str] = {}
if user_input is not None:
# Schema is now flattened - fields come directly in user_input
# Store them flat in options (no nested structure)
# Validate rising trend threshold
if CONF_PRICE_TREND_THRESHOLD_RISING in user_input and not validate_price_trend_rising(
user_input[CONF_PRICE_TREND_THRESHOLD_RISING]
):
errors[CONF_PRICE_TREND_THRESHOLD_RISING] = "invalid_price_trend_rising"
# Validate falling trend threshold
if CONF_PRICE_TREND_THRESHOLD_FALLING in user_input and not validate_price_trend_falling(
user_input[CONF_PRICE_TREND_THRESHOLD_FALLING]
):
errors[CONF_PRICE_TREND_THRESHOLD_FALLING] = "invalid_price_trend_falling"
# Validate strongly rising trend threshold
if CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING in user_input and not validate_price_trend_strongly_rising(
user_input[CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING]
):
errors[CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING] = "invalid_price_trend_strongly_rising"
# Validate strongly falling trend threshold
if CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING in user_input and not validate_price_trend_strongly_falling(
user_input[CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING]
):
errors[CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING] = "invalid_price_trend_strongly_falling"
# Cross-validation: Ensure rising < strongly_rising and falling > strongly_falling
if not errors:
rising = user_input.get(CONF_PRICE_TREND_THRESHOLD_RISING)
strongly_rising = user_input.get(CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING)
falling = user_input.get(CONF_PRICE_TREND_THRESHOLD_FALLING)
strongly_falling = user_input.get(CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING)
if rising is not None and strongly_rising is not None and rising >= strongly_rising:
errors[CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING] = (
"invalid_trend_strongly_rising_less_than_rising"
)
if falling is not None and strongly_falling is not None and falling <= strongly_falling:
errors[CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING] = (
"invalid_trend_strongly_falling_greater_than_falling"
)
if not errors:
# Store flat data directly in options (no section wrapping)
self._options.update(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="price_trend",
data_schema=get_price_trend_schema(self.config_entry.options),
errors=errors,
description_placeholders=self._get_entity_warning_placeholders("price_trend"),
)
async def async_step_chart_data_export(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Info page for chart data export sensor."""
if user_input is not None:
# No changes to save - just return to menu
return await self.async_step_init()
# Check if the chart data export sensor is enabled
is_enabled = check_chart_data_export_enabled(self.hass, self.config_entry)
# Show info-only form with status-dependent description
return self.async_show_form(
step_id="chart_data_export",
data_schema=get_chart_data_export_schema(self.config_entry.options),
description_placeholders={
"sensor_status_info": self._get_chart_export_status_info(is_enabled=is_enabled),
},
)
def _get_chart_export_status_info(self, *, is_enabled: bool) -> str:
"""Get the status info block for chart data export sensor."""
if is_enabled:
return (
"✅ **Status: Sensor is enabled**\n\n"
"The Chart Data Export sensor is currently active and providing data as attributes.\n\n"
"**Configuration (optional):**\n\n"
"Default settings work out-of-the-box (today+tomorrow, 15-minute intervals, prices only).\n\n"
"For customization, add to **`configuration.yaml`**:\n\n"
"```yaml\n"
"tibber_prices:\n"
" chart_export:\n"
" day:\n"
" - today\n"
" - tomorrow\n"
" include_level: true\n"
" include_rating_level: true\n"
"```\n\n"
"**All parameters:** See `tibber_prices.get_chartdata` service documentation"
)
return (
"❌ **Status: Sensor is disabled**\n\n"
"**Enable the sensor:**\n\n"
"1. Open **Settings → Devices & Services → Tibber Prices**\n"
"2. Select your home → Find **'Chart Data Export'** (Diagnostic section)\n"
"3. **Enable the sensor** (disabled by default)"
)
async def async_step_volatility(self, user_input: dict[str, Any] | None = None) -> ConfigFlowResult:
"""Configure volatility thresholds and period filtering."""
errors: dict[str, str] = {}
if user_input is not None:
# Schema is now flattened - fields come directly in user_input
# Validate moderate volatility threshold
if CONF_VOLATILITY_THRESHOLD_MODERATE in user_input and not validate_volatility_threshold_moderate(
user_input[CONF_VOLATILITY_THRESHOLD_MODERATE]
):
errors[CONF_VOLATILITY_THRESHOLD_MODERATE] = "invalid_volatility_threshold_moderate"
# Validate high volatility threshold
if CONF_VOLATILITY_THRESHOLD_HIGH in user_input and not validate_volatility_threshold_high(
user_input[CONF_VOLATILITY_THRESHOLD_HIGH]
):
errors[CONF_VOLATILITY_THRESHOLD_HIGH] = "invalid_volatility_threshold_high"
# Validate very high volatility threshold
if CONF_VOLATILITY_THRESHOLD_VERY_HIGH in user_input and not validate_volatility_threshold_very_high(
user_input[CONF_VOLATILITY_THRESHOLD_VERY_HIGH]
):
errors[CONF_VOLATILITY_THRESHOLD_VERY_HIGH] = "invalid_volatility_threshold_very_high"
# Cross-validation: Ensure MODERATE < HIGH < VERY_HIGH
if not errors:
# Get current values directly from options (now flat)
moderate = user_input.get(
CONF_VOLATILITY_THRESHOLD_MODERATE,
self._options.get(CONF_VOLATILITY_THRESHOLD_MODERATE, DEFAULT_VOLATILITY_THRESHOLD_MODERATE),
)
high = user_input.get(
CONF_VOLATILITY_THRESHOLD_HIGH,
self._options.get(CONF_VOLATILITY_THRESHOLD_HIGH, DEFAULT_VOLATILITY_THRESHOLD_HIGH),
)
very_high = user_input.get(
CONF_VOLATILITY_THRESHOLD_VERY_HIGH,
self._options.get(CONF_VOLATILITY_THRESHOLD_VERY_HIGH, DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH),
)
if not validate_volatility_thresholds(moderate, high, very_high):
errors["base"] = "invalid_volatility_thresholds"
if not errors:
# Store flat data directly in options (no section wrapping)
self._options.update(user_input)
# async_create_entry automatically handles change detection and listener triggering
self._save_options_if_changed()
# Return to menu for more changes
return await self.async_step_init()
return self.async_show_form(
step_id="volatility",
data_schema=get_volatility_schema(self.config_entry.options),
errors=errors,
description_placeholders=self._get_entity_warning_placeholders("volatility"),
)
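As with the other steps, `validate_volatility_thresholds` is defined outside this diff. A hypothetical sketch based only on the stated invariant (MODERATE < HIGH < VERY_HIGH):

```python
def validate_volatility_thresholds(moderate: float, high: float, very_high: float) -> bool:
    """Cross-check: thresholds must be strictly increasing (MODERATE < HIGH < VERY_HIGH)."""
    return moderate < high < very_high

print(validate_volatility_thresholds(5, 10, 20))   # True
print(validate_volatility_thresholds(10, 10, 20))  # False - not strictly increasing
```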


@@ -1,126 +1,309 @@
"""Subentry config flow for adding additional Tibber homes.""" """Subentry config flow for creating time-travel views."""
from __future__ import annotations from __future__ import annotations
from typing import Any from typing import Any
from custom_components.tibber_prices.config_flow_handlers.schemas import ( import voluptuous as vol
get_select_home_schema,
get_subentry_init_schema,
)
from custom_components.tibber_prices.const import (
CONF_VIRTUAL_TIME_OFFSET_DAYS,
CONF_VIRTUAL_TIME_OFFSET_HOURS,
CONF_VIRTUAL_TIME_OFFSET_MINUTES,
DOMAIN,
)
from homeassistant.config_entries import ConfigSubentryFlow, SubentryFlowResult
from homeassistant.helpers.selector import (
DurationSelector,
DurationSelectorConfig,
NumberSelector,
NumberSelectorConfig,
NumberSelectorMode,
SelectOptionDict,
SelectSelector,
SelectSelectorConfig,
SelectSelectorMode,
)
class TibberPricesSubentryFlowHandler(ConfigSubentryFlow):
"""Handle subentry flows for tibber_prices (time-travel views)."""
def __init__(self) -> None:
"""Initialize the subentry flow handler."""
super().__init__()
self._selected_parent_entry_id: str | None = None
async def async_step_user(self, user_input: dict[str, Any] | None = None) -> SubentryFlowResult:
"""Step 1: Select which config entry should get a time-travel subentry."""
errors: dict[str, str] = {}
if user_input is not None:
self._selected_parent_entry_id = user_input["parent_entry_id"]
return await self.async_step_time_offset()
# Get all main config entries (not subentries)
# Subentries have "_hist_" in their unique_id
main_entries = [
entry
for entry in self.hass.config_entries.async_entries(DOMAIN)
if entry.unique_id and "_hist_" not in entry.unique_id
]
if not main_entries:
return self.async_abort(reason="no_main_entries")
# Build options for entry selection
entry_options = [
SelectOptionDict(
value=entry.entry_id,
label=f"{entry.title} ({entry.data.get('user_login', 'N/A')})",
)
for entry in main_entries
]
return self.async_show_form(
step_id="user",
data_schema=vol.Schema(
{
vol.Required("parent_entry_id"): SelectSelector(
SelectSelectorConfig(
options=entry_options,
mode=SelectSelectorMode.DROPDOWN,
)
),
}
),
description_placeholders={},
errors=errors,
)
async def async_step_time_offset(self, user_input: dict[str, Any] | None = None) -> SubentryFlowResult:
"""Step 2: Configure time offset for the time-travel view."""
errors: dict[str, str] = {}
if user_input is not None:
# Extract values (convert days to int to avoid float from slider)
offset_days = int(user_input.get(CONF_VIRTUAL_TIME_OFFSET_DAYS, 0))
# DurationSelector returns dict with 'hours', 'minutes', and 'seconds' keys
# We normalize to minute precision (ignore seconds)
time_offset = user_input.get("time_offset", {})
offset_hours = -abs(int(time_offset.get("hours", 0))) # Always negative for historical data
offset_minutes = -abs(int(time_offset.get("minutes", 0))) # Always negative for historical data
# Note: Seconds are ignored - we only support minute-level precision
# Validate that at least one offset is negative (historical data only)
if offset_days >= 0 and offset_hours >= 0 and offset_minutes >= 0:
errors["base"] = "no_time_offset"
if not errors:
# Get parent entry
if not self._selected_parent_entry_id:
return self.async_abort(reason="parent_entry_not_found")
parent_entry = self.hass.config_entries.async_get_entry(self._selected_parent_entry_id)
if not parent_entry:
return self.async_abort(reason="parent_entry_not_found")
# Get home data from parent entry
home_id = parent_entry.data.get("home_id")
home_data = parent_entry.data.get("home_data", {})
user_login = parent_entry.data.get("user_login", "N/A")
# Build unique_id with time offset signature
offset_str = f"d{offset_days}h{offset_hours}m{offset_minutes}"
user_id = parent_entry.unique_id.split("_")[0] if parent_entry.unique_id else home_id
unique_id = f"{user_id}_{home_id}_hist_{offset_str}"
# Check if this exact time offset already exists
for entry in self.hass.config_entries.async_entries(DOMAIN):
if entry.unique_id == unique_id:
return self.async_abort(reason="already_configured")
# No duplicate found - create the entry
offset_desc = self._format_offset_description(offset_days, offset_hours, offset_minutes)
subentry_title = f"{parent_entry.title} ({offset_desc})"
# Note: Subentries inherit options from parent entry automatically
# Options parameter is not supported by ConfigSubentryFlow.async_create_entry()
return self.async_create_entry(
title=subentry_title,
data={
"home_id": home_id,
"home_data": home_data,
"user_login": user_login,
CONF_VIRTUAL_TIME_OFFSET_DAYS: offset_days,
CONF_VIRTUAL_TIME_OFFSET_HOURS: offset_hours,
CONF_VIRTUAL_TIME_OFFSET_MINUTES: offset_minutes,
},
description=f"Time-travel view: {offset_desc}",
description_placeholders={"offset": offset_desc},
unique_id=unique_id,
)
return self.async_show_form(
step_id="time_offset",
data_schema=vol.Schema(
{
vol.Required(CONF_VIRTUAL_TIME_OFFSET_DAYS, default=0): NumberSelector(
NumberSelectorConfig(
mode=NumberSelectorMode.SLIDER,
min=-374,
max=0,
step=1,
)
),
vol.Optional("time_offset", default={"hours": 0, "minutes": 0}): DurationSelector(
DurationSelectorConfig(
allow_negative=False, # We handle sign automatically
enable_day=False, # Days are handled by the slider above
)
),
}
),
description_placeholders={},
errors=errors,
)
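The sign handling and the unique_id offset signature used in this step can be sketched standalone (function names here are illustrative, not part of the integration's API):

```python
def normalize_time_offset(time_offset: dict) -> tuple[int, int]:
    """Drop seconds from a DurationSelector dict and force the sign negative."""
    hours = -abs(int(time_offset.get("hours", 0)))
    minutes = -abs(int(time_offset.get("minutes", 0)))
    return hours, minutes


def offset_signature(days: int, hours: int, minutes: int) -> str:
    """Build the unique_id suffix, e.g. d-7h-2m-30."""
    return f"d{days}h{hours}m{minutes}"


hours, minutes = normalize_time_offset({"hours": 2, "minutes": 30, "seconds": 15})
print(offset_signature(-7, hours, minutes))  # d-7h-2m-30
```

Forcing the sign with `-abs(...)` is what makes the "historical data only" rule hold regardless of what the selector returns.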
def _format_offset_description(self, days: int, hours: int, minutes: int) -> str:
"""
Format time offset into human-readable description.
Examples:
-7, 0, 0 -> "7 days ago" (English) / "vor 7 Tagen" (German)
0, -2, 0 -> "2 hours ago" (English) / "vor 2 Stunden" (German)
-7, -2, -30 -> "7 days - 02:30" (compact format when time is added)
"""
# Get translations from custom_translations (loaded via async_load_translations)
translations_key = f"{DOMAIN}_translations_{self.hass.config.language}"
translations = self.hass.data.get(translations_key, {})
time_units = translations.get("time_units", {})
# Fallback to English if translations not available
if not time_units:
time_units = {
"day": "{count} day",
"days": "{count} days",
"hour": "{count} hour",
"hours": "{count} hours",
"minute": "{count} minute",
"minutes": "{count} minutes",
"ago": "{parts} ago",
"now": "now",
}
# Check if we have hours or minutes (need compact format)
has_time = hours != 0 or minutes != 0
if days != 0 and has_time:
# Compact format: "7 days - 02:30"
count = abs(days)
unit_key = "days" if count != 1 else "day"
day_part = time_units[unit_key].format(count=count)
time_part = f"{abs(hours):02d}:{abs(minutes):02d}"
return f"{day_part} - {time_part}"
# Standard format: separate parts with spaces
parts = []
if days != 0:
count = abs(days)
unit_key = "days" if count != 1 else "day"
parts.append(time_units[unit_key].format(count=count))
if hours != 0:
count = abs(hours)
unit_key = "hours" if count != 1 else "hour"
parts.append(time_units[unit_key].format(count=count))
if minutes != 0:
count = abs(minutes)
unit_key = "minutes" if count != 1 else "minute"
parts.append(time_units[unit_key].format(count=count))
if not parts:
return time_units.get("now", "now")
# All offsets should be negative (historical data only)
# Join parts with space and apply "ago" template
return time_units["ago"].format(parts=" ".join(parts))
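With the English fallback strings above, the formatting rules reduce to a small standalone function (a sketch of the logic, not the integration's translated implementation):

```python
TIME_UNITS = {
    "day": "{count} day", "days": "{count} days",
    "hour": "{count} hour", "hours": "{count} hours",
    "minute": "{count} minute", "minutes": "{count} minutes",
    "ago": "{parts} ago", "now": "now",
}


def format_offset(days: int, hours: int, minutes: int) -> str:
    # Compact format when a day offset is combined with a time offset
    if days != 0 and (hours != 0 or minutes != 0):
        day_part = TIME_UNITS["days" if abs(days) != 1 else "day"].format(count=abs(days))
        return f"{day_part} - {abs(hours):02d}:{abs(minutes):02d}"
    parts = []
    for value, one, many in ((days, "day", "days"), (hours, "hour", "hours"), (minutes, "minute", "minutes")):
        if value != 0:
            parts.append(TIME_UNITS[many if abs(value) != 1 else one].format(count=abs(value)))
    if not parts:
        return TIME_UNITS["now"]
    return TIME_UNITS["ago"].format(parts=" ".join(parts))


print(format_offset(-7, -2, -30))  # 7 days - 02:30
```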
async def async_step_init(self, user_input: dict | None = None) -> SubentryFlowResult:
-"""Manage the options for a subentry."""
+"""Manage the options for an existing subentry (time-travel settings)."""
subentry = self._get_reconfigure_subentry()
errors: dict[str, str] = {}
if user_input is not None:
-return self.async_update_and_abort(
-self._get_entry(),
-subentry,
-data_updates=user_input,
-)
-extended_descriptions = subentry.data.get(CONF_EXTENDED_DESCRIPTIONS, DEFAULT_EXTENDED_DESCRIPTIONS)
+# Extract values (convert days to int to avoid float from slider)
+offset_days = int(user_input.get(CONF_VIRTUAL_TIME_OFFSET_DAYS, 0))
+# DurationSelector returns dict with 'hours', 'minutes', and 'seconds' keys
# We normalize to minute precision (ignore seconds)
time_offset = user_input.get("time_offset", {})
offset_hours = -abs(int(time_offset.get("hours", 0))) # Always negative for historical data
offset_minutes = -abs(int(time_offset.get("minutes", 0))) # Always negative for historical data
# Note: Seconds are ignored - we only support minute-level precision
# Validate that at least one offset is negative (historical data only)
if offset_days >= 0 and offset_hours >= 0 and offset_minutes >= 0:
errors["base"] = "no_time_offset"
else:
# Get parent entry to extract home_id and user_id
parent_entry = self._get_entry()
home_id = parent_entry.data.get("home_id")
# Build new unique_id with updated offset signature
offset_str = f"d{offset_days}h{offset_hours}m{offset_minutes}"
user_id = parent_entry.unique_id.split("_")[0] if parent_entry.unique_id else home_id
new_unique_id = f"{user_id}_{home_id}_hist_{offset_str}"
# Generate new title with updated offset description
offset_desc = self._format_offset_description(offset_days, offset_hours, offset_minutes)
# Extract parent title (remove old offset description in parentheses)
parent_title = parent_entry.title.split(" (")[0] if " (" in parent_entry.title else parent_entry.title
new_title = f"{parent_title} ({offset_desc})"
return self.async_update_and_abort(
parent_entry,
subentry,
unique_id=new_unique_id,
title=new_title,
data_updates=user_input,
)
offset_days = subentry.data.get(CONF_VIRTUAL_TIME_OFFSET_DAYS, 0)
offset_hours = subentry.data.get(CONF_VIRTUAL_TIME_OFFSET_HOURS, 0)
offset_minutes = subentry.data.get(CONF_VIRTUAL_TIME_OFFSET_MINUTES, 0)
# Prepare time offset dict for DurationSelector (always positive, we negate on save)
time_offset_dict = {"hours": 0, "minutes": 0} # Default to zeros
if offset_hours != 0:
time_offset_dict["hours"] = abs(offset_hours)
if offset_minutes != 0:
time_offset_dict["minutes"] = abs(offset_minutes)
return self.async_show_form(
step_id="init",
-data_schema=get_subentry_init_schema(extended_descriptions=extended_descriptions),
+data_schema=vol.Schema(
+{
+vol.Required(CONF_VIRTUAL_TIME_OFFSET_DAYS, default=offset_days): NumberSelector(
+NumberSelectorConfig(
+mode=NumberSelectorMode.SLIDER,
+min=-374,
+max=0,
+step=1,
+)
+),
+vol.Optional("time_offset", default=time_offset_dict): DurationSelector(
+DurationSelectorConfig(
+allow_negative=False,  # We handle sign automatically
+enable_day=False,  # Days are handled by the slider above
+)
+),
+}
+),
errors=errors,
)


@@ -2,8 +2,11 @@
from __future__ import annotations
+from datetime import datetime
from typing import TYPE_CHECKING, Any
+import voluptuous as vol
from custom_components.tibber_prices.config_flow_handlers.options_flow import (
TibberPricesOptionsFlowHandler,
)
@@ -12,15 +15,17 @@ from custom_components.tibber_prices.config_flow_handlers.schemas import (
get_select_home_schema,
get_user_schema,
)
-from custom_components.tibber_prices.config_flow_handlers.subentry_flow import (
-TibberPricesSubentryFlowHandler,
-)
from custom_components.tibber_prices.config_flow_handlers.validators import (
TibberPricesCannotConnectError,
TibberPricesInvalidAuthError,
validate_api_token,
)
-from custom_components.tibber_prices.const import DOMAIN, LOGGER
+from custom_components.tibber_prices.const import (
+DOMAIN,
+LOGGER,
+get_default_options,
+get_translation,
+)
from homeassistant.config_entries import (
ConfigEntry,
ConfigFlow,
@@ -29,13 +34,18 @@ from homeassistant.config_entries import (
)
from homeassistant.const import CONF_ACCESS_TOKEN
from homeassistant.core import callback
-from homeassistant.helpers.selector import SelectOptionDict
+from homeassistant.helpers.selector import (
+SelectOptionDict,
+SelectSelector,
+SelectSelectorConfig,
+SelectSelectorMode,
+)
if TYPE_CHECKING:
from homeassistant.config_entries import ConfigSubentryFlow
-class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
+class TibberPricesConfigFlowHandler(ConfigFlow, domain=DOMAIN):
"""Config flow for tibber_prices."""
VERSION = 1
@@ -58,7 +68,12 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
config_entry: ConfigEntry,  # noqa: ARG003
) -> dict[str, type[ConfigSubentryFlow]]:
"""Return subentries supported by this integration."""
-return {"home": TibberPricesSubentryFlowHandler}
+# Temporarily disabled: Time-travel feature not yet fully implemented
+# When enabled, this causes "Devices that don't belong to a sub-entry" warning
+# because subentries don't have their own entities yet.
+# See: https://github.com/home-assistant/core/issues/147570
+# Will be re-enabled when time-travel functionality is implemented
+return {}
@staticmethod
@callback
@@ -126,13 +141,117 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
step_id="reauth_confirm",
data_schema=get_reauth_confirm_schema(),
errors=_errors,
+description_placeholders={"tibber_url": "https://developer.tibber.com"},
)
async def async_step_user(
self,
user_input: dict | None = None,
) -> ConfigFlowResult:
-"""Handle a flow initialized by the user. Only ask for access token."""
+"""Handle a flow initialized by the user. Choose account or enter new token."""
# Get existing accounts
existing_entries = self.hass.config_entries.async_entries(DOMAIN)
# If there are existing accounts, offer choice
if existing_entries and user_input is None:
return await self.async_step_account_choice()
# Otherwise, go directly to token input
return await self.async_step_new_token(user_input)
async def async_step_account_choice(
self,
user_input: dict | None = None,
) -> ConfigFlowResult:
"""Let user choose between existing account or new token."""
if user_input is not None:
choice = user_input["account_choice"]
if choice == "new_token":
return await self.async_step_new_token()
# User selected an existing account - copy its token
selected_entry_id = choice
selected_entry = next(
(
entry
for entry in self.hass.config_entries.async_entries(DOMAIN)
if entry.entry_id == selected_entry_id
),
None,
)
if not selected_entry:
return self.async_abort(reason="unknown")
# Copy token from selected entry and proceed
access_token = selected_entry.data.get(CONF_ACCESS_TOKEN)
if not access_token:
return self.async_abort(reason="unknown")
return await self.async_step_new_token({CONF_ACCESS_TOKEN: access_token})
# Build options: unique user accounts (grouped by user_id) + "New Token" option
existing_entries = self.hass.config_entries.async_entries(DOMAIN)
# Group entries by user_id to show unique accounts
# Minimum parts in unique_id format: user_id_home_id
min_unique_id_parts = 2
seen_users = {}
for entry in existing_entries:
# Extract user_id from unique_id (format: user_id_home_id or user_id_home_id_sub/hist_...)
unique_id = entry.unique_id
if unique_id:
# Split by underscore and take first part as user_id
parts = unique_id.split("_")
if len(parts) >= min_unique_id_parts:
user_id = parts[0]
if user_id not in seen_users:
seen_users[user_id] = entry
# Build dropdown options from unique user accounts
account_options = [
SelectOptionDict(
value=entry.entry_id,
label=f"{entry.title} ({entry.data.get('user_login', 'N/A')})",
)
for entry in seen_users.values()
]
# Add "new_token" option with translated label
new_token_label = (
get_translation(
["selector", "account_choice", "options", "new_token"],
self.hass.config.language,
)
or "Add new Tibber account API token"
)
account_options.append(
SelectOptionDict(
value="new_token",
label=new_token_label,
)
)
return self.async_show_form(
step_id="account_choice",
data_schema=vol.Schema(
{
vol.Required("account_choice"): SelectSelector(
SelectSelectorConfig(
options=account_options,
mode=SelectSelectorMode.DROPDOWN,
)
),
}
),
)
async def async_step_new_token(
self,
user_input: dict | None = None,
) -> ConfigFlowResult:
"""Handle token input (new or copied from existing account)."""
_errors = {}
if user_input is not None:
try:
@@ -159,9 +278,6 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
LOGGER.debug("Viewer data received: %s", viewer)
-await self.async_set_unique_id(unique_id=str(user_id))
-self._abort_if_unique_id_configured()
# Store viewer data in the flow for use in the next step
self._viewer = viewer
self._access_token = user_input[CONF_ACCESS_TOKEN]
@@ -173,25 +289,95 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
return await self.async_step_select_home()
return self.async_show_form(
-step_id="user",
+step_id="new_token",
data_schema=get_user_schema((user_input or {}).get(CONF_ACCESS_TOKEN)),
errors=_errors,
+description_placeholders={"tibber_url": "https://developer.tibber.com"},
)
-async def async_step_select_home(self, user_input: dict | None = None) -> ConfigFlowResult:
+async def async_step_select_home(self, user_input: dict | None = None) -> ConfigFlowResult:  # noqa: PLR0911
"""Handle home selection during initial setup."""
homes = self._viewer.get("homes", []) if self._viewer else []
if not homes:
return self.async_abort(reason="unknown")
# Filter out already configured homes
configured_home_ids = {
entry.data.get("home_id")
for entry in self.hass.config_entries.async_entries(DOMAIN)
if entry.data.get("home_id")
}
available_homes = [home for home in homes if home["id"] not in configured_home_ids]
# If no homes available, abort
if not available_homes:
return self.async_abort(reason="already_configured")
if user_input is not None:
selected_home_id = user_input["home_id"]
-selected_home = next((home for home in homes if home["id"] == selected_home_id), None)
+selected_home = next((home for home in available_homes if home["id"] == selected_home_id), None)
if not selected_home:
return self.async_abort(reason="unknown")
# Validate that home has an active or future subscription
subscription_status = self._get_subscription_status(selected_home)
if subscription_status == "none":
return self.async_show_form(
step_id="select_home",
data_schema=get_select_home_schema(
[
SelectOptionDict(
value=home["id"],
label=self._get_home_title_with_status(home),
)
for home in available_homes
]
),
errors={"home_id": "no_active_subscription"},
)
if subscription_status == "expired":
return self.async_show_form(
step_id="select_home",
data_schema=get_select_home_schema(
[
SelectOptionDict(
value=home["id"],
label=self._get_home_title_with_status(home),
)
for home in available_homes
]
),
errors={"home_id": "subscription_expired"},
)
# Set unique_id to user_id + home_id combination
# This allows multiple homes per user account (single-home architecture)
unique_id = f"{self._user_id}_{selected_home_id}"
await self.async_set_unique_id(unique_id)
self._abort_if_unique_id_configured()
# Note: This check is now redundant since we filter available_homes upfront,
# but kept as defensive programming in case of race conditions
for entry in self.hass.config_entries.async_entries(DOMAIN):
if entry.data.get("home_id") == selected_home_id:
return self.async_show_form(
step_id="select_home",
data_schema=get_select_home_schema(
[
SelectOptionDict(
value=home["id"],
label=self._get_home_title(home),
)
for home in available_homes
]
),
errors={"home_id": "home_already_configured"},
)
data = {
CONF_ACCESS_TOKEN: self._access_token or "",
"home_id": selected_home_id,
@@ -200,18 +386,32 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
"user_login": self._user_login or "N/A",
}
# Extract currency from home data for intelligent defaults
currency_code = None
if (
selected_home
and (subscription := selected_home.get("currentSubscription"))
and (price_info := subscription.get("priceInfo"))
and (current_price := price_info.get("current"))
):
currency_code = current_price.get("currency")
# Generate entry title from home address (not appNickname)
entry_title = self._get_entry_title(selected_home)
return self.async_create_entry(
-title=self._user_name or "Unknown User",
+title=entry_title,
data=data,
description=f"{self._user_login} ({self._user_id})",
+options=get_default_options(currency_code),
)
home_options = [
SelectOptionDict(
value=home["id"],
-label=self._get_home_title(home),
+label=self._get_home_title_with_status(home),
)
-for home in homes
+for home in available_homes
]
return self.async_show_form(
@@ -234,9 +434,138 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
return home_ids
@staticmethod
def _get_subscription_status(home: dict) -> str:
"""
Check subscription status of home.
Returns:
- "active": Subscription is currently active
- "future": Subscription exists but starts in the future (validFrom > now)
- "expired": Subscription exists but has ended (validTo < now)
- "none": No subscription exists
"""
subscription = home.get("currentSubscription")
if subscription is None or subscription.get("status") is None:
return "none"
# Check validTo (contract end date)
valid_to = subscription.get("validTo")
if valid_to:
try:
valid_to_dt = datetime.fromisoformat(valid_to)
if valid_to_dt < datetime.now(valid_to_dt.tzinfo):
return "expired"
except (ValueError, AttributeError):
pass # If parsing fails, continue with other checks
# Check validFrom (contract start date)
valid_from = subscription.get("validFrom")
if valid_from:
try:
valid_from_dt = datetime.fromisoformat(valid_from)
if valid_from_dt > datetime.now(valid_from_dt.tzinfo):
return "future"
except (ValueError, AttributeError):
pass # If parsing fails, assume active
return "active"
def _get_home_title_with_status(self, home: dict) -> str:
"""Generate a user-friendly title for a home with subscription status."""
base_title = self._get_home_title(home)
status = self._get_subscription_status(home)
if status == "none":
return f"{base_title} ⚠️ (No active contract)"
if status == "expired":
return f"{base_title} ⚠️ (Contract expired)"
if status == "future":
return f"{base_title} ⚠️ (Contract starts soon)"
return base_title
@staticmethod
def _format_city_name(city: str) -> str:
"""
Format city name to title case.
Converts 'MÜNCHEN' to 'München', handles multi-word cities like
'BAD TÖLZ' -> 'Bad Tölz', and hyphenated cities like
'GARMISCH-PARTENKIRCHEN' -> 'Garmisch-Partenkirchen'.
"""
if not city:
return city
# Split by space and hyphen while preserving delimiters
words = []
current_word = ""
for char in city:
if char in (" ", "-"):
if current_word:
words.append(current_word)
words.append(char) # Preserve delimiter
current_word = ""
else:
current_word += char
if current_word: # Add last word
words.append(current_word)
# Capitalize first letter of each word (not delimiters)
formatted_words = []
for word in words:
if word in (" ", "-"):
formatted_words.append(word)
else:
# Capitalize first letter, lowercase rest
formatted_words.append(word.capitalize())
return "".join(formatted_words)
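The character-walk above is equivalent to splitting on the delimiters with a capturing regex (a compact sketch, not the integration's code):

```python
import re


def format_city(city: str) -> str:
    """Capitalize each word while preserving space and hyphen delimiters."""
    if not city:
        return city
    # re.split with a capture group keeps the delimiters in the result list
    return "".join(
        part if part in (" ", "-") else part.capitalize()
        for part in re.split(r"([ -])", city)
    )


print(format_city("GARMISCH-PARTENKIRCHEN"))  # Garmisch-Partenkirchen
```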
@staticmethod
def _get_entry_title(home: dict) -> str:
"""
Generate entry title from address (for config entry title).
Uses 'address1, City' format, e.g. 'Pählstraße 6B, München'.
Does NOT use appNickname (that's for _get_home_title).
"""
address = home.get("address", {})
if not address:
# Fallback to home ID if no address
return home.get("id", "Unknown Home")
parts = []
# Always prefer address1
address1 = address.get("address1")
if address1 and address1.strip():
parts.append(address1.strip())
# Format city name (convert MÜNCHEN -> München)
city = address.get("city")
if city and city.strip():
formatted_city = TibberPricesConfigFlowHandler._format_city_name(city.strip())
parts.append(formatted_city)
if parts:
return ", ".join(parts)
# Final fallback
return home.get("id", "Unknown Home")
@staticmethod
def _get_home_title(home: dict) -> str:
-"""Generate a user-friendly title for a home."""
+"""
+Generate a user-friendly title for a home (for dropdown display).
+Prefers appNickname, falls back to address.
+"""
title = home.get("appNickname")
if title and title.strip():
return title.strip()
@@ -247,7 +576,10 @@ class TibberPricesFlowHandler(ConfigFlow, domain=DOMAIN):
if address.get("address1"):
parts.append(address["address1"])
if address.get("city"):
-parts.append(address["city"])
+# Format city for display too
+city = address["city"]
+formatted_city = TibberPricesConfigFlowHandler._format_city_name(city)
+parts.append(formatted_city)
if parts:
return ", ".join(parts)


@@ -10,7 +10,35 @@ from custom_components.tibber_prices.api import (
TibberPricesApiClientCommunicationError,
TibberPricesApiClientError,
)
-from custom_components.tibber_prices.const import DOMAIN
+from custom_components.tibber_prices.const import (
DOMAIN,
MAX_DISTANCE_PERCENTAGE,
MAX_FLEX_PERCENTAGE,
MAX_GAP_COUNT,
MAX_MIN_PERIODS,
MAX_PRICE_RATING_THRESHOLD_HIGH,
MAX_PRICE_RATING_THRESHOLD_LOW,
MAX_PRICE_TREND_FALLING,
MAX_PRICE_TREND_RISING,
MAX_PRICE_TREND_STRONGLY_FALLING,
MAX_PRICE_TREND_STRONGLY_RISING,
MAX_RELAXATION_ATTEMPTS,
MAX_VOLATILITY_THRESHOLD_HIGH,
MAX_VOLATILITY_THRESHOLD_MODERATE,
MAX_VOLATILITY_THRESHOLD_VERY_HIGH,
MIN_GAP_COUNT,
MIN_PERIOD_LENGTH,
MIN_PRICE_RATING_THRESHOLD_HIGH,
MIN_PRICE_RATING_THRESHOLD_LOW,
MIN_PRICE_TREND_FALLING,
MIN_PRICE_TREND_RISING,
MIN_PRICE_TREND_STRONGLY_FALLING,
MIN_PRICE_TREND_STRONGLY_RISING,
MIN_RELAXATION_ATTEMPTS,
MIN_VOLATILITY_THRESHOLD_HIGH,
MIN_VOLATILITY_THRESHOLD_MODERATE,
MIN_VOLATILITY_THRESHOLD_VERY_HIGH,
)
from homeassistant.exceptions import HomeAssistantError
from homeassistant.helpers.aiohttp_client import async_create_clientsession
from homeassistant.loader import async_get_integration
@@ -18,10 +46,6 @@ from homeassistant.loader import async_get_integration
if TYPE_CHECKING:
from homeassistant.core import HomeAssistant
-# Constants for validation
-MAX_FLEX_PERCENTAGE = 100.0
-MAX_MIN_PERIODS = 10  # Arbitrary upper limit for sanity
class TibberPricesInvalidAuthError(HomeAssistantError):
"""Error to indicate invalid authentication."""
@@ -64,34 +88,18 @@ async def validate_api_token(hass: HomeAssistant, token: str) -> dict:
raise TibberPricesCannotConnectError from exception
-def validate_threshold_range(value: float, min_val: float, max_val: float) -> bool:
-"""
-Validate threshold is within allowed range.
-Args:
-value: Value to validate
-min_val: Minimum allowed value
-max_val: Maximum allowed value
-Returns:
-True if value is within range
-"""
-return min_val <= value <= max_val
def validate_period_length(minutes: int) -> bool:
"""
-Validate period length is multiple of 15 minutes.
+Validate period length is a positive multiple of 15 minutes.
Args:
minutes: Period length in minutes
Returns:
-True if length is valid
+True if length is valid (multiple of 15, at least MIN_PERIOD_LENGTH)
"""
-return minutes > 0 and minutes % 15 == 0
+return minutes % 15 == 0 and minutes >= MIN_PERIOD_LENGTH
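The tightened check matters at the boundary: `0 % 15 == 0`, so without a lower bound a zero-length period would pass the modulo test alone. A sketch with MIN_PERIOD_LENGTH assumed to be 15:

```python
MIN_PERIOD_LENGTH = 15  # assumed value; the real constant lives in const.py


def validate_period_length(minutes: int) -> bool:
    """Accept only positive multiples of a quarter hour."""
    return minutes % 15 == 0 and minutes >= MIN_PERIOD_LENGTH


print([m for m in (0, 15, 20, 45) if validate_period_length(m)])  # [15, 45]
```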
def validate_flex_percentage(flex: float) -> bool:
@@ -99,13 +107,13 @@ def validate_flex_percentage(flex: float) -> bool:
Validate flexibility percentage is within bounds.
Args:
-flex: Flexibility percentage
+flex: Flexibility percentage (can be negative for peak price)
Returns:
-True if percentage is valid
+True if percentage is valid (-MAX_FLEX to +MAX_FLEX)
"""
-return 0.0 <= flex <= MAX_FLEX_PERCENTAGE
+return -MAX_FLEX_PERCENTAGE <= flex <= MAX_FLEX_PERCENTAGE
def validate_min_periods(count: int) -> bool:
@@ -113,10 +121,251 @@ def validate_min_periods(count: int) -> bool:
Validate minimum periods count is reasonable.
Args:
-count: Number of minimum periods
+count: Number of minimum periods per day
Returns:
-True if count is valid
+True if count is valid (1 to MAX_MIN_PERIODS)
"""
-return count > 0 and count <= MAX_MIN_PERIODS
+return 1 <= count <= MAX_MIN_PERIODS
def validate_distance_percentage(distance: float) -> bool:
"""
Validate distance from average percentage (for Peak Price - positive values).
Args:
distance: Distance percentage (0-50% is typical range)
Returns:
True if distance is valid (0-MAX_DISTANCE_PERCENTAGE)
"""
return 0.0 <= distance <= MAX_DISTANCE_PERCENTAGE
def validate_best_price_distance_percentage(distance: float) -> bool:
"""
Validate distance from average percentage (for Best Price - negative values).
Args:
distance: Distance percentage (-50% to 0% range, negative = below average)
Returns:
True if distance is valid (-MAX_DISTANCE_PERCENTAGE to 0)
"""
return -MAX_DISTANCE_PERCENTAGE <= distance <= 0.0
def validate_gap_count(count: int) -> bool:
"""
Validate gap count is within bounds.
Args:
count: Gap count (0-8)
Returns:
True if count is valid (MIN_GAP_COUNT to MAX_GAP_COUNT)
"""
return MIN_GAP_COUNT <= count <= MAX_GAP_COUNT
def validate_relaxation_attempts(attempts: int) -> bool:
"""
Validate relaxation attempts count is within bounds.
Args:
attempts: Number of relaxation attempts (1-12)
Returns:
True if attempts is valid (MIN_RELAXATION_ATTEMPTS to MAX_RELAXATION_ATTEMPTS)
"""
return MIN_RELAXATION_ATTEMPTS <= attempts <= MAX_RELAXATION_ATTEMPTS
def validate_price_rating_threshold_low(threshold: int) -> bool:
"""
Validate low price rating threshold.
Args:
threshold: Low rating threshold percentage (-50 to -5)
Returns:
True if threshold is valid (MIN_PRICE_RATING_THRESHOLD_LOW to MAX_PRICE_RATING_THRESHOLD_LOW)
"""
return MIN_PRICE_RATING_THRESHOLD_LOW <= threshold <= MAX_PRICE_RATING_THRESHOLD_LOW
def validate_price_rating_threshold_high(threshold: int) -> bool:
"""
Validate high price rating threshold.
Args:
threshold: High rating threshold percentage (5 to 50)
Returns:
True if threshold is valid (MIN_PRICE_RATING_THRESHOLD_HIGH to MAX_PRICE_RATING_THRESHOLD_HIGH)
"""
return MIN_PRICE_RATING_THRESHOLD_HIGH <= threshold <= MAX_PRICE_RATING_THRESHOLD_HIGH
def validate_price_rating_thresholds(threshold_low: int, threshold_high: int) -> bool:
"""
Cross-validate both price rating thresholds together.
Ensures that LOW threshold < HIGH threshold with proper gap to avoid
overlap at 0%. LOW should be negative (below average), HIGH should be
positive (above average).
Args:
threshold_low: Low rating threshold percentage (-50 to -5)
threshold_high: High rating threshold percentage (5 to 50)
Returns:
True if both thresholds are valid individually AND threshold_low < threshold_high
"""
# Validate individual ranges first
if not validate_price_rating_threshold_low(threshold_low):
return False
if not validate_price_rating_threshold_high(threshold_high):
return False
# Ensure LOW is always less than HIGH (should always be true given the ranges,
# but explicit check for safety)
return threshold_low < threshold_high
def validate_volatility_threshold_moderate(threshold: float) -> bool:
"""
Validate moderate volatility threshold.
Args:
threshold: Moderate volatility threshold percentage (5.0 to 25.0)
Returns:
True if threshold is valid (MIN_VOLATILITY_THRESHOLD_MODERATE to MAX_VOLATILITY_THRESHOLD_MODERATE)
"""
return MIN_VOLATILITY_THRESHOLD_MODERATE <= threshold <= MAX_VOLATILITY_THRESHOLD_MODERATE
def validate_volatility_threshold_high(threshold: float) -> bool:
"""
Validate high volatility threshold.
Args:
threshold: High volatility threshold percentage (20.0 to 40.0)
Returns:
True if threshold is valid (MIN_VOLATILITY_THRESHOLD_HIGH to MAX_VOLATILITY_THRESHOLD_HIGH)
"""
return MIN_VOLATILITY_THRESHOLD_HIGH <= threshold <= MAX_VOLATILITY_THRESHOLD_HIGH
def validate_volatility_threshold_very_high(threshold: float) -> bool:
"""
Validate very high volatility threshold.
Args:
threshold: Very high volatility threshold percentage (35.0 to 80.0)
Returns:
True if threshold is valid (MIN_VOLATILITY_THRESHOLD_VERY_HIGH to MAX_VOLATILITY_THRESHOLD_VERY_HIGH)
"""
return MIN_VOLATILITY_THRESHOLD_VERY_HIGH <= threshold <= MAX_VOLATILITY_THRESHOLD_VERY_HIGH
def validate_volatility_thresholds(
threshold_moderate: float,
threshold_high: float,
threshold_very_high: float,
) -> bool:
"""
Cross-validate all three volatility thresholds together.
Ensures that MODERATE < HIGH < VERY_HIGH to maintain logical classification
boundaries. Each threshold represents an escalating level of price volatility.
Args:
threshold_moderate: Moderate volatility threshold (5.0 to 25.0)
threshold_high: High volatility threshold (20.0 to 40.0)
threshold_very_high: Very high volatility threshold (35.0 to 80.0)
Returns:
True if all thresholds are valid individually AND maintain proper ordering
"""
# Validate individual ranges first
if not validate_volatility_threshold_moderate(threshold_moderate):
return False
if not validate_volatility_threshold_high(threshold_high):
return False
if not validate_volatility_threshold_very_high(threshold_very_high):
return False
# Ensure cascading order: MODERATE < HIGH < VERY_HIGH
return threshold_moderate < threshold_high < threshold_very_high
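The cascading check can be exercised with a standalone sketch. The MIN/MAX constants are inlined here with the ranges documented above; the function mirrors the module's validator but is self-contained:

```python
# Standalone sketch of the cascading threshold validation above.
# MIN/MAX bounds are inlined with the documented ranges.
MIN_MODERATE, MAX_MODERATE = 5.0, 25.0
MIN_HIGH, MAX_HIGH = 20.0, 40.0
MIN_VERY_HIGH, MAX_VERY_HIGH = 35.0, 80.0

def validate_volatility_thresholds(moderate: float, high: float, very_high: float) -> bool:
    """All three must be in range AND strictly ordered MODERATE < HIGH < VERY_HIGH."""
    if not (MIN_MODERATE <= moderate <= MAX_MODERATE):
        return False
    if not (MIN_HIGH <= high <= MAX_HIGH):
        return False
    if not (MIN_VERY_HIGH <= very_high <= MAX_VERY_HIGH):
        return False
    return moderate < high < very_high

print(validate_volatility_thresholds(15.0, 30.0, 50.0))  # True
print(validate_volatility_thresholds(24.0, 22.0, 50.0))  # False: HIGH not above MODERATE
print(validate_volatility_thresholds(4.0, 30.0, 50.0))   # False: MODERATE below its minimum
```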
def validate_price_trend_rising(threshold: int) -> bool:
"""
Validate rising price trend threshold.
Args:
threshold: Rising trend threshold percentage (1 to 50)
Returns:
True if threshold is valid (MIN_PRICE_TREND_RISING to MAX_PRICE_TREND_RISING)
"""
return MIN_PRICE_TREND_RISING <= threshold <= MAX_PRICE_TREND_RISING
def validate_price_trend_falling(threshold: int) -> bool:
"""
Validate falling price trend threshold.
Args:
threshold: Falling trend threshold percentage (-50 to -1)
Returns:
True if threshold is valid (MIN_PRICE_TREND_FALLING to MAX_PRICE_TREND_FALLING)
"""
return MIN_PRICE_TREND_FALLING <= threshold <= MAX_PRICE_TREND_FALLING
def validate_price_trend_strongly_rising(threshold: int) -> bool:
"""
Validate strongly rising price trend threshold.
Args:
threshold: Strongly rising trend threshold percentage (2 to 100)
Returns:
True if threshold is valid (MIN_PRICE_TREND_STRONGLY_RISING to MAX_PRICE_TREND_STRONGLY_RISING)
"""
return MIN_PRICE_TREND_STRONGLY_RISING <= threshold <= MAX_PRICE_TREND_STRONGLY_RISING
def validate_price_trend_strongly_falling(threshold: int) -> bool:
"""
Validate strongly falling price trend threshold.
Args:
threshold: Strongly falling trend threshold percentage (-100 to -2)
Returns:
True if threshold is valid (MIN_PRICE_TREND_STRONGLY_FALLING to MAX_PRICE_TREND_STRONGLY_FALLING)
"""
return MIN_PRICE_TREND_STRONGLY_FALLING <= threshold <= MAX_PRICE_TREND_STRONGLY_FALLING


@ -1,10 +1,11 @@
"""Constants for the Tibber Price Analytics integration."""
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import TYPE_CHECKING, Any
import aiofiles
@ -14,16 +15,27 @@ from homeassistant.const import (
UnitOfPower,
UnitOfTime,
)
if TYPE_CHECKING:
from collections.abc import Sequence
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
DOMAIN = "tibber_prices"
LOGGER = logging.getLogger(__package__)
# Data storage keys
DATA_CHART_CONFIG = "chart_config"  # Key for chart export config in hass.data
DATA_CHART_METADATA_CONFIG = "chart_metadata_config"  # Key for chart metadata config in hass.data
# Configuration keys
CONF_EXTENDED_DESCRIPTIONS = "extended_descriptions"
CONF_VIRTUAL_TIME_OFFSET_DAYS = (
"virtual_time_offset_days"  # Time-travel: days offset (negative only, e.g., -7 = 7 days ago)
)
CONF_VIRTUAL_TIME_OFFSET_HOURS = "virtual_time_offset_hours"  # Time-travel: hours offset (-23 to +23)
CONF_VIRTUAL_TIME_OFFSET_MINUTES = "virtual_time_offset_minutes"  # Time-travel: minutes offset (-59 to +59)
CONF_BEST_PRICE_FLEX = "best_price_flex"
CONF_PEAK_PRICE_FLEX = "peak_price_flex"
CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG = "best_price_min_distance_from_avg"
@ -32,8 +44,14 @@ CONF_BEST_PRICE_MIN_PERIOD_LENGTH = "best_price_min_period_length"
CONF_PEAK_PRICE_MIN_PERIOD_LENGTH = "peak_price_min_period_length"
CONF_PRICE_RATING_THRESHOLD_LOW = "price_rating_threshold_low"
CONF_PRICE_RATING_THRESHOLD_HIGH = "price_rating_threshold_high"
CONF_PRICE_RATING_HYSTERESIS = "price_rating_hysteresis"
CONF_PRICE_RATING_GAP_TOLERANCE = "price_rating_gap_tolerance"
CONF_PRICE_LEVEL_GAP_TOLERANCE = "price_level_gap_tolerance"
CONF_AVERAGE_SENSOR_DISPLAY = "average_sensor_display"  # "median" or "mean"
CONF_PRICE_TREND_THRESHOLD_RISING = "price_trend_threshold_rising"
CONF_PRICE_TREND_THRESHOLD_FALLING = "price_trend_threshold_falling"
CONF_PRICE_TREND_THRESHOLD_STRONGLY_RISING = "price_trend_threshold_strongly_rising"
CONF_PRICE_TREND_THRESHOLD_STRONGLY_FALLING = "price_trend_threshold_strongly_falling"
CONF_VOLATILITY_THRESHOLD_MODERATE = "volatility_threshold_moderate"
CONF_VOLATILITY_THRESHOLD_HIGH = "volatility_threshold_high"
CONF_VOLATILITY_THRESHOLD_VERY_HIGH = "volatility_threshold_very_high"
@ -54,6 +72,9 @@ ATTRIBUTION = "Data provided by Tibber"
# Integration name should match manifest.json
DEFAULT_NAME = "Tibber Price Information & Ratings"
DEFAULT_EXTENDED_DESCRIPTIONS = False
DEFAULT_VIRTUAL_TIME_OFFSET_DAYS = 0 # No time offset (live mode)
DEFAULT_VIRTUAL_TIME_OFFSET_HOURS = 0
DEFAULT_VIRTUAL_TIME_OFFSET_MINUTES = 0
DEFAULT_BEST_PRICE_FLEX = 15  # 15% base flexibility - optimal for relaxation mode (default enabled)
# Peak price flexibility is set to -20% (20% base flexibility - optimal for relaxation mode).
# This is intentionally more flexible than best price (15%) because peak price periods can be more variable,
@ -62,8 +83,12 @@ DEFAULT_BEST_PRICE_FLEX = 15 # 15% base flexibility - optimal for relaxation mo
# (e.g., -20% means MAX * 0.8), not above the average price.
# A higher percentage allows for more conservative detection, reducing false negatives for peak price warnings.
DEFAULT_PEAK_PRICE_FLEX = -20  # 20% base flexibility (user-facing, percent)
DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG = (
-5
)  # -5% minimum distance from daily average (below average, ensures significance)
DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG = (
5  # 5% minimum distance from daily average (above average, ensures significance)
)
DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH = 60  # 60 minutes minimum period length for best price (user-facing, minutes)
# Note: Peak price warnings are allowed for shorter periods (30 min) than best price periods (60 min).
# This asymmetry is intentional: shorter peak periods are acceptable for alerting users to brief expensive spikes,
@ -72,8 +97,16 @@ DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH = 60 # 60 minutes minimum period length fo
DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH = 30  # 30 minutes minimum period length for peak price (user-facing, minutes)
DEFAULT_PRICE_RATING_THRESHOLD_LOW = -10  # Default rating threshold low percentage
DEFAULT_PRICE_RATING_THRESHOLD_HIGH = 10  # Default rating threshold high percentage
DEFAULT_PRICE_RATING_HYSTERESIS = 2.0  # Hysteresis percentage to prevent flickering at threshold boundaries
DEFAULT_PRICE_RATING_GAP_TOLERANCE = 1  # Max consecutive intervals to smooth out (0 = disabled)
DEFAULT_PRICE_LEVEL_GAP_TOLERANCE = 1  # Max consecutive intervals to smooth out for price level (0 = disabled)
DEFAULT_AVERAGE_SENSOR_DISPLAY = "median"  # Default: show median in state, mean in attributes
DEFAULT_PRICE_TREND_THRESHOLD_RISING = 3  # Default trend threshold for rising prices (%)
DEFAULT_PRICE_TREND_THRESHOLD_FALLING = -3  # Default trend threshold for falling prices (%, negative value)
# Strong trend thresholds default to 2x the base threshold.
# These are independently configurable to allow fine-tuning of "strongly" detection.
DEFAULT_PRICE_TREND_THRESHOLD_STRONGLY_RISING = 6  # Default strong rising threshold (%)
DEFAULT_PRICE_TREND_THRESHOLD_STRONGLY_FALLING = -6  # Default strong falling threshold (%, negative value)
# Default volatility thresholds (relative values using coefficient of variation)
# Coefficient of variation = (standard_deviation / mean) * 100%
# These thresholds are unitless and work across different price levels
@ -92,6 +125,58 @@ DEFAULT_ENABLE_MIN_PERIODS_PEAK = True # Default: minimum periods feature enabl
DEFAULT_MIN_PERIODS_PEAK = 2  # Default: require at least 2 peak price periods (when enabled)
DEFAULT_RELAXATION_ATTEMPTS_PEAK = 11  # Default: 11 steps allows escalation from 20% to 50% (3% increment per step)
# Validation limits (used in GUI schemas and server-side validation)
# These ensure consistency between frontend and backend validation
MAX_FLEX_PERCENTAGE = 50 # Maximum flexibility percentage (aligned with GUI slider and MAX_SAFE_FLEX)
MAX_DISTANCE_PERCENTAGE = 50 # Maximum distance from average percentage (GUI slider limit)
MAX_GAP_COUNT = 8 # Maximum gap count for level filtering (GUI slider limit)
MAX_MIN_PERIODS = 10 # Maximum number of minimum periods per day (GUI slider limit)
MAX_RELAXATION_ATTEMPTS = 12 # Maximum relaxation attempts (GUI slider limit)
MIN_PERIOD_LENGTH = 15 # Minimum period length in minutes (1 quarter hour)
MAX_MIN_PERIOD_LENGTH = 180 # Maximum for minimum period length setting (3 hours - realistic for required minimum)
# Price rating threshold limits
# LOW threshold: negative values (prices below average) - practical range -50% to -5%
# HIGH threshold: positive values (prices above average) - practical range +5% to +50%
# Ensure minimum 5% gap between thresholds to avoid overlap at 0%
MIN_PRICE_RATING_THRESHOLD_LOW = -50 # Minimum value for low rating threshold
MAX_PRICE_RATING_THRESHOLD_LOW = -5 # Maximum value for low rating threshold (must be < HIGH)
MIN_PRICE_RATING_THRESHOLD_HIGH = 5 # Minimum value for high rating threshold (must be > LOW)
MAX_PRICE_RATING_THRESHOLD_HIGH = 50 # Maximum value for high rating threshold
MIN_PRICE_RATING_HYSTERESIS = 0.0 # Minimum hysteresis (0 = disabled)
MAX_PRICE_RATING_HYSTERESIS = 5.0 # Maximum hysteresis (5% band)
MIN_PRICE_RATING_GAP_TOLERANCE = 0 # Minimum gap tolerance (0 = disabled)
MAX_PRICE_RATING_GAP_TOLERANCE = 4 # Maximum gap tolerance (4 intervals = 1 hour)
MIN_PRICE_LEVEL_GAP_TOLERANCE = 0 # Minimum gap tolerance for price level (0 = disabled)
MAX_PRICE_LEVEL_GAP_TOLERANCE = 4 # Maximum gap tolerance for price level (4 intervals = 1 hour)
# Volatility threshold limits
# MODERATE threshold: practical range 5% to 25% (entry point for noticeable fluctuation)
# HIGH threshold: practical range 20% to 40% (significant price swings)
# VERY_HIGH threshold: practical range 35% to 80% (extreme volatility)
# Ensure cascading: MODERATE < HIGH < VERY_HIGH with ~5% minimum gaps
MIN_VOLATILITY_THRESHOLD_MODERATE = 5.0 # Minimum for moderate volatility threshold
MAX_VOLATILITY_THRESHOLD_MODERATE = 25.0 # Maximum for moderate volatility threshold (must be < HIGH)
MIN_VOLATILITY_THRESHOLD_HIGH = 20.0 # Minimum for high volatility threshold (must be > MODERATE)
MAX_VOLATILITY_THRESHOLD_HIGH = 40.0 # Maximum for high volatility threshold (must be < VERY_HIGH)
MIN_VOLATILITY_THRESHOLD_VERY_HIGH = 35.0 # Minimum for very high volatility threshold (must be > HIGH)
MAX_VOLATILITY_THRESHOLD_VERY_HIGH = 80.0 # Maximum for very high volatility threshold
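These limits bound thresholds that are compared against the coefficient of variation defined above (std_dev / mean * 100%). A minimal sketch of such a classifier follows; the threshold values are illustrative examples within the documented ranges, not the integration's configured defaults:

```python
import statistics

# Sketch: volatility classification via coefficient of variation.
# Threshold values are illustrative, not the integration's defaults.
def classify_volatility(prices: list[float], moderate: float = 10.0,
                        high: float = 25.0, very_high: float = 50.0) -> str:
    cv = statistics.pstdev(prices) / statistics.mean(prices) * 100.0
    if cv >= very_high:
        return "VERY_HIGH"
    if cv >= high:
        return "HIGH"
    if cv >= moderate:
        return "MODERATE"
    return "LOW"

print(classify_volatility([0.25, 0.26, 0.25, 0.24]))  # LOW — prices barely move
print(classify_volatility([0.10, 0.45, 0.08, 0.50]))  # VERY_HIGH — wild swings
```

Because the measure is relative to the mean, the same thresholds work whether prices sit around 0.25 €/kWh or 2.5 kr/kWh.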
# Price trend threshold limits
MIN_PRICE_TREND_RISING = 1 # Minimum rising trend threshold
MAX_PRICE_TREND_RISING = 50 # Maximum rising trend threshold
MIN_PRICE_TREND_FALLING = -50 # Minimum falling trend threshold (negative)
MAX_PRICE_TREND_FALLING = -1 # Maximum falling trend threshold (negative)
# Strong trend thresholds have higher ranges to allow detection of significant moves
MIN_PRICE_TREND_STRONGLY_RISING = 2 # Minimum strongly rising threshold (must be > rising)
MAX_PRICE_TREND_STRONGLY_RISING = 100 # Maximum strongly rising threshold
MIN_PRICE_TREND_STRONGLY_FALLING = -100 # Minimum strongly falling threshold (negative)
MAX_PRICE_TREND_STRONGLY_FALLING = -2 # Maximum strongly falling threshold (must be < falling)
# Gap count and relaxation limits
MIN_GAP_COUNT = 0 # Minimum gap count
MIN_RELAXATION_ATTEMPTS = 1 # Minimum relaxation attempts
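To see why a hysteresis band matters at the rating boundaries, here is a hedged sketch; the function, values, and state handling are illustrative, not the integration's actual rating code:

```python
# Sketch: price rating with a hysteresis band around the thresholds.
def rate_price(pct_vs_avg: float, previous: str,
               low: float = -10.0, high: float = 10.0,
               hysteresis: float = 2.0) -> str:
    # Raw classification without hysteresis.
    if pct_vs_avg <= low:
        raw = "LOW"
    elif pct_vs_avg >= high:
        raw = "HIGH"
    else:
        raw = "NORMAL"
    if raw == previous:
        return previous
    # Leave the previous state only once the value clears the boundary
    # by the hysteresis margin, so readings oscillating right at a
    # threshold do not flicker between states.
    if previous == "HIGH" and pct_vs_avg > high - hysteresis:
        return "HIGH"
    if previous == "LOW" and pct_vs_avg < low + hysteresis:
        return "LOW"
    return raw

states = []
state = "NORMAL"
for pct in (9.0, 10.5, 9.5, 7.5):  # percent above daily average
    state = rate_price(pct, state)
    states.append(state)
print(states)  # ['NORMAL', 'HIGH', 'HIGH', 'NORMAL'] — 9.5 did not flicker back
```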
# Home types
HOME_TYPE_APARTMENT = "APARTMENT"
HOME_TYPE_ROWHOUSE = "ROWHOUSE"
@ -109,12 +194,22 @@ HOME_TYPES = {
# Currency mapping: ISO code -> (major_symbol, minor_symbol, minor_name)
# For currencies with Home Assistant constants, use those; otherwise define custom ones
CURRENCY_INFO = {
"EUR": (CURRENCY_EURO, "ct", "Cents"),
"NOK": ("kr", "øre", "Øre"),
"SEK": ("kr", "öre", "Öre"),
"DKK": ("kr", "øre", "Øre"),
"USD": (CURRENCY_DOLLAR, "¢", "Cents"),
"GBP": ("£", "p", "Pence"),
}
# Base currency names: ISO code -> full currency name (in local language)
CURRENCY_NAMES = {
"EUR": "Euro",
"NOK": "Norske kroner",
"SEK": "Svenska kronor",
"DKK": "Danske kroner",
"USD": "US Dollar",
"GBP": "British Pound",
}
@ -136,9 +231,9 @@ def get_currency_info(currency_code: str | None) -> tuple[str, str, str]:
return CURRENCY_INFO.get(currency_code.upper(), CURRENCY_INFO["EUR"])
def format_price_unit_base(currency_code: str | None) -> str:
"""
Format the price unit string with base currency unit (e.g., '€/kWh').
Args:
currency_code: ISO 4217 currency code (e.g., 'EUR', 'NOK', 'SEK')
@ -147,13 +242,13 @@ def format_price_unit_major(currency_code: str | None) -> str:
Formatted unit string like '€/kWh' or 'kr/kWh'
"""
base_symbol, _, _ = get_currency_info(currency_code)
return f"{base_symbol}/{UnitOfPower.KILO_WATT}{UnitOfTime.HOURS}"
def format_price_unit_subunit(currency_code: str | None) -> str:
"""
Format the price unit string with subunit currency unit (e.g., 'ct/kWh').
Args:
currency_code: ISO 4217 currency code (e.g., 'EUR', 'NOK', 'SEK')
@ -162,11 +257,190 @@ def format_price_unit_minor(currency_code: str | None) -> str:
Formatted unit string like 'ct/kWh' or 'øre/kWh'
"""
_, subunit_symbol, _ = get_currency_info(currency_code)
return f"{subunit_symbol}/{UnitOfPower.KILO_WATT}{UnitOfTime.HOURS}"
def get_currency_name(currency_code: str | None) -> str:
"""
Get the full name of the base currency.
Args:
currency_code: ISO 4217 currency code (e.g., 'EUR', 'NOK', 'SEK')
Returns:
Full currency name like 'Euro' or 'Norske kroner'
Defaults to 'Euro' if currency is not recognized
"""
if not currency_code:
currency_code = "EUR"
return CURRENCY_NAMES.get(currency_code.upper(), CURRENCY_NAMES["EUR"])
# ============================================================================
# Currency Display Mode Configuration
# ============================================================================
# Configuration key for currency display mode
CONF_CURRENCY_DISPLAY_MODE = "currency_display_mode"
# Display mode values
DISPLAY_MODE_BASE = "base" # Display in base currency units (€, kr)
DISPLAY_MODE_SUBUNIT = "subunit" # Display in subunit currency units (ct, øre)
# Intelligent per-currency defaults based on market analysis
# EUR: Subunit (cents) - established convention in Germany/Netherlands
# NOK/SEK/DKK: Base (kroner) - Scandinavian preference for whole units
# USD/GBP: Base - international standard
DEFAULT_CURRENCY_DISPLAY = {
"EUR": DISPLAY_MODE_SUBUNIT,
"NOK": DISPLAY_MODE_BASE,
"SEK": DISPLAY_MODE_BASE,
"DKK": DISPLAY_MODE_BASE,
"USD": DISPLAY_MODE_BASE,
"GBP": DISPLAY_MODE_BASE,
}
def get_default_currency_display(currency_code: str | None) -> str:
"""
Get intelligent default display mode for a currency.
Args:
currency_code: ISO 4217 currency code (e.g., 'EUR', 'NOK')
Returns:
Default display mode ('base' or 'subunit')
"""
if not currency_code:
return DISPLAY_MODE_SUBUNIT # Fallback default
return DEFAULT_CURRENCY_DISPLAY.get(currency_code.upper(), DISPLAY_MODE_SUBUNIT)
def get_default_options(currency_code: str | None) -> dict[str, Any]:
"""
Get complete default options for a new config entry.
This ensures new config entries have explicitly set defaults based on their currency,
distinguishing them from legacy config entries that need migration.
Options structure has been flattened for single-section steps:
- Flat values: extended_descriptions, average_sensor_display, currency_display_mode,
price_rating_thresholds, volatility_thresholds, price_trend_thresholds, time offsets
- Nested sections (multi-section steps only): period_settings, flexibility_settings,
relaxation_and_target_periods
Args:
currency_code: ISO 4217 currency code (e.g., 'EUR', 'NOK')
Returns:
Dictionary with all default option values in nested section structure
"""
return {
# Flat configuration values
CONF_EXTENDED_DESCRIPTIONS: DEFAULT_EXTENDED_DESCRIPTIONS,
CONF_AVERAGE_SENSOR_DISPLAY: DEFAULT_AVERAGE_SENSOR_DISPLAY,
CONF_CURRENCY_DISPLAY_MODE: get_default_currency_display(currency_code),
CONF_VIRTUAL_TIME_OFFSET_DAYS: DEFAULT_VIRTUAL_TIME_OFFSET_DAYS,
CONF_VIRTUAL_TIME_OFFSET_HOURS: DEFAULT_VIRTUAL_TIME_OFFSET_HOURS,
CONF_VIRTUAL_TIME_OFFSET_MINUTES: DEFAULT_VIRTUAL_TIME_OFFSET_MINUTES,
# Price rating settings (flat - single-section step)
CONF_PRICE_RATING_THRESHOLD_LOW: DEFAULT_PRICE_RATING_THRESHOLD_LOW,
CONF_PRICE_RATING_THRESHOLD_HIGH: DEFAULT_PRICE_RATING_THRESHOLD_HIGH,
CONF_PRICE_RATING_HYSTERESIS: DEFAULT_PRICE_RATING_HYSTERESIS,
CONF_PRICE_RATING_GAP_TOLERANCE: DEFAULT_PRICE_RATING_GAP_TOLERANCE,
CONF_PRICE_LEVEL_GAP_TOLERANCE: DEFAULT_PRICE_LEVEL_GAP_TOLERANCE,
# Volatility thresholds (flat - single-section step)
CONF_VOLATILITY_THRESHOLD_MODERATE: DEFAULT_VOLATILITY_THRESHOLD_MODERATE,
CONF_VOLATILITY_THRESHOLD_HIGH: DEFAULT_VOLATILITY_THRESHOLD_HIGH,
CONF_VOLATILITY_THRESHOLD_VERY_HIGH: DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH,
# Price trend thresholds (flat - single-section step)
CONF_PRICE_TREND_THRESHOLD_RISING: DEFAULT_PRICE_TREND_THRESHOLD_RISING,
CONF_PRICE_TREND_THRESHOLD_FALLING: DEFAULT_PRICE_TREND_THRESHOLD_FALLING,
# Nested section: Period settings (shared by best/peak price)
"period_settings": {
CONF_BEST_PRICE_MIN_PERIOD_LENGTH: DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH,
CONF_PEAK_PRICE_MIN_PERIOD_LENGTH: DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH,
CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT: DEFAULT_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT: DEFAULT_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
CONF_BEST_PRICE_MAX_LEVEL: DEFAULT_BEST_PRICE_MAX_LEVEL,
CONF_PEAK_PRICE_MIN_LEVEL: DEFAULT_PEAK_PRICE_MIN_LEVEL,
},
# Nested section: Flexibility settings (shared by best/peak price)
"flexibility_settings": {
CONF_BEST_PRICE_FLEX: DEFAULT_BEST_PRICE_FLEX,
CONF_PEAK_PRICE_FLEX: DEFAULT_PEAK_PRICE_FLEX,
CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG: DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG,
CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG: DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG,
},
# Nested section: Relaxation and target periods (shared by best/peak price)
"relaxation_and_target_periods": {
CONF_ENABLE_MIN_PERIODS_BEST: DEFAULT_ENABLE_MIN_PERIODS_BEST,
CONF_MIN_PERIODS_BEST: DEFAULT_MIN_PERIODS_BEST,
CONF_RELAXATION_ATTEMPTS_BEST: DEFAULT_RELAXATION_ATTEMPTS_BEST,
CONF_ENABLE_MIN_PERIODS_PEAK: DEFAULT_ENABLE_MIN_PERIODS_PEAK,
CONF_MIN_PERIODS_PEAK: DEFAULT_MIN_PERIODS_PEAK,
CONF_RELAXATION_ATTEMPTS_PEAK: DEFAULT_RELAXATION_ATTEMPTS_PEAK,
},
}
def get_display_unit_factor(config_entry: ConfigEntry) -> int:
"""
Get multiplication factor for converting base to display currency.
Internal storage is ALWAYS in base currency (4 decimals precision).
This function returns the conversion factor based on user configuration.
Args:
config_entry: ConfigEntry with currency_display_mode option
Returns:
100 for subunit currency display, 1 for base currency display
Example:
price_base = 0.2534 # Internal: 0.2534 €/kWh
factor = get_display_unit_factor(config_entry)
display_value = round(price_base * factor, 2)
# → 25.34 ct/kWh (subunit) or 0.25 €/kWh (base)
"""
display_mode = config_entry.options.get(CONF_CURRENCY_DISPLAY_MODE, DISPLAY_MODE_SUBUNIT)
return 100 if display_mode == DISPLAY_MODE_SUBUNIT else 1
def get_display_unit_string(config_entry: ConfigEntry, currency_code: str | None) -> str:
"""
Get unit string for display based on configuration.
Args:
config_entry: ConfigEntry with currency_display_mode option
currency_code: ISO 4217 currency code
Returns:
Formatted unit string (e.g., 'ct/kWh' or '€/kWh')
"""
display_mode = config_entry.options.get(CONF_CURRENCY_DISPLAY_MODE, DISPLAY_MODE_SUBUNIT)
if display_mode == DISPLAY_MODE_SUBUNIT:
return format_price_unit_subunit(currency_code)
return format_price_unit_base(currency_code)
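Putting the factor and unit helpers together, here is a sketch in which a plain dict stands in for `ConfigEntry.options` and the unit strings are hardcoded for EUR to keep it short:

```python
DISPLAY_MODE_BASE = "base"
DISPLAY_MODE_SUBUNIT = "subunit"

def display_price(price_base: float, options: dict) -> str:
    # `options` stands in for ConfigEntry.options; the unit strings
    # are hardcoded for EUR to keep the sketch short.
    mode = options.get("currency_display_mode", DISPLAY_MODE_SUBUNIT)
    factor = 100 if mode == DISPLAY_MODE_SUBUNIT else 1
    unit = "ct/kWh" if mode == DISPLAY_MODE_SUBUNIT else "€/kWh"
    return f"{round(price_base * factor, 2)} {unit}"

print(display_price(0.2534, {}))                                 # 25.34 ct/kWh
print(display_price(0.2534, {"currency_display_mode": "base"}))  # 0.25 €/kWh
```

Keeping storage in full-precision base units and converting only at display time means a mode change never touches stored data.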
# ============================================================================
# Price Level, Rating, and Volatility Constants
# ============================================================================
# IMPORTANT: These string constants are the single source of truth for
# valid enum values. The Literal types in sensor/types.py and binary_sensor/types.py
# should be kept in sync with these values manually.
# Price level constants (from Tibber API)
PRICE_LEVEL_VERY_CHEAP = "VERY_CHEAP"
PRICE_LEVEL_CHEAP = "CHEAP"
PRICE_LEVEL_NORMAL = "NORMAL"
@ -178,12 +452,20 @@ PRICE_RATING_LOW = "LOW"
PRICE_RATING_NORMAL = "NORMAL"
PRICE_RATING_HIGH = "HIGH"
# Price volatility level constants
VOLATILITY_LOW = "LOW"
VOLATILITY_MODERATE = "MODERATE"
VOLATILITY_HIGH = "HIGH"
VOLATILITY_VERY_HIGH = "VERY_HIGH"
# Price trend constants (calculated values with 5-level scale)
# Used by trend sensors: momentary, short-term, mid-term, long-term
PRICE_TREND_STRONGLY_FALLING = "strongly_falling"
PRICE_TREND_FALLING = "falling"
PRICE_TREND_STABLE = "stable"
PRICE_TREND_RISING = "rising"
PRICE_TREND_STRONGLY_RISING = "strongly_rising"
# Sensor options (lowercase versions for ENUM device class)
# NOTE: These constants define the valid enum options, but they are not used directly
# in sensor/definitions.py due to import timing issues. Instead, the options are defined inline
@ -209,6 +491,15 @@ VOLATILITY_OPTIONS = [
VOLATILITY_VERY_HIGH.lower(),
]
# Trend options for enum sensors (lowercase versions for ENUM device class)
PRICE_TREND_OPTIONS = [
PRICE_TREND_STRONGLY_FALLING,
PRICE_TREND_FALLING,
PRICE_TREND_STABLE,
PRICE_TREND_RISING,
PRICE_TREND_STRONGLY_RISING,
]
# Valid options for best price maximum level filter
# Sorted from cheap to expensive: user selects "up to how expensive"
BEST_PRICE_MAX_LEVEL_OPTIONS = [
@ -251,6 +542,16 @@ PRICE_RATING_MAPPING = {
PRICE_RATING_HIGH: 1,
}
# Mapping for comparing price trends (used for sorting and automation comparisons)
# Values range from -2 (strongly falling) to +2 (strongly rising), with 0 = stable
PRICE_TREND_MAPPING = {
PRICE_TREND_STRONGLY_FALLING: -2,
PRICE_TREND_FALLING: -1,
PRICE_TREND_STABLE: 0,
PRICE_TREND_RISING: 1,
PRICE_TREND_STRONGLY_RISING: 2,
}
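The numeric mapping makes trend states orderable, which is what sorting and automation comparisons rely on. A standalone sketch (values inlined from the mapping above):

```python
# Inlined copy of PRICE_TREND_MAPPING for a standalone demonstration.
PRICE_TREND_MAPPING = {
    "strongly_falling": -2,
    "falling": -1,
    "stable": 0,
    "rising": 1,
    "strongly_rising": 2,
}

# Sorting trend states from most falling to most rising:
trends = ["rising", "strongly_falling", "stable"]
print(sorted(trends, key=PRICE_TREND_MAPPING.__getitem__))
# ['strongly_falling', 'stable', 'rising']

# Automations can compare against zero, e.g. react to any falling trend:
print(PRICE_TREND_MAPPING["falling"] < 0)  # True
```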
# Icon mapping for price levels (dynamic icons based on level)
PRICE_LEVEL_ICON_MAPPING = {
PRICE_LEVEL_VERY_CHEAP: "mdi:gauge-empty",


@ -1,4 +1,28 @@
"""
Cache management for coordinator persistent storage.
This module handles persistent storage for the coordinator, storing:
- user_data: Account/home metadata (required, refreshed daily)
- Timestamps for cache validation and lifecycle tracking
**Storage Architecture (as of v0.25.0):**
There are TWO persistent storage files per config entry:
1. `tibber_prices.{entry_id}` (this module)
- user_data: Account info, home metadata, timezone, currency
- Timestamps: last_user_update, last_midnight_check
2. `tibber_prices.interval_pool.{entry_id}` (interval_pool/storage.py)
- Intervals: Deduplicated quarter-hourly price data (source of truth)
- Fetch metadata: When each interval was fetched
- Protected range: Which intervals to keep during cleanup
**Single Source of Truth:**
Price intervals are ONLY stored in IntervalPool. This cache stores only
user metadata and timestamps. The IntervalPool handles all price data
fetching, caching, and persistence independently.
"""
from __future__ import annotations
@ -16,11 +40,9 @@ _LOGGER = logging.getLogger(__name__)
class TibberPricesCacheData(NamedTuple):
"""Cache data structure for user metadata (price data is in IntervalPool)."""
user_data: dict[str, Any] | None
last_user_update: datetime | None
last_midnight_check: datetime | None
@ -31,20 +53,16 @@ async def load_cache(
*,
time: TibberPricesTimeService,
) -> TibberPricesCacheData:
"""Load cached user data from storage (price data is in IntervalPool)."""
try:
stored = await store.async_load()
if stored:
cached_user_data = stored.get("user_data")
# Restore timestamps
last_user_update = None
last_midnight_check = None
if last_user_update_str := stored.get("last_user_update"):
last_user_update = time.parse_datetime(last_user_update_str)
if last_midnight_check_str := stored.get("last_midnight_check"):
@ -52,9 +70,7 @@ async def load_cache(
_LOGGER.debug("%s Cache loaded successfully", log_prefix)
return TibberPricesCacheData(
user_data=cached_user_data,
last_user_update=last_user_update,
last_midnight_check=last_midnight_check,
)
@ -64,9 +80,7 @@ async def load_cache(
_LOGGER.warning("%s Failed to load cache: %s", log_prefix, ex)
return TibberPricesCacheData(
user_data=None,
last_user_update=None,
last_midnight_check=None,
)
@ -77,11 +91,9 @@ async def save_cache(
cache_data: TibberPricesCacheData,
log_prefix: str,
) -> None:
"""Store cache data (user metadata only, price data is in IntervalPool)."""
data = {
"user_data": cache_data.user_data,
"last_user_update": (cache_data.last_user_update.isoformat() if cache_data.last_user_update else None),
"last_midnight_check": (cache_data.last_midnight_check.isoformat() if cache_data.last_midnight_check else None),
}
@ -91,36 +103,3 @@ async def save_cache(
_LOGGER.debug("%s Cache stored successfully", log_prefix)
except OSError:
_LOGGER.exception("%s Failed to store cache", log_prefix)
def is_cache_valid(
cache_data: TibberPricesCacheData,
log_prefix: str,
*,
time: TibberPricesTimeService,
) -> bool:
"""
Validate if cached price data is still current.
Returns False if:
- No cached data exists
- Cached data is from a different calendar day (in local timezone)
- Midnight turnover has occurred since cache was saved
"""
if cache_data.price_data is None or cache_data.last_price_update is None:
return False
current_local_date = time.as_local(time.now()).date()
last_update_local_date = time.as_local(cache_data.last_price_update).date()
if current_local_date != last_update_local_date:
_LOGGER.debug(
"%s Cache date mismatch: cached=%s, current=%s",
log_prefix,
last_update_local_date,
current_local_date,
)
return False
return True


@@ -31,6 +31,7 @@ TIME_SENSITIVE_ENTITY_KEYS = frozenset(
     {
         # Current/next/previous price sensors
         "current_interval_price",
+        "current_interval_price_base",
         "next_interval_price",
         "previous_interval_price",
         # Current/next/previous price levels
@@ -84,7 +85,11 @@ TIME_SENSITIVE_ENTITY_KEYS = frozenset(
         "best_price_next_start_time",
         "peak_price_end_time",
         "peak_price_next_start_time",
-        # Lifecycle sensor (needs quarter-hour updates for turnover_pending detection at 23:45)
+        # Lifecycle sensor needs quarter-hour precision for state transitions:
+        # - 23:45: turnover_pending (last interval before midnight)
+        # - 00:00: turnover complete (after midnight API update)
+        # - 13:00: searching_tomorrow (when tomorrow data search begins)
+        # Uses state-change filter in _handle_time_sensitive_update() to prevent recorder spam
         "data_lifecycle_status",
     }
 )


@@ -6,21 +6,17 @@ import logging
 from datetime import timedelta
 from typing import TYPE_CHECKING, Any

-from homeassistant.const import CONF_ACCESS_TOKEN
 from homeassistant.core import CALLBACK_TYPE, HomeAssistant, callback
-from homeassistant.helpers import aiohttp_client
 from homeassistant.helpers.storage import Store
-from homeassistant.helpers.update_coordinator import DataUpdateCoordinator, UpdateFailed
+from homeassistant.helpers.update_coordinator import DataUpdateCoordinator

 if TYPE_CHECKING:
-    from collections.abc import Callable
     from datetime import date, datetime

     from homeassistant.config_entries import ConfigEntry

     from .listeners import TimeServiceCallback

-from custom_components.tibber_prices import const as _const
 from custom_components.tibber_prices.api import (
     TibberPricesApiClient,
     TibberPricesApiClientAuthenticationError,
@@ -31,16 +27,19 @@ from custom_components.tibber_prices.const import DOMAIN
 from custom_components.tibber_prices.utils.price import (
     find_price_data_for_interval,
 )
+from homeassistant.exceptions import ConfigEntryAuthFailed

 from . import helpers
 from .constants import (
     STORAGE_VERSION,
     UPDATE_INTERVAL,
 )
-from .data_fetching import TibberPricesDataFetcher
 from .data_transformation import TibberPricesDataTransformer
 from .listeners import TibberPricesListenerManager
+from .midnight_handler import TibberPricesMidnightHandler
 from .periods import TibberPricesPeriodCalculator
+from .price_data_manager import TibberPricesPriceDataManager
+from .repairs import TibberPricesRepairManager
 from .time_service import TibberPricesTimeService

 _LOGGER = logging.getLogger(__name__)
@@ -48,6 +47,44 @@ _LOGGER = logging.getLogger(__name__)
 # Lifecycle state transition thresholds
 FRESH_TO_CACHED_SECONDS = 300  # 5 minutes

+
+def get_connection_state(coordinator: TibberPricesDataUpdateCoordinator) -> bool | None:
+    """
+    Determine API connection state based on lifecycle and exceptions.
+
+    This is the source of truth for the connection binary sensor.
+    It ensures consistency between lifecycle_status and connection state.
+
+    Returns:
+        True: Connected and working (cached or fresh data)
+        False: Connection failed or auth failed
+        None: Unknown state (no data yet, initializing)
+
+    Logic:
+    - Auth failures: definitively disconnected (False)
+    - Other errors with cached data: considered connected (True, using cache)
+    - No errors with data: connected (True)
+    - No data and no error: initializing (None)
+    """
+    # Auth failures = definitively disconnected
+    # User must provide new token via reauth flow
+    if isinstance(coordinator.last_exception, ConfigEntryAuthFailed):
+        return False
+
+    # Other errors but cache available = considered connected (using cached data as fallback)
+    # This shows "on" but lifecycle_status will show "error" to indicate degraded operation
+    if coordinator.last_exception and coordinator.data:
+        return True
+
+    # No error and data available = connected
+    if coordinator.data:
+        return True
+
+    # No data and no error = initializing (unknown state)
+    return None
+
+
 # =============================================================================
 # TIMER SYSTEM - Three independent update mechanisms:
 # =============================================================================
@@ -104,6 +141,17 @@ FRESH_TO_CACHED_SECONDS = 300  # 5 minutes
 # - No race condition possible - Python datetime.date() comparison is thread-safe
 # - _last_transformation_time is separate and tracks when data was last transformed (for cache)
 #
+# CRITICAL - Dual Listener System:
+# After midnight turnover, BOTH listener groups must be notified:
+#   1. Normal listeners (async_update_listeners) - standard HA entities
+#   2. Time-sensitive listeners (_async_update_time_sensitive_listeners) - quarter-hour entities
+#
+# Why? Entities like best_price_period and peak_price_period register as time-sensitive
+# listeners and won't update if only async_update_listeners() is called. This caused
+# the bug where period binary sensors showed stale data until the next quarter-hour
+# refresh at 00:15 (they were updated then because Timer #2 explicitly calls
+# _async_update_time_sensitive_listeners in its normal flow).
+#
 # =============================================================================
@@ -114,7 +162,8 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         self,
         hass: HomeAssistant,
         config_entry: ConfigEntry,
-        version: str,
+        api_client: TibberPricesApiClient,
+        interval_pool: Any,  # TibberPricesIntervalPool - Any to avoid circular import
     ) -> None:
         """Initialize the coordinator."""
         super().__init__(
@@ -125,11 +174,17 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         )
         self.config_entry = config_entry
-        self.api = TibberPricesApiClient(
-            access_token=config_entry.data[CONF_ACCESS_TOKEN],
-            session=aiohttp_client.async_get_clientsession(hass),
-            version=version,
-        )
+
+        # Get home_id from config entry
+        self._home_id = config_entry.data.get("home_id", "")
+        if not self._home_id:
+            _LOGGER.error("No home_id found in config entry %s", config_entry.entry_id)
+
+        # Use the API client from runtime_data (created in __init__.py with proper TOKEN handling)
+        self.api = api_client
+
+        # Use the shared interval pool (one per config entry/Tibber account)
+        self.interval_pool = interval_pool

         # Storage for persistence
         storage_key = f"{DOMAIN}.{config_entry.entry_id}"
@@ -138,8 +193,8 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         # Log prefix for identifying this coordinator instance
         self._log_prefix = f"[{config_entry.title}]"

-        # Track if this is the main entry (first one created)
-        self._is_main_entry = not self._has_existing_main_coordinator()
+        # Note: In the new architecture, all coordinators (parent + subentries) fetch their own data
+        # No distinction between "main" and "sub" coordinators anymore

         # Initialize time service (single source of truth for all time operations)
         self.time = TibberPricesTimeService()
@@ -149,48 +204,62 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         # Initialize helper modules
         self._listener_manager = TibberPricesListenerManager(hass, self._log_prefix)
-        self._data_fetcher = TibberPricesDataFetcher(
+        self._midnight_handler = TibberPricesMidnightHandler()
+        self._price_data_manager = TibberPricesPriceDataManager(
             api=self.api,
             store=self._store,
             log_prefix=self._log_prefix,
             user_update_interval=timedelta(days=1),
             time=self.time,
+            home_id=self._home_id,
+            interval_pool=self.interval_pool,
+        )
+
+        # Create period calculator BEFORE data transformer (transformer needs it in lambda)
+        self._period_calculator = TibberPricesPeriodCalculator(
+            config_entry=config_entry,
+            log_prefix=self._log_prefix,
+            get_config_override_fn=self.get_config_override,
         )
         self._data_transformer = TibberPricesDataTransformer(
             config_entry=config_entry,
             log_prefix=self._log_prefix,
-            perform_turnover_fn=self._perform_midnight_turnover,
+            calculate_periods_fn=lambda price_info: self._period_calculator.calculate_periods_for_price_info(
+                price_info
+            ),
             time=self.time,
         )
-        self._period_calculator = TibberPricesPeriodCalculator(
-            config_entry=config_entry,
-            log_prefix=self._log_prefix,
+        self._repair_manager = TibberPricesRepairManager(
+            hass=hass,
+            entry_id=config_entry.entry_id,
+            home_name=config_entry.title,
         )

         # Register options update listener to invalidate config caches
         config_entry.async_on_unload(config_entry.add_update_listener(self._handle_options_update))

-        # Legacy compatibility - keep references for methods that access directly
+        # User data cache (price data is in IntervalPool)
         self._cached_user_data: dict[str, Any] | None = None
         self._last_user_update: datetime | None = None
         self._user_update_interval = timedelta(days=1)
-        self._cached_price_data: dict[str, Any] | None = None
-        self._last_price_update: datetime | None = None
-        self._cached_transformed_data: dict[str, Any] | None = None
-        self._last_transformation_config: dict[str, Any] | None = None
-        self._last_transformation_time: datetime | None = None  # When data was last transformed (for cache)
-        self._last_midnight_turnover_check: datetime | None = None  # Last midnight turnover detection check
-        self._last_actual_turnover: datetime | None = None  # When midnight turnover actually happened

-        # Data lifecycle tracking for diagnostic sensor
+        # Data lifecycle tracking
+        # Note: _lifecycle_state is used for DIAGNOSTICS only (diagnostics.py export).
+        # The lifecycle SENSOR calculates its state dynamically in get_lifecycle_state(),
+        # using: _is_fetching, last_exception, time calculations, _needs_tomorrow_data(),
+        # and _last_price_update. It does NOT read _lifecycle_state!
         self._lifecycle_state: str = (
-            "cached"  # Current state: cached, fresh, refreshing, searching_tomorrow, turnover_pending, error
+            "cached"  # For diagnostics: cached, fresh, refreshing, searching_tomorrow, turnover_pending, error
         )
+        self._last_price_update: datetime | None = None  # When price data was last fetched from API
         self._api_calls_today: int = 0  # Counter for API calls today
         self._last_api_call_date: date | None = None  # Date of last API call (for daily reset)
-        self._is_fetching: bool = False  # Flag to track active API fetch
+        self._is_fetching: bool = False  # Flag to track active API fetch (read by lifecycle sensor)
         self._last_coordinator_update: datetime | None = None  # When Timer #1 last ran (_async_update_data)
-        self._lifecycle_callbacks: list[Callable[[], None]] = []  # Push-update callbacks for lifecycle sensor
+
+        # Runtime config overrides from config entities (number/switch)
+        # Structure: {"section_name": {"config_key": value, ...}, ...}
+        # When set, these override the corresponding options from config_entry.options
+        self._config_overrides: dict[str, dict[str, Any]] = {}

         # Start timers
         self._listener_manager.schedule_quarter_hour_refresh(self._handle_quarter_hour_refresh)
@@ -202,12 +271,129 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         getattr(_LOGGER, level)(prefixed_message, *args, **kwargs)

     async def _handle_options_update(self, _hass: HomeAssistant, _config_entry: ConfigEntry) -> None:
-        """Handle options update by invalidating config caches."""
-        self._log("debug", "Options updated, invalidating config caches")
+        """Handle options update by invalidating config caches and re-transforming data."""
+        self._log("debug", "Options update triggered, re-transforming data")
         self._data_transformer.invalidate_config_cache()
         self._period_calculator.invalidate_config_cache()
-        # Trigger a refresh to apply new configuration
-        await self.async_request_refresh()
+
+        # Re-transform existing data with new configuration
+        # This updates rating_levels, volatility, and period calculations
+        # without needing to fetch new data from the API
+        if self.data and "priceInfo" in self.data:
+            # Extract raw price_info and re-transform
+            raw_data = {"price_info": self.data["priceInfo"]}
+            self.data = self._transform_data(raw_data)
+            self.async_update_listeners()
+        else:
+            self._log("debug", "No data to re-transform")
+
+    # =========================================================================
+    # Runtime Config Override Methods (for number/switch entities)
+    # =========================================================================
+
+    def set_config_override(self, config_key: str, config_section: str, value: Any) -> None:
+        """
+        Set a runtime config override value.
+
+        These overrides take precedence over options from config_entry.options
+        and are used by number/switch entities for runtime configuration.
+
+        Args:
+            config_key: The configuration key (e.g., CONF_BEST_PRICE_FLEX)
+            config_section: The section in options (e.g., "flexibility_settings")
+            value: The override value
+        """
+        if config_section not in self._config_overrides:
+            self._config_overrides[config_section] = {}
+        self._config_overrides[config_section][config_key] = value
+        self._log(
+            "debug",
+            "Config override set: %s.%s = %s",
+            config_section,
+            config_key,
+            value,
+        )
+
+    def remove_config_override(self, config_key: str, config_section: str) -> None:
+        """
+        Remove a runtime config override value.
+
+        After removal, the value from config_entry.options will be used again.
+
+        Args:
+            config_key: The configuration key to remove
+            config_section: The section the key belongs to
+        """
+        if config_section in self._config_overrides:
+            self._config_overrides[config_section].pop(config_key, None)
+            # Clean up empty sections
+            if not self._config_overrides[config_section]:
+                del self._config_overrides[config_section]
+            self._log(
+                "debug",
+                "Config override removed: %s.%s",
+                config_section,
+                config_key,
+            )
+
+    def get_config_override(self, config_key: str, config_section: str) -> Any | None:
+        """
+        Get a runtime config override value if set.
+
+        Args:
+            config_key: The configuration key to check
+            config_section: The section the key belongs to
+
+        Returns:
+            The override value if set, None otherwise
+        """
+        return self._config_overrides.get(config_section, {}).get(config_key)
+
+    def has_config_override(self, config_key: str, config_section: str) -> bool:
+        """
+        Check if a runtime config override is set.
+
+        Args:
+            config_key: The configuration key to check
+            config_section: The section the key belongs to
+
+        Returns:
+            True if an override is set, False otherwise
+        """
+        return config_key in self._config_overrides.get(config_section, {})
+
+    def get_active_overrides(self) -> dict[str, dict[str, Any]]:
+        """
+        Get all active config overrides.
+
+        Returns:
+            Dictionary of all active overrides by section
+        """
+        return self._config_overrides.copy()
+
+    async def async_handle_config_override_update(self) -> None:
+        """
+        Handle config override change by invalidating caches and re-transforming data.
+
+        This is called by number/switch entities when their values change.
+        Uses the same logic as options update to ensure consistent behavior.
+        """
+        self._log("debug", "Config override update triggered, re-transforming data")
+        self._data_transformer.invalidate_config_cache()
+        self._period_calculator.invalidate_config_cache()
+
+        # Re-transform existing data with new configuration
+        if self.data and "priceInfo" in self.data:
+            raw_data = {"price_info": self.data["priceInfo"]}
+            self.data = self._transform_data(raw_data)
+            self.async_update_listeners()
+        else:
+            self._log("debug", "No data to re-transform")
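The override store added above is a plain two-level dict keyed by section, then key. How a consumer presumably resolves an effective value, with the override winning over `config_entry.options`, can be sketched with plain dicts (the helper names and the sample option are illustrative, not from the integration):

```python
from typing import Any

overrides: dict[str, dict[str, Any]] = {}
# Stand-in for config_entry.options with one hypothetical sectioned option.
options = {"flexibility_settings": {"best_price_flex": 10}}


def set_override(section: str, key: str, value: Any) -> None:
    """Mirror of set_config_override: create the section lazily, then store the value."""
    overrides.setdefault(section, {})[key] = value


def resolve(section: str, key: str) -> Any:
    """Override wins; otherwise fall back to the stored options."""
    override = overrides.get(section, {}).get(key)
    if override is not None:
        return override
    return options.get(section, {}).get(key)


print(resolve("flexibility_settings", "best_price_flex"))  # 10 (from options)
set_override("flexibility_settings", "best_price_flex", 25)
print(resolve("flexibility_settings", "best_price_flex"))  # 25 (runtime override)
```

The `is not None` check also shows why a separate `has_config_override` exists in the diff: a literal `None` override would be indistinguishable from "not set" when only `get_config_override` is consulted.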
     @callback
     def async_add_time_sensitive_listener(self, update_callback: TimeServiceCallback) -> CALLBACK_TYPE:
@@ -287,7 +473,7 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         # Update helper modules with fresh TimeService instance
         self.api.time = time_service
-        self._data_fetcher.time = time_service
+        self._price_data_manager.time = time_service
         self._data_transformer.time = time_service
         self._period_calculator.time = time_service
@@ -353,16 +539,13 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
             True if midnight turnover is needed, False if already done
         """
-        current_date = now.date()
-
-        # First time check - initialize (no turnover needed)
-        if self._last_midnight_turnover_check is None:
+        # Initialize handler on first use
+        if self._midnight_handler.last_check_time is None:
+            self._midnight_handler.update_check_time(now)
             return False

-        last_check_date = self._last_midnight_turnover_check.date()
-
-        # Turnover needed if we've crossed into a new day
-        return current_date > last_check_date
+        # Delegate to midnight handler
+        return self._midnight_handler.is_turnover_needed(now)
     def _perform_midnight_data_rotation(self, now: datetime) -> None:
         """
@@ -380,7 +563,7 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         """
         current_date = now.date()
         last_check_date = (
-            self._last_midnight_turnover_check.date() if self._last_midnight_turnover_check else current_date
+            self._midnight_handler.last_check_time.date() if self._midnight_handler.last_check_time else current_date
         )

         self._log(
@@ -390,28 +573,16 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
             current_date,
         )

-        # Perform rotation on cached data if available
-        if self._cached_price_data and "homes" in self._cached_price_data:
-            for home_id, home_data in self._cached_price_data["homes"].items():
-                if "price_info" in home_data:
-                    price_info = home_data["price_info"]
-                    rotated = self._perform_midnight_turnover(price_info)
-                    home_data["price_info"] = rotated
-                    self._log("debug", "Rotated price data for home %s", home_id)
-
-        # Update coordinator's data with enriched rotated data
-        if self.data:
-            # Re-transform data to ensure enrichment is applied to rotated data
-            if self.is_main_entry():
-                self.data = self._transform_data_for_main_entry(self._cached_price_data)
-            else:
-                # For subentry, get fresh data from main coordinator after rotation
-                # Main coordinator will have performed rotation already
-                self.data["timestamp"] = now
+        # With flat interval list architecture and IntervalPool as source of truth,
+        # no data rotation needed! get_intervals_for_day_offsets() automatically
+        # filters by date. Just re-transform to refresh enrichment.
+        if self.data and "priceInfo" in self.data:
+            # Re-transform data to ensure enrichment is refreshed for new day
+            raw_data = {"price_info": self.data["priceInfo"]}
+            self.data = self._transform_data(raw_data)

         # Mark turnover as done for today (atomic update)
-        self._last_midnight_turnover_check = now
-        self._last_actual_turnover = now  # Record when actual turnover happened
+        self._midnight_handler.mark_turnover_done(now)
     @callback
     def _check_and_handle_midnight_turnover(self, now: datetime) -> bool:
@@ -439,46 +610,44 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         self._log("info", "[Timer #2] Midnight turnover detected, performing data rotation")
         self._perform_midnight_data_rotation(now)

-        # Notify listeners about updated data
+        # CRITICAL: Notify BOTH listener groups after midnight turnover
+        # - async_update_listeners(): Notifies normal entities (via HA's DataUpdateCoordinator)
+        # - async_update_time_sensitive_listeners(): Notifies time-sensitive entities (custom system)
+        # Without both calls, period binary sensors (best_price_period, peak_price_period)
+        # won't update because they're time-sensitive listeners, not normal listeners.
         self.async_update_listeners()
+
+        # Create TimeService with fresh reference time for time-sensitive entity updates
+        time_service = TibberPricesTimeService()
+        self._async_update_time_sensitive_listeners(time_service)
+
         return True
-    def register_lifecycle_callback(self, callback: Callable[[], None]) -> None:
-        """
-        Register callback for lifecycle state changes (push updates).
-
-        This allows the lifecycle sensor to receive immediate updates when
-        the coordinator's lifecycle state changes, instead of waiting for
-        the next polling cycle.
-
-        Args:
-            callback: Function to call when lifecycle state changes (typically async_write_ha_state)
-        """
-        if callback not in self._lifecycle_callbacks:
-            self._lifecycle_callbacks.append(callback)
-
-    def _notify_lifecycle_change(self) -> None:
-        """Notify registered callbacks about lifecycle state change (push update)."""
-        for lifecycle_callback in self._lifecycle_callbacks:
-            lifecycle_callback()
-
     async def async_shutdown(self) -> None:
-        """Shut down the coordinator and clean up timers."""
+        """
+        Shut down the coordinator and clean up timers.
+
+        Cancels all three timer types:
+        - Timer #1: API polling (coordinator update timer)
+        - Timer #2: Quarter-hour entity updates
+        - Timer #3: Minute timing sensor updates
+
+        Also saves cache to persist any unsaved changes and clears all repairs.
+        """
+        # Cancel all timers first
         self._listener_manager.cancel_timers()

-    def _has_existing_main_coordinator(self) -> bool:
-        """Check if there's already a main coordinator in hass.data."""
-        domain_data = self.hass.data.get(DOMAIN, {})
-        return any(
-            isinstance(coordinator, TibberPricesDataUpdateCoordinator) and coordinator.is_main_entry()
-            for coordinator in domain_data.values()
-        )
-
-    def is_main_entry(self) -> bool:
-        """Return True if this is the main entry that fetches data for all homes."""
-        return self._is_main_entry
+        # Clear all repairs when integration is removed or disabled
+        await self._repair_manager.clear_all_repairs()
+
+        # Save cache to persist any unsaved data
+        # This ensures we don't lose data if HA is shutting down
+        try:
+            await self._store_cache()
+            self._log("debug", "Cache saved during shutdown")
+        except OSError as err:
+            # Log but don't raise - shutdown should complete even if cache save fails
+            self._log("error", "Failed to save cache during shutdown: %s", err)
     async def _async_update_data(self) -> dict[str, Any]:
         """
@@ -497,24 +666,26 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         # Transition lifecycle state from "fresh" to "cached" if enough time passed
         # (5 minutes threshold defined in lifecycle calculator)
-        if self._lifecycle_state == "fresh" and self._last_price_update:
-            age = current_time - self._last_price_update
-            if age.total_seconds() > FRESH_TO_CACHED_SECONDS:
-                self._lifecycle_state = "cached"
+        # Note: This updates _lifecycle_state for diagnostics only.
+        # The lifecycle sensor calculates its state dynamically in get_lifecycle_state(),
+        # checking _last_price_update timestamp directly.
+        if self._lifecycle_state == "fresh":
+            # After 5 minutes, data is considered "cached" (no longer "just fetched")
+            self._lifecycle_state = "cached"

         # Update helper modules with fresh TimeService instance
         self.api.time = self.time
-        self._data_fetcher.time = self.time
+        self._price_data_manager.time = self.time
         self._data_transformer.time = self.time
         self._period_calculator.time = self.time

-        # Load cache if not already loaded
-        if self._cached_price_data is None and self._cached_user_data is None:
+        # Load cache if not already loaded (user data only, price data is in Pool)
+        if self._cached_user_data is None:
             await self.load_cache()

-        # Initialize midnight turnover check on first run
-        if self._last_midnight_turnover_check is None:
-            self._last_midnight_turnover_check = current_time
+        # Initialize midnight handler on first run
+        if self._midnight_handler.last_check_time is None:
+            self._midnight_handler.update_check_time(current_time)
         # CRITICAL: Check for midnight turnover FIRST (before any data operations)
         # This prevents race condition with Timer #2 (quarter-hour refresh)
@@ -525,46 +696,65 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
             self._perform_midnight_data_rotation(current_time)
             # After rotation, save cache and notify entities
             await self._store_cache()
+
+            # CRITICAL: Notify time-sensitive listeners explicitly
+            # When Timer #1 performs turnover, returning self.data will trigger
+            # async_update_listeners() (normal listeners) automatically via DataUpdateCoordinator.
+            # But time-sensitive listeners (like best_price_period, peak_price_period)
+            # won't be notified unless we explicitly call their update method.
+            # This ensures ALL entities see the updated periods after midnight turnover.
+            time_service = TibberPricesTimeService()
+            self._async_update_time_sensitive_listeners(time_service)
+
             # Return current data (enriched after rotation) to trigger entity updates
             if self.data:
                 return self.data
         try:
-            if self.is_main_entry():
-                # Set lifecycle state to refreshing before API call
-                self._lifecycle_state = "refreshing"
-                self._is_fetching = True
-                self._notify_lifecycle_change()  # Push update: now refreshing
-
-                # Reset API call counter if day changed
-                current_date = current_time.date()
-                if self._last_api_call_date != current_date:
-                    self._api_calls_today = 0
-                    self._last_api_call_date = current_date
-
-                # Main entry fetches data for all homes
-                configured_home_ids = self._get_configured_home_ids()
-                result = await self._data_fetcher.handle_main_entry_update(
-                    current_time,
-                    configured_home_ids,
-                    self._transform_data_for_main_entry,
-                )
-
-                # Update lifecycle tracking after successful fetch
-                self._is_fetching = False
-                self._api_calls_today += 1
-                self._lifecycle_state = "fresh"  # Data just fetched
-                self._notify_lifecycle_change()  # Push update: fresh data available
-
-                # CRITICAL: Sync cached_user_data after API call (for new integrations without cache)
-                # handle_main_entry_update() may have fetched user_data via update_user_data_if_needed()
-                self._cached_user_data = self._data_fetcher.cached_user_data
-                # Sync _last_price_update for lifecycle tracking
-                self._last_price_update = self._data_fetcher._last_price_update  # noqa: SLF001 - Sync for lifecycle tracking
-                return result
-
-            # Subentries get data from main coordinator (no lifecycle tracking - they don't fetch)
-            return await self._handle_subentry_update()
+            # Reset API call counter if day changed
+            current_date = current_time.date()
+            if self._last_api_call_date != current_date:
+                self._api_calls_today = 0
+                self._last_api_call_date = current_date
+
+            # Set _is_fetching flag - lifecycle sensor shows "refreshing" during fetch
+            # Note: Lifecycle sensor reads this flag directly in get_lifecycle_state()
+            self._is_fetching = True
+
+            # Get current price info to check if tomorrow data already exists
+            current_price_info = self.data.get("priceInfo", []) if self.data else []
+
+            result, api_called = await self._price_data_manager.handle_main_entry_update(
+                current_time,
+                self._home_id,
+                self._transform_data,
+                current_price_info=current_price_info,
+            )
+
+            # CRITICAL: Reset fetching flag AFTER data fetch completes
+            self._is_fetching = False
+
+            # Sync user_data cache (price data is in IntervalPool)
+            self._cached_user_data = self._price_data_manager.cached_user_data
+
+            # Update lifecycle tracking - ONLY if API was actually called
+            # (not when returning cached data)
+            if api_called and result and "priceInfo" in result and len(result["priceInfo"]) > 0:
+                self._last_price_update = current_time  # Track when data was fetched from API
+                self._api_calls_today += 1
+                self._lifecycle_state = "fresh"  # Data just fetched
+                _LOGGER.debug(
+                    "API call completed: Fetched %d intervals, updating lifecycle to 'fresh'",
+                    len(result["priceInfo"]),
+                )
+                # Note: _lifecycle_state is for diagnostics only.
+                # Lifecycle sensor calculates state dynamically from _last_price_update.
+            elif not api_called:
+                # Using cached data - lifecycle stays as is (cached/searching_tomorrow/etc.)
+                _LOGGER.debug(
+                    "Using cached data: %d intervals from pool, no API call made",
+                    len(result.get("priceInfo", [])),
+                )
except ( except (
TibberPricesApiClientAuthenticationError, TibberPricesApiClientAuthenticationError,
TibberPricesApiClientCommunicationError, TibberPricesApiClientCommunicationError,
@@ -572,115 +762,93 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         ) as err:
             # Reset lifecycle state on error
             self._is_fetching = False
-            self._lifecycle_state = "error"
-            self._notify_lifecycle_change()  # Push update: error occurred
-            return await self._data_fetcher.handle_api_error(
-                err,
-                self._transform_data_for_main_entry,
-            )
+            self._lifecycle_state = "error"  # For diagnostics
+            # Note: Lifecycle sensor detects errors via coordinator.last_exception
+            # Track rate limit errors for repair system
+            await self._track_rate_limit_error(err)
+            # Handle API error - will re-raise as ConfigEntryAuthFailed or UpdateFailed
+            # Note: With IntervalPool, there's no local cache fallback here.
+            # The Pool has its own persistence for offline recovery.
+            await self._price_data_manager.handle_api_error(err)
+            # Note: handle_api_error always raises, this is never reached
+            return {}  # Satisfy type checker
+        else:
+            # Check for repair conditions after successful update
+            await self._check_repair_conditions(result, current_time)
+            return result
+
+    async def _track_rate_limit_error(self, error: Exception) -> None:
+        """Track rate limit errors for repair notification system."""
+        error_str = str(error).lower()
+        is_rate_limit = "429" in error_str or "rate limit" in error_str or "too many requests" in error_str
+        if is_rate_limit:
+            await self._repair_manager.track_rate_limit_error()
+
+    async def _check_repair_conditions(
+        self,
+        result: dict[str, Any],
+        current_time: datetime,
+    ) -> None:
+        """Check and manage repair conditions after successful data update."""
+        # 1. Home not found detection (home was removed from Tibber account)
+        if result and result.get("_home_not_found"):
+            await self._repair_manager.create_home_not_found_repair()
+            # Remove the marker before returning to entities
+            result.pop("_home_not_found", None)
+        else:
+            # Home exists - clear any existing repair
+            await self._repair_manager.clear_home_not_found_repair()
+        # 2. Tomorrow data availability (after 18:00)
+        if result and "priceInfo" in result:
+            has_tomorrow_data = self._price_data_manager.has_tomorrow_data(result["priceInfo"])
+            await self._repair_manager.check_tomorrow_data_availability(
+                has_tomorrow_data=has_tomorrow_data,
+                current_time=current_time,
+            )
+            # 3. Clear rate limit tracking on successful API call
+            await self._repair_manager.clear_rate_limit_tracking()
-
-    async def _handle_subentry_update(self) -> dict[str, Any]:
-        """Handle update for subentry - get data from main coordinator."""
-        main_data = await self._get_data_from_main_coordinator()
-        return self._transform_data_for_subentry(main_data)
-
-    async def _get_data_from_main_coordinator(self) -> dict[str, Any]:
-        """Get data from the main coordinator (subentries only)."""
-        # Find the main coordinator
-        main_coordinator = self._find_main_coordinator()
-        if not main_coordinator:
-            msg = "Main coordinator not found"
-            raise UpdateFailed(msg)
-        # Wait for main coordinator to have data
-        if main_coordinator.data is None:
-            main_coordinator.async_set_updated_data({})
-        # Return the main coordinator's data
-        return main_coordinator.data or {}
-
-    def _find_main_coordinator(self) -> TibberPricesDataUpdateCoordinator | None:
-        """Find the main coordinator that fetches data for all homes."""
-        domain_data = self.hass.data.get(DOMAIN, {})
-        for coordinator in domain_data.values():
-            if (
-                isinstance(coordinator, TibberPricesDataUpdateCoordinator)
-                and coordinator.is_main_entry()
-                and coordinator != self
-            ):
-                return coordinator
-        return None
-
-    def _get_configured_home_ids(self) -> set[str]:
-        """Get all home_ids that have active config entries (main + subentries)."""
-        home_ids = helpers.get_configured_home_ids(self.hass)
-        self._log(
-            "debug",
-            "Found %d configured home(s): %s",
-            len(home_ids),
-            ", ".join(sorted(home_ids)),
-        )
-        return home_ids
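The new `_track_rate_limit_error` added in this hunk classifies errors by substring matching on the exception text. A standalone sketch of that heuristic (`looks_like_rate_limit` is a hypothetical helper name, not part of the integration):

```python
def looks_like_rate_limit(error: Exception) -> bool:
    """Heuristic mirroring _track_rate_limit_error: match common HTTP 429 signatures."""
    error_str = str(error).lower()
    return (
        "429" in error_str
        or "rate limit" in error_str
        or "too many requests" in error_str
    )


# String matching is intentionally loose: the API client may wrap the HTTP
# status in different exception types, so the text is the common denominator.
print(looks_like_rate_limit(RuntimeError("HTTP 429 Too Many Requests")))  # True
print(looks_like_rate_limit(RuntimeError("connection reset by peer")))  # False
```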
     async def load_cache(self) -> None:
-        """Load cached data from storage."""
-        await self._data_fetcher.load_cache()
-        # Sync legacy references
-        self._cached_price_data = self._data_fetcher.cached_price_data
-        self._cached_user_data = self._data_fetcher.cached_user_data
-        self._last_price_update = self._data_fetcher._last_price_update  # noqa: SLF001 - Sync for lifecycle tracking
-        self._last_user_update = self._data_fetcher._last_user_update  # noqa: SLF001 - Sync for lifecycle tracking
-        # Initialize _last_actual_turnover: If cache is from today, assume turnover happened at midnight
-        if self._last_price_update:
-            cache_date = self.time.as_local(self._last_price_update).date()
-            today_date = self.time.as_local(self.time.now()).date()
-            if cache_date == today_date:
-                # Cache is from today, so midnight turnover already happened
-                today_midnight = self.time.as_local(self.time.now()).replace(hour=0, minute=0, second=0, microsecond=0)
-                self._last_actual_turnover = today_midnight
-
-    def _perform_midnight_turnover(self, price_info: dict[str, Any]) -> dict[str, Any]:
-        """
-        Perform midnight turnover on price data.
-
-        Moves: today → yesterday, tomorrow → today, clears tomorrow.
-
-        This handles cases where:
-        - Server was running through midnight
-        - Cache is being refreshed and needs proper day rotation
-
-        Args:
-            price_info: The price info dict with 'today', 'tomorrow', 'yesterday' keys
-
-        Returns:
-            Updated price_info with rotated day data
-
-        """
-        return helpers.perform_midnight_turnover(price_info, time=self.time)
+        """Load cached user data from storage (price data is in IntervalPool)."""
+        await self._price_data_manager.load_cache()
+        # Sync user data reference
+        self._cached_user_data = self._price_data_manager.cached_user_data
+        self._last_user_update = self._price_data_manager._last_user_update  # noqa: SLF001 - Sync for lifecycle tracking
+        # Note: Midnight handler state is now based on current date
+        # Since price data is in IntervalPool (persistent), we just need to
+        # ensure turnover doesn't happen twice if HA restarts after midnight
+        today_midnight = self.time.as_local(self.time.now()).replace(hour=0, minute=0, second=0, microsecond=0)
+        # Mark today's midnight as done to prevent double turnover on HA restart
+        self._midnight_handler.mark_turnover_done(today_midnight)

     async def _store_cache(self) -> None:
-        """Store cache data."""
-        await self._data_fetcher.store_cache(self._last_midnight_turnover_check)
+        """Store cache data (user metadata only, price data is in IntervalPool)."""
+        await self._price_data_manager.store_cache(self._midnight_handler.last_check_time)

-    def _needs_tomorrow_data(self, tomorrow_date: date) -> bool:
+    def _needs_tomorrow_data(self) -> bool:
         """Check if tomorrow data is missing or invalid."""
-        return helpers.needs_tomorrow_data(self._cached_price_data, tomorrow_date)
+        # Check self.data (from Pool) instead of _cached_price_data
+        if not self.data or "priceInfo" not in self.data:
+            return True
+        return helpers.needs_tomorrow_data({"price_info": self.data["priceInfo"]})

-    def _has_valid_tomorrow_data(self, tomorrow_date: date) -> bool:
+    def _has_valid_tomorrow_data(self) -> bool:
         """Check if we have valid tomorrow data (inverse of _needs_tomorrow_data)."""
-        return not self._needs_tomorrow_data(tomorrow_date)
+        return not self._needs_tomorrow_data()

     @callback
     def _merge_cached_data(self) -> dict[str, Any]:
-        """Merge cached data into the expected format for main entry."""
-        if not self._cached_price_data:
+        """Return current data (from Pool)."""
+        if not self.data:
             return {}
-        return self._transform_data_for_main_entry(self._cached_price_data)
+        return self.data

-    def _get_threshold_percentages(self) -> dict[str, int]:
+    def _get_threshold_percentages(self) -> dict[str, int | float]:
         """Get threshold percentages from config options."""
         return self._data_transformer.get_threshold_percentages()
@@ -688,154 +856,36 @@ class TibberPricesDataUpdateCoordinator(DataUpdateCoordinator[dict[str, Any]]):
         """Calculate periods (best price and peak price) for the given price info."""
         return self._period_calculator.calculate_periods_for_price_info(price_info)

-    def _get_current_transformation_config(self) -> dict[str, Any]:
-        """Get current configuration that affects data transformation."""
-        return {
-            "thresholds": self._get_threshold_percentages(),
-            "volatility_thresholds": {
-                "moderate": self.config_entry.options.get(_const.CONF_VOLATILITY_THRESHOLD_MODERATE, 15.0),
-                "high": self.config_entry.options.get(_const.CONF_VOLATILITY_THRESHOLD_HIGH, 25.0),
-                "very_high": self.config_entry.options.get(_const.CONF_VOLATILITY_THRESHOLD_VERY_HIGH, 40.0),
-            },
-            "best_price_config": {
-                "flex": self.config_entry.options.get(_const.CONF_BEST_PRICE_FLEX, 15.0),
-                "max_level": self.config_entry.options.get(_const.CONF_BEST_PRICE_MAX_LEVEL, "NORMAL"),
-                "min_period_length": self.config_entry.options.get(_const.CONF_BEST_PRICE_MIN_PERIOD_LENGTH, 4),
-                "min_distance_from_avg": self.config_entry.options.get(
-                    _const.CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG, -5.0
-                ),
-                "max_level_gap_count": self.config_entry.options.get(_const.CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT, 0),
-                "enable_min_periods": self.config_entry.options.get(_const.CONF_ENABLE_MIN_PERIODS_BEST, False),
-                "min_periods": self.config_entry.options.get(_const.CONF_MIN_PERIODS_BEST, 2),
-                "relaxation_attempts": self.config_entry.options.get(_const.CONF_RELAXATION_ATTEMPTS_BEST, 4),
-            },
-            "peak_price_config": {
-                "flex": self.config_entry.options.get(_const.CONF_PEAK_PRICE_FLEX, 15.0),
-                "min_level": self.config_entry.options.get(_const.CONF_PEAK_PRICE_MIN_LEVEL, "HIGH"),
-                "min_period_length": self.config_entry.options.get(_const.CONF_PEAK_PRICE_MIN_PERIOD_LENGTH, 4),
-                "min_distance_from_avg": self.config_entry.options.get(
-                    _const.CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG, 5.0
-                ),
-                "max_level_gap_count": self.config_entry.options.get(_const.CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT, 0),
-                "enable_min_periods": self.config_entry.options.get(_const.CONF_ENABLE_MIN_PERIODS_PEAK, False),
-                "min_periods": self.config_entry.options.get(_const.CONF_MIN_PERIODS_PEAK, 2),
-                "relaxation_attempts": self.config_entry.options.get(_const.CONF_RELAXATION_ATTEMPTS_PEAK, 4),
-            },
-        }
-
-    def _should_retransform_data(self, current_time: datetime) -> bool:
-        """Check if data transformation should be performed."""
-        # No cached transformed data - must transform
-        if self._cached_transformed_data is None:
-            return True
-        # Configuration changed - must retransform
-        current_config = self._get_current_transformation_config()
-        if current_config != self._last_transformation_config:
-            self._log("debug", "Configuration changed, retransforming data")
-            return True
-        # Check for midnight turnover
-        now_local = self.time.as_local(current_time)
-        current_date = now_local.date()
-        if self._last_transformation_time is None:
-            return True
-        last_transform_local = self.time.as_local(self._last_transformation_time)
-        last_transform_date = last_transform_local.date()
-        if current_date != last_transform_date:
-            self._log("debug", "Midnight turnover detected, retransforming data")
-            return True
-        return False
-
-    def _transform_data_for_main_entry(self, raw_data: dict[str, Any]) -> dict[str, Any]:
+    def _transform_data(self, raw_data: dict[str, Any]) -> dict[str, Any]:
         """Transform raw data for main entry (aggregated view of all homes)."""
-        current_time = self.time.now()
-        # Return cached transformed data if no retransformation needed
-        if not self._should_retransform_data(current_time) and self._cached_transformed_data is not None:
-            self._log("debug", "Using cached transformed data (no transformation needed)")
-            return self._cached_transformed_data
-        self._log("debug", "Transforming price data (enrichment only, periods calculated separately)")
-        # Delegate actual transformation to DataTransformer (enrichment only)
-        transformed_data = self._data_transformer.transform_data_for_main_entry(raw_data)
-        # Add periods (calculated and cached separately by PeriodCalculator)
-        if "priceInfo" in transformed_data:
-            transformed_data["periods"] = self._calculate_periods_for_price_info(transformed_data["priceInfo"])
-        # Cache the transformed data
-        self._cached_transformed_data = transformed_data
-        self._last_transformation_config = self._get_current_transformation_config()
-        self._last_transformation_time = current_time
-        return transformed_data
-
-    def _transform_data_for_subentry(self, main_data: dict[str, Any]) -> dict[str, Any]:
-        """Transform main coordinator data for subentry (home-specific view)."""
-        current_time = self.time.now()
-        # Return cached transformed data if no retransformation needed
-        if not self._should_retransform_data(current_time) and self._cached_transformed_data is not None:
-            self._log("debug", "Using cached transformed data (no transformation needed)")
-            return self._cached_transformed_data
-        self._log("debug", "Transforming price data for home (enrichment only, periods calculated separately)")
-        home_id = self.config_entry.data.get("home_id")
-        if not home_id:
-            return main_data
-        # Delegate actual transformation to DataTransformer (enrichment only)
-        transformed_data = self._data_transformer.transform_data_for_subentry(main_data, home_id)
-        # Add periods (calculated and cached separately by PeriodCalculator)
-        if "priceInfo" in transformed_data:
-            transformed_data["periods"] = self._calculate_periods_for_price_info(transformed_data["priceInfo"])
-        # Cache the transformed data
-        self._cached_transformed_data = transformed_data
-        self._last_transformation_config = self._get_current_transformation_config()
-        self._last_transformation_time = current_time
-        return transformed_data
+        # Delegate complete transformation to DataTransformer (enrichment + periods)
+        # DataTransformer handles its own caching internally
+        return self._data_transformer.transform_data(raw_data)

     # --- Methods expected by sensors and services ---

     def get_home_data(self, home_id: str) -> dict[str, Any] | None:
-        """Get data for a specific home."""
+        """Get data for a specific home (returns this coordinator's data if home_id matches)."""
         if not self.data:
             return None
-        homes_data = self.data.get("homes", {})
-        return homes_data.get(home_id)
+        # In new architecture, each coordinator manages one home only
+        # Return data only if requesting this coordinator's home
+        if home_id == self._home_id:
+            return self.data
+        return None

     def get_current_interval(self) -> dict[str, Any] | None:
         """Get the price data for the current interval."""
         if not self.data:
             return None
-        price_info = self.data.get("priceInfo", {})
-        if not price_info:
+        if not self.data:
             return None
         now = self.time.now()
-        return find_price_data_for_interval(price_info, now, time=self.time)
+        return find_price_data_for_interval(self.data, now, time=self.time)

-    def get_all_intervals(self) -> list[dict[str, Any]]:
-        """Get all price intervals (today + tomorrow)."""
-        if not self.data:
-            return []
-        price_info = self.data.get("priceInfo", {})
-        today_prices = price_info.get("today", [])
-        tomorrow_prices = price_info.get("tomorrow", [])
-        return today_prices + tomorrow_prices
-
     async def refresh_user_data(self) -> bool:
         """Force refresh of user data and return True if data was updated."""


@@ -1,314 +0,0 @@
"""Data fetching logic for the coordinator."""
from __future__ import annotations
import asyncio
import logging
import secrets
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from datetime import timedelta
from custom_components.tibber_prices.api import (
TibberPricesApiClientAuthenticationError,
TibberPricesApiClientCommunicationError,
TibberPricesApiClientError,
)
from homeassistant.core import callback
from homeassistant.exceptions import ConfigEntryAuthFailed
from homeassistant.helpers.update_coordinator import UpdateFailed
from . import cache, helpers
from .constants import TOMORROW_DATA_CHECK_HOUR, TOMORROW_DATA_RANDOM_DELAY_MAX
if TYPE_CHECKING:
from collections.abc import Callable
from datetime import date, datetime
from custom_components.tibber_prices.api import TibberPricesApiClient
from .time_service import TibberPricesTimeService
_LOGGER = logging.getLogger(__name__)
class TibberPricesDataFetcher:
"""Handles data fetching, caching, and main/subentry coordination."""
def __init__(
self,
api: TibberPricesApiClient,
store: Any,
log_prefix: str,
user_update_interval: timedelta,
time: TibberPricesTimeService,
) -> None:
"""Initialize the data fetcher."""
self.api = api
self._store = store
self._log_prefix = log_prefix
self._user_update_interval = user_update_interval
self.time: TibberPricesTimeService = time
# Cached data
self._cached_price_data: dict[str, Any] | None = None
self._cached_user_data: dict[str, Any] | None = None
self._last_price_update: datetime | None = None
self._last_user_update: datetime | None = None
def _log(self, level: str, message: str, *args: object, **kwargs: object) -> None:
"""Log with coordinator-specific prefix."""
prefixed_message = f"{self._log_prefix} {message}"
getattr(_LOGGER, level)(prefixed_message, *args, **kwargs)
async def load_cache(self) -> None:
"""Load cached data from storage."""
cache_data = await cache.load_cache(self._store, self._log_prefix, time=self.time)
self._cached_price_data = cache_data.price_data
self._cached_user_data = cache_data.user_data
self._last_price_update = cache_data.last_price_update
self._last_user_update = cache_data.last_user_update
# Parse timestamps if we loaded price data from cache
if self._cached_price_data:
self._cached_price_data = helpers.parse_all_timestamps(self._cached_price_data, time=self.time)
# Validate cache: check if price data is from a previous day
if not cache.is_cache_valid(cache_data, self._log_prefix, time=self.time):
self._log("info", "Cached price data is from a previous day, clearing cache to fetch fresh data")
self._cached_price_data = None
self._last_price_update = None
await self.store_cache()
async def store_cache(self, last_midnight_check: datetime | None = None) -> None:
"""Store cache data."""
cache_data = cache.TibberPricesCacheData(
price_data=self._cached_price_data,
user_data=self._cached_user_data,
last_price_update=self._last_price_update,
last_user_update=self._last_user_update,
last_midnight_check=last_midnight_check,
)
await cache.save_cache(self._store, cache_data, self._log_prefix)
async def update_user_data_if_needed(self, current_time: datetime) -> bool:
"""
Update user data if needed (daily check).
Returns:
True if user data was updated, False otherwise
"""
if self._last_user_update is None or current_time - self._last_user_update >= self._user_update_interval:
try:
self._log("debug", "Updating user data")
user_data = await self.api.async_get_viewer_details()
self._cached_user_data = user_data
self._last_user_update = current_time
self._log("debug", "User data updated successfully")
except (
TibberPricesApiClientError,
TibberPricesApiClientCommunicationError,
) as ex:
self._log("warning", "Failed to update user data: %s", ex)
return False # Update failed
else:
return True # User data was updated
return False # No update needed
@callback
def should_update_price_data(self, current_time: datetime) -> bool | str:
"""
Check if price data should be updated from the API.
API calls only happen when truly needed:
1. No cached data exists
2. Cache is invalid (from previous day - detected by _is_cache_valid)
3. After 13:00 local time and tomorrow's data is missing or invalid
Cache validity is ensured by:
- _is_cache_valid() checks date mismatch on load
- Midnight turnover clears cache (Timer #2)
- Tomorrow data validation after 13:00
No periodic "safety" updates - trust the cache validation!
Returns:
bool or str: True for immediate update, "tomorrow_check" for tomorrow
data check (needs random delay), False for no update
"""
if self._cached_price_data is None:
self._log("debug", "API update needed: No cached price data")
return True
if self._last_price_update is None:
self._log("debug", "API update needed: No last price update timestamp")
return True
# Get tomorrow's date using TimeService
_, tomorrow_midnight = self.time.get_day_boundaries("today")
tomorrow_date = tomorrow_midnight.date()
now_local = self.time.as_local(current_time)
# Check if after 13:00 and tomorrow data is missing or invalid
if (
now_local.hour >= TOMORROW_DATA_CHECK_HOUR
and self._cached_price_data
and "homes" in self._cached_price_data
and self.needs_tomorrow_data(tomorrow_date)
):
self._log(
"debug",
"API update needed: After %s:00 and tomorrow's data missing/invalid",
TOMORROW_DATA_CHECK_HOUR,
)
# Return special marker to indicate this is a tomorrow data check
# Caller should add random delay to spread load
return "tomorrow_check"
# No update needed - cache is valid and complete
return False
def needs_tomorrow_data(self, tomorrow_date: date) -> bool:
"""Check if tomorrow data is missing or invalid."""
return helpers.needs_tomorrow_data(self._cached_price_data, tomorrow_date)
async def fetch_all_homes_data(self, configured_home_ids: set[str], current_time: datetime) -> dict[str, Any]:
"""Fetch data for all homes (main coordinator only)."""
if not configured_home_ids:
self._log("warning", "No configured homes found - cannot fetch price data")
return {
"timestamp": current_time,
"homes": {},
}
# Get price data for configured homes only (API call with specific home_ids)
self._log("debug", "Fetching price data for %d configured home(s)", len(configured_home_ids))
price_data = await self.api.async_get_price_info(home_ids=configured_home_ids)
all_homes_data = {}
homes_list = price_data.get("homes", {})
# Process returned data
for home_id, home_price_data in homes_list.items():
# Store raw price data without enrichment
# Enrichment will be done dynamically when data is transformed
home_data = {
"price_info": home_price_data,
}
all_homes_data[home_id] = home_data
self._log(
"debug",
"Successfully fetched data for %d home(s)",
len(all_homes_data),
)
return {
"timestamp": current_time,
"homes": all_homes_data,
}
async def handle_main_entry_update(
self,
current_time: datetime,
configured_home_ids: set[str],
transform_fn: Callable[[dict[str, Any]], dict[str, Any]],
) -> dict[str, Any]:
"""Handle update for main entry - fetch data for all homes."""
# Update user data if needed (daily check)
user_data_updated = await self.update_user_data_if_needed(current_time)
# Check if we need to update price data
should_update = self.should_update_price_data(current_time)
if should_update:
# If this is a tomorrow data check, add random delay to spread API load
if should_update == "tomorrow_check":
# Use secrets for better randomness distribution
delay = secrets.randbelow(TOMORROW_DATA_RANDOM_DELAY_MAX + 1)
self._log(
"debug",
"Tomorrow data check - adding random delay of %d seconds to spread load",
delay,
)
await asyncio.sleep(delay)
self._log("debug", "Fetching fresh price data from API")
raw_data = await self.fetch_all_homes_data(configured_home_ids, current_time)
# Parse timestamps immediately after API fetch
raw_data = helpers.parse_all_timestamps(raw_data, time=self.time)
# Cache the data (now with datetime objects)
self._cached_price_data = raw_data
self._last_price_update = current_time
await self.store_cache()
# Transform for main entry: provide aggregated view
return transform_fn(raw_data)
# Use cached data if available
if self._cached_price_data is not None:
# If user data was updated, we need to return transformed data to trigger entity updates
# This ensures diagnostic sensors (home_type, grid_company, etc.) get refreshed
if user_data_updated:
self._log("debug", "User data updated - returning transformed data to update diagnostic sensors")
else:
self._log("debug", "Using cached price data (no API call needed)")
return transform_fn(self._cached_price_data)
# Fallback: no cache and no update needed (shouldn't happen)
self._log("warning", "No cached data available and update not triggered - returning empty data")
return {
"timestamp": current_time,
"homes": {},
"priceInfo": {},
}
async def handle_api_error(
self,
error: Exception,
transform_fn: Callable[[dict[str, Any]], dict[str, Any]],
) -> dict[str, Any]:
"""Handle API errors with fallback to cached data."""
if isinstance(error, TibberPricesApiClientAuthenticationError):
msg = "Invalid access token"
raise ConfigEntryAuthFailed(msg) from error
# Use cached data as fallback if available
if self._cached_price_data is not None:
self._log("warning", "API error, using cached data: %s", error)
return transform_fn(self._cached_price_data)
msg = f"Error communicating with API: {error}"
raise UpdateFailed(msg) from error
def perform_midnight_turnover(self, price_info: dict[str, Any]) -> dict[str, Any]:
"""
Perform midnight turnover on price data.
Moves: today → yesterday, tomorrow → today, clears tomorrow.
Args:
price_info: The price info dict with 'today', 'tomorrow', 'yesterday' keys
Returns:
Updated price_info with rotated day data
"""
return helpers.perform_midnight_turnover(price_info, time=self.time)
@property
def cached_price_data(self) -> dict[str, Any] | None:
"""Get cached price data."""
return self._cached_price_data
@cached_price_data.setter
def cached_price_data(self, value: dict[str, Any] | None) -> None:
"""Set cached price data."""
self._cached_price_data = value
@property
def cached_user_data(self) -> dict[str, Any] | None:
"""Get cached user data."""
return self._cached_user_data


@ -2,6 +2,7 @@
from __future__ import annotations from __future__ import annotations
import copy
import logging import logging
from typing import TYPE_CHECKING, Any from typing import TYPE_CHECKING, Any
@ -26,19 +27,20 @@ class TibberPricesDataTransformer:
self, self,
config_entry: ConfigEntry, config_entry: ConfigEntry,
log_prefix: str, log_prefix: str,
perform_turnover_fn: Callable[[dict[str, Any]], dict[str, Any]], calculate_periods_fn: Callable[[dict[str, Any]], dict[str, Any]],
time: TibberPricesTimeService, time: TibberPricesTimeService,
) -> None: ) -> None:
"""Initialize the data transformer.""" """Initialize the data transformer."""
self.config_entry = config_entry self.config_entry = config_entry
self._log_prefix = log_prefix self._log_prefix = log_prefix
self._perform_turnover_fn = perform_turnover_fn self._calculate_periods_fn = calculate_periods_fn
self.time: TibberPricesTimeService = time self.time: TibberPricesTimeService = time
# Transformation cache # Transformation cache
self._cached_transformed_data: dict[str, Any] | None = None self._cached_transformed_data: dict[str, Any] | None = None
self._last_transformation_config: dict[str, Any] | None = None self._last_transformation_config: dict[str, Any] | None = None
self._last_midnight_check: datetime | None = None self._last_midnight_check: datetime | None = None
self._last_source_data_timestamp: datetime | None = None # Track when source data changed
self._config_cache: dict[str, Any] | None = None self._config_cache: dict[str, Any] | None = None
self._config_cache_valid = False self._config_cache_valid = False
@ -47,19 +49,50 @@ class TibberPricesDataTransformer:
prefixed_message = f"{self._log_prefix} {message}" prefixed_message = f"{self._log_prefix} {message}"
getattr(_LOGGER, level)(prefixed_message, *args, **kwargs) getattr(_LOGGER, level)(prefixed_message, *args, **kwargs)
def get_threshold_percentages(self) -> dict[str, int]: def get_threshold_percentages(self) -> dict[str, int | float]:
"""Get threshold percentages from config options.""" """
Get threshold percentages, hysteresis and gap tolerance for RATING_LEVEL from config options.
CRITICAL: This function is ONLY for rating_level (internal calculation: LOW/NORMAL/HIGH).
Do NOT use for price level (Tibber API: VERY_CHEAP/CHEAP/NORMAL/EXPENSIVE/VERY_EXPENSIVE).
"""
options = self.config_entry.options or {} options = self.config_entry.options or {}
return { return {
"low": options.get(_const.CONF_PRICE_RATING_THRESHOLD_LOW, _const.DEFAULT_PRICE_RATING_THRESHOLD_LOW), "low": options.get(_const.CONF_PRICE_RATING_THRESHOLD_LOW, _const.DEFAULT_PRICE_RATING_THRESHOLD_LOW),
"high": options.get(_const.CONF_PRICE_RATING_THRESHOLD_HIGH, _const.DEFAULT_PRICE_RATING_THRESHOLD_HIGH), "high": options.get(_const.CONF_PRICE_RATING_THRESHOLD_HIGH, _const.DEFAULT_PRICE_RATING_THRESHOLD_HIGH),
"hysteresis": options.get(_const.CONF_PRICE_RATING_HYSTERESIS, _const.DEFAULT_PRICE_RATING_HYSTERESIS),
"gap_tolerance": options.get(
_const.CONF_PRICE_RATING_GAP_TOLERANCE, _const.DEFAULT_PRICE_RATING_GAP_TOLERANCE
),
} }
def get_level_gap_tolerance(self) -> int:
"""
Get gap tolerance for PRICE LEVEL (Tibber API) from config options.
CRITICAL: This is separate from rating_level gap tolerance.
Price level comes from Tibber API (VERY_CHEAP/CHEAP/NORMAL/EXPENSIVE/VERY_EXPENSIVE).
Rating level is calculated internally (LOW/NORMAL/HIGH).
"""
options = self.config_entry.options or {}
return options.get(_const.CONF_PRICE_LEVEL_GAP_TOLERANCE, _const.DEFAULT_PRICE_LEVEL_GAP_TOLERANCE)
def invalidate_config_cache(self) -> None: def invalidate_config_cache(self) -> None:
"""Invalidate config cache when options change.""" """
Invalidate config cache AND transformation cache when options change.
CRITICAL: When options like gap_tolerance, hysteresis, or price_level_gap_tolerance
change, we must clear BOTH caches:
1. Config cache (_config_cache) - forces config rebuild on next check
2. Transformation cache (_cached_transformed_data) - forces data re-enrichment
This ensures that the next call to transform_data() will re-calculate
rating_levels and apply new gap tolerance settings to existing price data.
"""
self._config_cache_valid = False self._config_cache_valid = False
self._config_cache = None self._config_cache = None
self._log("debug", "Config cache invalidated") self._cached_transformed_data = None # Force re-transformation with new config
self._last_transformation_config = None # Force config comparison to trigger
    def _get_current_transformation_config(self) -> dict[str, Any]:
        """
@@ -72,36 +105,53 @@ class TibberPricesDataTransformer:
            return self._config_cache

        # Build config dictionary (expensive operation)
        options = self.config_entry.options

        # Best/peak price remain nested (multi-section steps)
        best_period_section = options.get("period_settings", {})
        best_flex_section = options.get("flexibility_settings", {})
        best_relax_section = options.get("relaxation_and_target_periods", {})
        peak_period_section = options.get("period_settings", {})
        peak_flex_section = options.get("flexibility_settings", {})
        peak_relax_section = options.get("relaxation_and_target_periods", {})

        config = {
            "thresholds": self.get_threshold_percentages(),
            "level_gap_tolerance": self.get_level_gap_tolerance(),  # Separate: Tibber's price level smoothing
            # Volatility thresholds now flat (single-section step)
            "volatility_thresholds": {
                "moderate": options.get(_const.CONF_VOLATILITY_THRESHOLD_MODERATE, 15.0),
                "high": options.get(_const.CONF_VOLATILITY_THRESHOLD_HIGH, 25.0),
                "very_high": options.get(_const.CONF_VOLATILITY_THRESHOLD_VERY_HIGH, 40.0),
            },
            # Price trend thresholds now flat (single-section step)
            "price_trend_thresholds": {
                "rising": options.get(
                    _const.CONF_PRICE_TREND_THRESHOLD_RISING, _const.DEFAULT_PRICE_TREND_THRESHOLD_RISING
                ),
                "falling": options.get(
                    _const.CONF_PRICE_TREND_THRESHOLD_FALLING, _const.DEFAULT_PRICE_TREND_THRESHOLD_FALLING
                ),
            },
            "best_price_config": {
                "flex": best_flex_section.get(_const.CONF_BEST_PRICE_FLEX, 15.0),
                "max_level": best_period_section.get(_const.CONF_BEST_PRICE_MAX_LEVEL, "NORMAL"),
                "min_period_length": best_period_section.get(_const.CONF_BEST_PRICE_MIN_PERIOD_LENGTH, 4),
                "min_distance_from_avg": best_flex_section.get(_const.CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG, -5.0),
                "max_level_gap_count": best_period_section.get(_const.CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT, 0),
                "enable_min_periods": best_relax_section.get(_const.CONF_ENABLE_MIN_PERIODS_BEST, False),
                "min_periods": best_relax_section.get(_const.CONF_MIN_PERIODS_BEST, 2),
                "relaxation_attempts": best_relax_section.get(_const.CONF_RELAXATION_ATTEMPTS_BEST, 4),
            },
            "peak_price_config": {
                "flex": peak_flex_section.get(_const.CONF_PEAK_PRICE_FLEX, 15.0),
                "min_level": peak_period_section.get(_const.CONF_PEAK_PRICE_MIN_LEVEL, "HIGH"),
                "min_period_length": peak_period_section.get(_const.CONF_PEAK_PRICE_MIN_PERIOD_LENGTH, 4),
                "min_distance_from_avg": peak_flex_section.get(_const.CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG, 5.0),
                "max_level_gap_count": peak_period_section.get(_const.CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT, 0),
                "enable_min_periods": peak_relax_section.get(_const.CONF_ENABLE_MIN_PERIODS_PEAK, False),
                "min_periods": peak_relax_section.get(_const.CONF_MIN_PERIODS_PEAK, 2),
                "relaxation_attempts": peak_relax_section.get(_const.CONF_RELAXATION_ATTEMPTS_PEAK, 4),
            },
        }
@@ -110,16 +160,33 @@ class TibberPricesDataTransformer:
        self._config_cache_valid = True
        return config

    def _should_retransform_data(self, current_time: datetime, source_data_timestamp: datetime | None = None) -> bool:
        """
        Check if data transformation should be performed.

        Args:
            current_time: Current time for midnight check
            source_data_timestamp: Timestamp of source data (if available)

        Returns:
            True if retransformation needed, False if cached data can be used
        """
        # No cached transformed data - must transform
        if self._cached_transformed_data is None:
            return True

        # Source data changed - must retransform
        # This detects when new API data was fetched (e.g., tomorrow data arrival)
        if source_data_timestamp is not None and source_data_timestamp != self._last_source_data_timestamp:
            self._log("debug", "Source data changed, retransforming data")
            return True

        # Configuration changed - must retransform
        current_config = self._get_current_transformation_config()
        config_changed = current_config != self._last_transformation_config
        if config_changed:
            self._log("debug", "Configuration changed, retransforming data")
            return True

        # Check for midnight turnover
@@ -138,120 +205,80 @@ class TibberPricesDataTransformer:
        return False
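The three retransform triggers (no cache yet, new source timestamp, changed config snapshot) can be sketched as a standalone function. The names below are simplified stand-ins for the class attributes, not the real method:

```python
from datetime import datetime


def should_retransform(cached, last_ts, last_cfg, source_ts, cfg) -> bool:
    """Simplified stand-in for _should_retransform_data()."""
    if cached is None:
        return True  # no cached transformed data yet
    if source_ts is not None and source_ts != last_ts:
        return True  # new API data arrived (e.g., tomorrow's prices)
    if cfg != last_cfg:
        return True  # options changed since last transformation
    return False


ts1 = datetime(2026, 3, 29, 13, 0)
ts2 = datetime(2026, 3, 29, 14, 0)
cfg = {"flex": 15.0}

assert should_retransform(None, None, None, ts1, cfg)              # no cache
assert should_retransform({"x": 1}, ts1, cfg, ts2, cfg)            # data changed
assert should_retransform({"x": 1}, ts1, cfg, ts1, {"flex": 20})   # config changed
assert not should_retransform({"x": 1}, ts1, cfg, ts1, cfg)        # cache reusable
```

Because the config snapshot is a plain dict, ordinary `!=` comparison detects any changed option without tracking individual keys.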
    def transform_data(self, raw_data: dict[str, Any]) -> dict[str, Any]:
        """Transform raw data for main entry (single home view)."""
        current_time = self.time.now()
        source_data_timestamp = raw_data.get("timestamp")

        # Return cached transformed data if no retransformation needed
        should_retransform = self._should_retransform_data(current_time, source_data_timestamp)
        has_cache = self._cached_transformed_data is not None
        self._log(
            "info",
            "transform_data: should_retransform=%s, has_cache=%s",
            should_retransform,
            has_cache,
        )
        if not should_retransform and has_cache:
            self._log("debug", "Using cached transformed data (no transformation needed)")
            # has_cache ensures _cached_transformed_data is not None
            return self._cached_transformed_data  # type: ignore[return-value]

        self._log("debug", "Transforming price data (enrichment + period calculation)")

        # Extract data from single-home structure
        home_id = raw_data.get("home_id", "")
        # CRITICAL: Make a deep copy of intervals to avoid modifying cached raw data
        # The enrichment function modifies intervals in-place, which would corrupt
        # the original API data and make re-enrichment with different settings impossible
        all_intervals = copy.deepcopy(raw_data.get("price_info", []))
        currency = raw_data.get("currency", "EUR")

        if not all_intervals:
            return {
                "timestamp": raw_data.get("timestamp"),
                "home_id": home_id,
                "priceInfo": [],
                "pricePeriods": {
                    "best_price": [],
                    "peak_price": [],
                },
                "currency": currency,
            }

        # Enrich price info dynamically with calculated differences and rating levels
        # (Modifies all_intervals in-place, returns same list)
        thresholds = self.get_threshold_percentages()  # Only for rating_level
        level_gap_tolerance = self.get_level_gap_tolerance()  # Separate: for Tibber's price level
        enriched_intervals = enrich_price_info_with_differences(
            all_intervals,
            threshold_low=thresholds["low"],
            threshold_high=thresholds["high"],
            hysteresis=float(thresholds["hysteresis"]),
            gap_tolerance=int(thresholds["gap_tolerance"]),
            level_gap_tolerance=level_gap_tolerance,
            time=self.time,
        )

        # Store enriched intervals directly as priceInfo (flat list)
        transformed_data = {
            "home_id": home_id,
            "priceInfo": enriched_intervals,
            "currency": currency,
        }

        # Calculate periods (best price and peak price)
        if "priceInfo" in transformed_data:
            transformed_data["pricePeriods"] = self._calculate_periods_fn(transformed_data["priceInfo"])

        # Cache the transformed data
        self._cached_transformed_data = transformed_data
        self._last_transformation_config = self._get_current_transformation_config()
        self._last_midnight_check = current_time
        self._last_source_data_timestamp = source_data_timestamp

        return transformed_data
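The deep-copy guard matters because the enrichment step mutates intervals in place; without the copy, the cached raw API data would be polluted and could not be re-enriched with different settings. A minimal demonstration with a generic enrichment stand-in (not the real `enrich_price_info_with_differences`):

```python
import copy

raw = {"price_info": [{"total": 0.30}, {"total": 0.10}]}


def enrich(intervals, threshold):
    """Mutates intervals in place, like the real enrichment function."""
    for iv in intervals:
        iv["rating_level"] = "LOW" if iv["total"] <= threshold else "HIGH"
    return intervals


# With deepcopy, re-enrichment under new settings always starts from clean data:
enriched = enrich(copy.deepcopy(raw["price_info"]), threshold=0.15)

assert "rating_level" not in raw["price_info"][0]  # raw cache untouched
assert [iv["rating_level"] for iv in enriched] == ["HIGH", "LOW"]
```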
@@ -3,104 +3,139 @@
from __future__ import annotations

import logging
from datetime import timedelta
from typing import TYPE_CHECKING, Any

from homeassistant.util import dt as dt_util

if TYPE_CHECKING:
    from .time_service import TibberPricesTimeService

_LOGGER = logging.getLogger(__name__)
def get_intervals_for_day_offsets(
    coordinator_data: dict[str, Any] | None,
    offsets: list[int],
) -> list[dict[str, Any]]:
    """
    Get intervals for specific day offsets from coordinator data.

    This is the core function for filtering intervals by date offset.
    Abstracts the data structure - callers don't need to know where intervals are stored.

    Performance optimized:
    - Date comparison using .date() on datetime objects (fast)
    - Single pass through intervals with date caching
    - Only processes requested offsets

    Args:
        coordinator_data: Coordinator data dict (typically coordinator.data).
        offsets: List of day offsets relative to today (e.g., [0, 1] for today and tomorrow).
            Range: -374 to +1 (allows historical comparisons up to one year + one week).
            0 = today, -1 = yesterday, +1 = tomorrow, -7 = one week ago, etc.

    Returns:
        List of intervals matching the requested day offsets, in chronological order.

    Example:
        # Get only today's intervals
        today_intervals = get_intervals_for_day_offsets(coordinator.data, [0])

        # Get today and tomorrow
        future_intervals = get_intervals_for_day_offsets(coordinator.data, [0, 1])

        # Get all available intervals
        all = get_intervals_for_day_offsets(coordinator.data, [-1, 0, 1])

        # Compare last week with same week one year ago
        comparison = get_intervals_for_day_offsets(coordinator.data, [-7, -371])
    """
    if not coordinator_data:
        return []

    # Validate offsets are within acceptable range
    min_offset = -374  # One year + one week for comparisons
    max_offset = 1  # Tomorrow (we don't have data further in the future)

    # Extract intervals from coordinator data structure (priceInfo is now a list)
    all_intervals = coordinator_data.get("priceInfo", [])
    if not all_intervals:
        return []

    # Get current local date for comparison (no TimeService needed - use dt_util directly)
    now_local = dt_util.now()
    today_date = now_local.date()

    # Build set of target dates based on requested offsets
    target_dates = set()
    for offset in offsets:
        # Silently clamp offsets to valid range (don't fail on invalid input)
        if offset < min_offset or offset > max_offset:
            continue
        target_date = today_date + timedelta(days=offset)
        target_dates.add(target_date)

    if not target_dates:
        return []

    # Filter intervals matching target dates
    # Optimized: single pass, date() called once per interval
    result = []
    for interval in all_intervals:
        starts_at = interval.get("startsAt")
        if not starts_at:
            continue

        # Handle both datetime objects and strings (for flexibility)
        if isinstance(starts_at, str):
            # Parse if string (should be rare after parse_all_timestamps)
            starts_at = dt_util.parse_datetime(starts_at)
            if not starts_at:
                continue
            starts_at = dt_util.as_local(starts_at)

        # Fast date comparison using datetime.date()
        interval_date = starts_at.date()
        if interval_date in target_dates:
            result.append(interval)

    return result
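Under the same assumptions (a flat `priceInfo` list whose `startsAt` values are already local datetimes), the offset filter reduces to a date-set membership test. This sketch passes `today` explicitly instead of using `dt_util.now()` so it runs standalone:

```python
from datetime import date, datetime, timedelta


def intervals_for_offsets(intervals, today: date, offsets):
    """Simplified stand-in for get_intervals_for_day_offsets()."""
    targets = {today + timedelta(days=o) for o in offsets if -374 <= o <= 1}
    return [iv for iv in intervals if iv["startsAt"].date() in targets]


today = date(2026, 3, 29)
intervals = [
    {"startsAt": datetime(2026, 3, 28, 23, 0)},  # yesterday
    {"startsAt": datetime(2026, 3, 29, 0, 0)},   # today
    {"startsAt": datetime(2026, 3, 30, 0, 0)},   # tomorrow
]

assert len(intervals_for_offsets(intervals, today, [0])) == 1
assert len(intervals_for_offsets(intervals, today, [0, 1])) == 2
assert intervals_for_offsets(intervals, today, [2]) == []  # out of range, silently dropped
```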
def needs_tomorrow_data(
    cached_price_data: dict[str, Any] | None,
) -> bool:
    """
    Check if tomorrow data is missing or invalid in cached price data.

    Expects single-home cache format: {"price_info": [...], "home_id": "xxx"}
    Old multi-home format (v0.14.0) is automatically invalidated by is_cache_valid()
    in cache.py, so we only need to handle the current format here.

    Uses get_intervals_for_day_offsets() to automatically determine tomorrow
    based on current date. No explicit date parameter needed.

    Args:
        cached_price_data: Cached price data in single-home structure

    Returns:
        True if tomorrow's data is missing, False otherwise
    """
    if not cached_price_data or "price_info" not in cached_price_data:
        return False

    # Single-home format: {"price_info": [...], "home_id": "xxx"}
    # Use helper to get tomorrow's intervals (offset +1 from current date)
    coordinator_data = {"priceInfo": cached_price_data.get("price_info", [])}
    tomorrow_intervals = get_intervals_for_day_offsets(coordinator_data, [1])

    # If no intervals for tomorrow found, we need tomorrow data
    return len(tomorrow_intervals) == 0
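The wrapper's contract can be exercised with a simplified offset filter standing in for the real helper (note the deliberate asymmetry: an empty or missing cache returns False, because a full fetch is pending anyway):

```python
from datetime import date, datetime, timedelta


def needs_tomorrow(cached, today: date) -> bool:
    """Simplified stand-in: the real code delegates to get_intervals_for_day_offsets()."""
    if not cached or "price_info" not in cached:
        return False  # nothing cached yet, a full fetch is pending anyway
    tomorrow = today + timedelta(days=1)
    return not any(iv["startsAt"].date() == tomorrow for iv in cached["price_info"])


today = date(2026, 3, 29)
only_today = {"price_info": [{"startsAt": datetime(2026, 3, 29, 13, 0)}]}
with_tomorrow = {"price_info": [{"startsAt": datetime(2026, 3, 30, 0, 0)}]}

assert needs_tomorrow(only_today, today)
assert not needs_tomorrow(with_tomorrow, today)
assert not needs_tomorrow(None, today)
```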
def parse_all_timestamps(price_data: dict[str, Any], *, time: TibberPricesTimeService) -> dict[str, Any]:
@@ -113,29 +148,28 @@ def parse_all_timestamps(price_data: dict[str, Any], *, time: TibberPricesTimeSe
    Performance: ~200 timestamps parsed ONCE instead of multiple times per update cycle.

    Args:
        price_data: Raw API data with string timestamps (single-home structure)
        time: TibberPricesTimeService for parsing

    Returns:
        Same structure but with datetime objects instead of strings
    """
    if not price_data:
        return price_data

    # Single-home structure: price_info is a flat list of intervals
    price_info = price_data.get("price_info", [])

    # Skip if price_info is not a list (empty or invalid)
    if not isinstance(price_info, list):
        return price_data

    # Parse timestamps in flat interval list
    for interval in price_info:
        if (starts_at_str := interval.get("startsAt")) and isinstance(starts_at_str, str):
            # Parse once, convert to local timezone, store as datetime object
            interval["startsAt"] = time.parse_and_localize(starts_at_str)
        # If already datetime (e.g., from cache), skip parsing

    return price_data
@@ -157,7 +157,18 @@ class TibberPricesListenerManager:
        self,
        handler_callback: Callable[[datetime], None],
    ) -> None:
        """
        Schedule 30-second entity refresh for timing sensors (Timer #3).

        This is Timer #3 in the integration's timer architecture. It MUST trigger
        at exact 30-second boundaries (0, 30 seconds) to keep timing sensors
        (countdown, time-to) accurate.

        Home Assistant may introduce small scheduling delays (jitter), which are
        corrected using _BOUNDARY_TOLERANCE_SECONDS in time_service.py.

        Runs independently of Timer #1 (API polling), which operates at random offsets.
        """
        # Cancel any existing timer
        if self._minute_timer_cancel:
            self._minute_timer_cancel()
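Firing at exact :00/:30 boundaries while tolerating scheduler jitter can be sketched as follows. The tolerance constant mirrors `_BOUNDARY_TOLERANCE_SECONDS`, but its value and both helper functions here are illustrative assumptions, not the integration's actual implementation:

```python
from datetime import datetime, timedelta

BOUNDARY_TOLERANCE_SECONDS = 1.0  # assumed value, for illustration


def next_boundary(now: datetime) -> datetime:
    """Next exact :00 or :30 second boundary strictly after `now`."""
    base = now.replace(microsecond=0)
    return base + timedelta(seconds=30 - (base.second % 30))


def fired_on_boundary(fired_at: datetime) -> bool:
    """Accept small scheduling jitter around the boundary."""
    offset = fired_at.second % 30 + fired_at.microsecond / 1e6
    return min(offset, 30 - offset) <= BOUNDARY_TOLERANCE_SECONDS


now = datetime(2026, 3, 29, 12, 0, 17, 250_000)
assert next_boundary(now) == datetime(2026, 3, 29, 12, 0, 30)
assert fired_on_boundary(datetime(2026, 3, 29, 12, 0, 30, 400_000))  # 0.4 s jitter: OK
assert not fired_on_boundary(datetime(2026, 3, 29, 12, 0, 37))       # 7 s off: rejected
```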
@@ -0,0 +1,121 @@
"""
Midnight turnover detection and coordination handler.
This module provides atomic coordination logic for midnight turnover between
multiple timers (DataUpdateCoordinator and quarter-hour refresh timer).
The handler ensures that midnight turnover happens exactly once per day,
regardless of which timer detects it first.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from datetime import datetime
class TibberPricesMidnightHandler:
    """
    Handles midnight turnover detection and atomic coordination.

    This class encapsulates the logic for detecting when midnight has passed
    and ensuring that data rotation happens exactly once per day.

    The atomic coordination works without locks by comparing date values:
    - Timer #1 and Timer #2 both check if current_date > last_checked_date
    - First timer to succeed marks the date as checked
    - Second timer sees dates are equal and skips turnover
    - Timer #3 doesn't participate in midnight logic (only 30-second timing updates)

    HA Restart Handling:
    - If HA restarts after midnight, _last_midnight_check is None (fresh handler)
    - But _last_actual_turnover is restored from cache with yesterday's date
    - is_turnover_needed() detects the date mismatch and returns True
    - Missed midnight turnover is caught up on first timer run after restart

    Attributes:
        _last_midnight_check: Last datetime when midnight turnover was checked
        _last_actual_turnover: Last datetime when turnover actually happened
    """

    def __init__(self) -> None:
        """Initialize the midnight handler."""
        self._last_midnight_check: datetime | None = None
        self._last_actual_turnover: datetime | None = None

    def is_turnover_needed(self, now: datetime) -> bool:
        """
        Check if midnight turnover is needed without side effects.

        This is a pure check function - it doesn't modify state. Call
        mark_turnover_done() after successfully performing the turnover.

        IMPORTANT: If handler is uninitialized (HA restart), this checks if we
        need to catch up on midnight turnover that happened while HA was down.

        Args:
            now: Current datetime to check

        Returns:
            True if midnight has passed since last check, False otherwise
        """
        # First time initialization after HA restart
        if self._last_midnight_check is None:
            # Check if we need to catch up on missed midnight turnover
            # If last_actual_turnover exists, we can determine if midnight was missed
            if self._last_actual_turnover is not None:
                last_turnover_date = self._last_actual_turnover.date()
                current_date = now.date()
                # Turnover needed if we're on a different day than last turnover
                return current_date > last_turnover_date
            # Both None = fresh start, no turnover needed yet
            return False

        # Extract date components
        last_checked_date = self._last_midnight_check.date()
        current_date = now.date()

        # Midnight crossed if current date is after last checked date
        return current_date > last_checked_date

    def mark_turnover_done(self, now: datetime) -> None:
        """
        Mark that midnight turnover has been completed.

        Updates both check timestamp and actual turnover timestamp to prevent
        duplicate turnover by another timer.

        Args:
            now: Current datetime when turnover was completed
        """
        self._last_midnight_check = now
        self._last_actual_turnover = now

    def update_check_time(self, now: datetime) -> None:
        """
        Update the last check time without marking turnover as done.

        Used for initializing the handler or updating the check timestamp
        without triggering turnover logic.

        Args:
            now: Current datetime to set as last check time
        """
        if self._last_midnight_check is None:
            self._last_midnight_check = now

    @property
    def last_turnover_time(self) -> datetime | None:
        """Get the timestamp of the last actual turnover."""
        return self._last_actual_turnover

    @property
    def last_check_time(self) -> datetime | None:
        """Get the timestamp of the last midnight check."""
        return self._last_midnight_check
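The lock-free once-per-day contract can be exercised directly. To keep the sketch self-contained, a condensed copy of the date-compare logic is used (the `datetime(...)` values are arbitrary examples):

```python
from datetime import datetime


class MidnightHandler:
    """Condensed copy of TibberPricesMidnightHandler's date-compare logic."""

    def __init__(self) -> None:
        self._last_midnight_check: datetime | None = None
        self._last_actual_turnover: datetime | None = None

    def is_turnover_needed(self, now: datetime) -> bool:
        if self._last_midnight_check is None:
            # Restart catch-up: compare against restored turnover date, if any
            if self._last_actual_turnover is not None:
                return now.date() > self._last_actual_turnover.date()
            return False
        return now.date() > self._last_midnight_check.date()

    def mark_turnover_done(self, now: datetime) -> None:
        self._last_midnight_check = now
        self._last_actual_turnover = now

    def update_check_time(self, now: datetime) -> None:
        if self._last_midnight_check is None:
            self._last_midnight_check = now


h = MidnightHandler()
h.update_check_time(datetime(2026, 3, 28, 23, 59))
assert not h.is_turnover_needed(datetime(2026, 3, 28, 23, 59))  # same day

# Timer #1 crosses midnight first and performs the turnover...
assert h.is_turnover_needed(datetime(2026, 3, 29, 0, 1))
h.mark_turnover_done(datetime(2026, 3, 29, 0, 1))

# ...so Timer #2, firing moments later, sees equal dates and skips it.
assert not h.is_turnover_needed(datetime(2026, 3, 29, 0, 2))

# HA-restart catch-up: fresh handler, turnover date restored from yesterday
h2 = MidnightHandler()
h2._last_actual_turnover = datetime(2026, 3, 28, 0, 1)
assert h2.is_turnover_needed(datetime(2026, 3, 29, 8, 0))
```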
@@ -16,8 +16,10 @@ from .period_building import (
    add_interval_ends,
    build_periods,
    calculate_reference_prices,
    extend_periods_across_midnight,
    filter_periods_by_end_date,
    filter_periods_by_min_length,
    filter_superseded_periods,
    split_intervals_by_day,
)
from .period_statistics import (
@@ -52,10 +54,10 @@ def calculate_periods(
    7. Extract period summaries (start/end times, not full price data)

    Args:
        all_prices: All price data points from yesterday/today/tomorrow.
        config: Period configuration containing reverse_sort, flex, min_distance_from_avg,
            min_period_length, threshold_low, and threshold_high.
        time: TibberPricesTimeService instance (required).

    Returns:
        Dict with:
@@ -73,7 +75,7 @@ def calculate_periods(
    # Extract config values
    reverse_sort = config.reverse_sort
    flex_raw = config.flex  # Already normalized to positive by get_period_config()
    min_distance_from_avg = config.min_distance_from_avg
    min_period_length = config.min_period_length
    threshold_low = config.threshold_low
@@ -81,13 +83,14 @@ def calculate_periods(
    # CRITICAL: Hard cap flex at 50% to prevent degenerate behavior
    # Above 50%, period detection becomes unreliable (too many intervals qualify)
    # NOTE: flex_raw is already positive from normalization in get_period_config()
    flex = flex_raw
    if flex_raw > MAX_SAFE_FLEX:
        flex = MAX_SAFE_FLEX
        _LOGGER.warning(
            "Flex %.1f%% exceeds maximum safe value! Capping at %.0f%%. "
            "Recommendation: Use 15-20%% with relaxation enabled, or 25-35%% without relaxation.",
            flex_raw * 100,
            MAX_SAFE_FLEX * 100,
        )
@@ -126,9 +129,13 @@ def calculate_periods(
    # High flex (>25%) makes outlier detection too permissive, accepting
    # unstable price contexts as "normal". This breaks period formation.
    # User's flex setting still applies to period criteria (in_flex check).
    # Import details logger locally (core.py imports logger locally in function)
    _LOGGER_DETAILS = logging.getLogger(__name__ + ".details")  # noqa: N806
    outlier_flex = min(abs(flex) * 100, MAX_OUTLIER_FLEX * 100)
    if abs(flex) * 100 > MAX_OUTLIER_FLEX * 100:
        _LOGGER_DETAILS.debug(
            "%sOutlier filtering: Using capped flex %.1f%% (user setting: %.1f%%)",
            INDENT_L0,
            outlier_flex,
@@ -145,6 +152,7 @@ def calculate_periods(
    price_context = {
        "ref_prices": ref_prices,
        "avg_prices": avg_price_by_day,
        "intervals_by_day": intervals_by_day,  # Needed for day volatility calculation
        "flex": flex,
        "min_distance_from_avg": min_distance_from_avg,
    }
@@ -177,12 +185,14 @@ def calculate_periods(
    # Step 5: Add interval ends
    add_interval_ends(raw_periods, time=time)

    # Step 6: Filter periods by end date (keep periods ending yesterday or later)
    # This ensures coordinator cache contains yesterday/today/tomorrow periods
    # Sensors filter further for today+tomorrow, services can access all cached periods
    raw_periods = filter_periods_by_end_date(raw_periods, time=time)

    # Step 7: Extract lightweight period summaries (no full price data)
    # Note: Periods are filtered by end date to keep yesterday/today/tomorrow.
    # This preserves periods that started day-before-yesterday but end yesterday.
    thresholds = TibberPricesThresholdConfig(
        threshold_low=threshold_low,
        threshold_high=threshold_high,
@@ -199,6 +209,26 @@ def calculate_periods(
        time=time,
    )

    # Step 8: Cross-day extension for late-night periods
    # If a best-price period ends near midnight and tomorrow has continued low prices,
    # extend the period across midnight to give users the full cheap window
    period_summaries = extend_periods_across_midnight(
        period_summaries,
        all_prices_sorted,
        price_context,
        time=time,
        reverse_sort=reverse_sort,
    )

    # Step 9: Filter superseded periods
    # When tomorrow data is available, late-night today periods that were found via
    # relaxation may be obsolete if tomorrow has significantly better alternatives
    period_summaries = filter_superseded_periods(
        period_summaries,
        time=time,
        reverse_sort=reverse_sort,
    )

    return {
        "periods": period_summaries,  # Lightweight summaries only
        "metadata": {


@@ -109,6 +109,11 @@ def check_interval_criteria(
"""
Check if interval meets flex and minimum distance criteria.
CRITICAL: This function works with NORMALIZED values (always positive):
- criteria.flex: Always positive (e.g., 0.20 for 20%)
- criteria.min_distance_from_avg: Always positive (e.g., 5.0 for 5%)
- criteria.reverse_sort: Determines direction (True=Peak, False=Best)
Args:
price: Interval price
criteria: Interval criteria (ref_price, avg_price, flex, etc.)
@@ -117,22 +122,54 @@ def check_interval_criteria(
Tuple of (in_flex, meets_min_distance)
"""
# Normalize inputs to absolute values for consistent calculation
flex_abs = abs(criteria.flex)
min_distance_abs = abs(criteria.min_distance_from_avg)
# ============================================================
# FLEX FILTER: Check if price is within flex threshold of reference
# ============================================================
# Reference price is:
# - Peak price (reverse_sort=True): daily MAXIMUM
# - Best price (reverse_sort=False): daily MINIMUM
#
# Flex band calculation (using absolute values):
# - Peak price: [max - max*flex, max] → accept prices near the maximum
# - Best price: [min, min + min*flex] → accept prices near the minimum
#
# Examples with flex=20%:
# - Peak: max=30 ct → accept [24, 30] ct (prices ≥ 24 ct)
# - Best: min=10 ct → accept [10, 12] ct (prices ≤ 12 ct)
if criteria.ref_price == 0:
# Zero reference: flex has no effect, use strict equality
in_flex = price == 0
else:
# Calculate flex amount using absolute value
flex_amount = abs(criteria.ref_price) * flex_abs
if criteria.reverse_sort:
# Peak price: accept prices >= (ref_price - flex_amount)
# Prices must be CLOSE TO or AT the maximum
flex_threshold = criteria.ref_price - flex_amount
in_flex = price >= flex_threshold
else:
# Best price: accept prices <= (ref_price + flex_amount)
# Accept ALL low prices up to the flex threshold, not just those >= minimum
# This ensures that if there are multiple low-price intervals, all that meet
# the threshold are included, regardless of whether they're before or after
# the daily minimum in the chronological sequence.
flex_threshold = criteria.ref_price + flex_amount
in_flex = price <= flex_threshold
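As a standalone sketch (hypothetical helper name, not the integration's actual API), the flex band check above reduces to:

```python
def in_flex_band(price: float, ref_price: float, flex: float, *, reverse_sort: bool) -> bool:
    """Sketch of the flex band check: peak accepts prices near the daily max,
    best accepts prices near the daily min."""
    if ref_price == 0:
        return price == 0  # zero reference: flex has no effect
    flex_amount = abs(ref_price) * abs(flex)
    if reverse_sort:
        return price >= ref_price - flex_amount  # peak: close to the maximum
    return price <= ref_price + flex_amount  # best: up to min + flex
```

With the example values from the comments (flex=20%): peak max=30 ct accepts 24–30 ct, best min=10 ct accepts 10–12 ct.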
# ============================================================
# MIN_DISTANCE FILTER: Check if price is far enough from average
# ============================================================
# CRITICAL: Adjust min_distance dynamically based on flex to prevent conflicts
# Problem: High flex (e.g., 50%) can conflict with fixed min_distance (e.g., 5%)
# Solution: When flex is high, reduce min_distance requirement proportionally
#
# At low flex (≤20%), use full min_distance (e.g., 5%)
# At high flex (≥40%), reduce min_distance to avoid over-filtering
# Linear interpolation between 20-40% flex range
adjusted_min_distance = min_distance_abs
if flex_abs > FLEX_SCALING_THRESHOLD:
# Scale down min_distance as flex increases
@@ -141,28 +178,30 @@ def check_interval_criteria(
# At 50% flex: multiplier = 0.25 (quarter min_distance)
flex_excess = flex_abs - 0.20 # How much above 20%
scale_factor = max(0.25, 1.0 - (flex_excess * 2.5)) # Linear reduction, min 25%
adjusted_min_distance = min_distance_abs * scale_factor
# Log adjustment at DEBUG level (only when significant reduction)
if scale_factor < SCALE_FACTOR_WARNING_THRESHOLD:
import logging # noqa: PLC0415
_LOGGER = logging.getLogger(f"{__name__}.details") # noqa: N806
_LOGGER.debug(
"High flex %.1f%% detected: Reducing min_distance %.1f%% → %.1f%% (scale %.2f)",
flex_abs * 100,
min_distance_abs,
adjusted_min_distance,
scale_factor,
)
# Calculate threshold from average (using normalized positive distance)
# - Peak price: threshold = avg * (1 + distance/100) → prices must be ABOVE avg+distance
# - Best price: threshold = avg * (1 - distance/100) → prices must be BELOW avg-distance
if criteria.reverse_sort:
# Peak: price must be >= avg * (1 + distance%)
min_distance_threshold = criteria.avg_price * (1 + adjusted_min_distance / 100)
meets_min_distance = price >= min_distance_threshold
else:
# Best: price must be <= avg * (1 - distance%)
min_distance_threshold = criteria.avg_price * (1 - adjusted_min_distance / 100)
meets_min_distance = price <= min_distance_threshold


@@ -15,18 +15,34 @@ Uses statistical methods:
from __future__ import annotations
import logging
from datetime import datetime
from typing import NamedTuple
from custom_components.tibber_prices.utils.price import calculate_coefficient_of_variation
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Outlier filtering constants
MIN_CONTEXT_SIZE = 3 # Minimum intervals needed before/after for analysis
VOLATILITY_THRESHOLD = 0.05 # 5% max relative std dev for zigzag detection
SYMMETRY_THRESHOLD = 1.5 # Max std dev difference for symmetric spike
RELATIVE_VOLATILITY_THRESHOLD = 2.0 # Window volatility vs context (cluster detection)
ASYMMETRY_TAIL_WINDOW = 6 # Skip asymmetry check for last ~1.5h (6 intervals) of available data
ZIGZAG_TAIL_WINDOW = 6 # Skip zigzag/cluster detection for last ~1.5h (6 intervals)
EXTREMES_PROTECTION_TOLERANCE = 0.001 # Protect prices within 0.1% of daily min/max from smoothing
# Adaptive confidence level constants
# Uses coefficient of variation (CV) from utils/price.py for consistency with volatility sensors
# On flat days (low CV), we're more conservative (higher confidence = fewer smoothed)
# On volatile days (high CV), we're more aggressive (lower confidence = more smoothed)
CONFIDENCE_LEVEL_MIN = 1.5 # Minimum confidence (volatile days: smooth more aggressively)
CONFIDENCE_LEVEL_MAX = 2.5 # Maximum confidence (flat days: smooth more conservatively)
CONFIDENCE_LEVEL_DEFAULT = 2.0 # Default: 95% confidence interval (2 std devs)
# CV thresholds for adaptive confidence (align with volatility sensor defaults)
# These are in percentage points (e.g., 10.0 = 10% CV)
DAILY_CV_LOW = 10.0 # ≤10% CV = flat day (use max confidence)
DAILY_CV_HIGH = 30.0 # ≥30% CV = volatile day (use min confidence)
# Module-local log indentation (each module starts at level 0)
INDENT_L0 = "" # All logs in this module (no indentation needed)
@@ -52,7 +68,7 @@ def _should_skip_tail_check(
) -> bool:
"""Return True when remaining intervals fall inside tail window and log why."""
if remaining_intervals < tail_window:
_LOGGER_DETAILS.debug(
"%sSpike at %s: Skipping %s check (only %d intervals remaining)",
INDENT_L0,
interval_label,
@@ -190,7 +206,7 @@ def _validate_spike_candidate(
context_diff_pct = abs(avg_after - avg_before) / avg_before if avg_before > 0 else 0
if context_diff_pct > candidate.flexibility_ratio:
_LOGGER_DETAILS.debug(
"%sInterval %s: Context unstable (%.1f%% change) - not a spike",
INDENT_L0,
candidate.current.get("startsAt", "unknown interval"),
@@ -204,7 +220,7 @@ def _validate_spike_candidate(
"asymmetry",
candidate.current.get("startsAt", "unknown interval"),
) and not _check_symmetry(avg_before, avg_after, candidate.stats["std_dev"]):
_LOGGER_DETAILS.debug(
"%sSpike at %s rejected: Asymmetric (before=%.2f, after=%.2f ct/kWh)",
INDENT_L0,
candidate.current.get("startsAt", "unknown interval"),
@@ -222,7 +238,7 @@ def _validate_spike_candidate(
return True
if _detect_zigzag_pattern(candidate.analysis_window, candidate.stats["std_dev"]):
_LOGGER_DETAILS.debug(
"%sSpike at %s rejected: Zigzag/cluster pattern detected",
INDENT_L0,
candidate.current.get("startsAt", "unknown interval"),
@@ -232,6 +248,166 @@ def _validate_spike_candidate(
return True
def _calculate_daily_extremes(intervals: list[dict]) -> dict[str, tuple[float, float]]:
"""
Calculate daily min/max prices for each day in the interval list.
These extremes are used to protect reference prices from being smoothed.
The daily minimum is the reference for best_price periods, and the daily
maximum is the reference for peak_price periods - smoothing these would
break period detection.
Args:
intervals: List of price intervals with 'startsAt' and 'total' keys
Returns:
Dict mapping date strings to (min_price, max_price) tuples
"""
daily_prices: dict[str, list[float]] = {}
for interval in intervals:
starts_at = interval.get("startsAt")
if starts_at is None:
continue
# Handle both datetime objects and ISO strings
dt = datetime.fromisoformat(starts_at) if isinstance(starts_at, str) else starts_at
date_key = dt.strftime("%Y-%m-%d")
price = float(interval["total"])
daily_prices.setdefault(date_key, []).append(price)
# Calculate min/max for each day
return {date_key: (min(prices), max(prices)) for date_key, prices in daily_prices.items()}
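A minimal, self-contained sketch of this grouping (ISO-string timestamps assumed; the sample data is hypothetical):

```python
from datetime import datetime

def daily_extremes(intervals: list[dict]) -> dict[str, tuple[float, float]]:
    """Group prices by local date and return per-day (min, max)."""
    by_day: dict[str, list[float]] = {}
    for iv in intervals:
        day = datetime.fromisoformat(iv["startsAt"]).strftime("%Y-%m-%d")
        by_day.setdefault(day, []).append(float(iv["total"]))
    return {day: (min(p), max(p)) for day, p in by_day.items()}


sample = [
    {"startsAt": "2026-03-28T00:00:00+01:00", "total": 0.30},
    {"startsAt": "2026-03-28T12:00:00+01:00", "total": 0.10},
    {"startsAt": "2026-03-29T00:00:00+01:00", "total": 0.25},
]
# daily_extremes(sample) → {"2026-03-28": (0.10, 0.30), "2026-03-29": (0.25, 0.25)}
```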
def _calculate_daily_cv(intervals: list[dict]) -> dict[str, float]:
"""
Calculate daily coefficient of variation (CV) for each day.
Uses the same CV calculation as volatility sensors for consistency.
CV = (std_dev / mean) * 100, expressed as percentage.
Used to adapt the confidence level for outlier detection:
- Flat days (low CV): Higher confidence → fewer false positives
- Volatile days (high CV): Lower confidence → catch more real outliers
Args:
intervals: List of price intervals with 'startsAt' and 'total' keys
Returns:
Dict mapping date strings to CV percentage (e.g., 15.0 for 15% CV)
"""
daily_prices: dict[str, list[float]] = {}
for interval in intervals:
starts_at = interval.get("startsAt")
if starts_at is None:
continue
dt = datetime.fromisoformat(starts_at) if isinstance(starts_at, str) else starts_at
date_key = dt.strftime("%Y-%m-%d")
price = float(interval["total"])
daily_prices.setdefault(date_key, []).append(price)
# Calculate CV using the shared function from utils/price.py
result = {}
for date_key, prices in daily_prices.items():
cv = calculate_coefficient_of_variation(prices)
result[date_key] = cv if cv is not None else 0.0
return result
def _get_adaptive_confidence_level(
interval: dict,
daily_cv: dict[str, float],
) -> float:
"""
Get adaptive confidence level based on daily coefficient of variation (CV).
Maps daily CV to confidence level:
- Low CV (≤10%): High confidence (2.5) → conservative, fewer smoothed
- High CV (≥30%): Low confidence (1.5) → aggressive, more smoothed
- Between: Linear interpolation
Uses the same CV calculation as volatility sensors for consistency.
Args:
interval: Price interval dict with 'startsAt' key
daily_cv: Dict from _calculate_daily_cv()
Returns:
Confidence level multiplier for std_dev threshold
"""
starts_at = interval.get("startsAt")
if starts_at is None:
return CONFIDENCE_LEVEL_DEFAULT
dt = datetime.fromisoformat(starts_at) if isinstance(starts_at, str) else starts_at
date_key = dt.strftime("%Y-%m-%d")
cv = daily_cv.get(date_key, 0.0)
# Linear interpolation between LOW and HIGH CV
# Low CV → high confidence (conservative)
# High CV → low confidence (aggressive)
if cv <= DAILY_CV_LOW:
return CONFIDENCE_LEVEL_MAX
if cv >= DAILY_CV_HIGH:
return CONFIDENCE_LEVEL_MIN
# Linear interpolation: as CV increases, confidence decreases
ratio = (cv - DAILY_CV_LOW) / (DAILY_CV_HIGH - DAILY_CV_LOW)
return CONFIDENCE_LEVEL_MAX - (ratio * (CONFIDENCE_LEVEL_MAX - CONFIDENCE_LEVEL_MIN))
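The CV-to-confidence mapping is plain linear interpolation; a sketch with the documented constants (shortened hypothetical names):

```python
DAILY_CV_LOW, DAILY_CV_HIGH = 10.0, 30.0  # flat vs volatile day thresholds
CONF_MIN, CONF_MAX = 1.5, 2.5             # aggressive vs conservative

def adaptive_confidence(cv: float) -> float:
    """Map a day's coefficient of variation to a std-dev multiplier."""
    if cv <= DAILY_CV_LOW:
        return CONF_MAX  # flat day: smooth conservatively
    if cv >= DAILY_CV_HIGH:
        return CONF_MIN  # volatile day: smooth aggressively
    ratio = (cv - DAILY_CV_LOW) / (DAILY_CV_HIGH - DAILY_CV_LOW)
    return CONF_MAX - ratio * (CONF_MAX - CONF_MIN)
```

A 20% CV day sits halfway between the thresholds and yields confidence 2.0, the former fixed default.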
def _is_daily_extreme(
interval: dict,
daily_extremes: dict[str, tuple[float, float]],
tolerance: float = EXTREMES_PROTECTION_TOLERANCE,
) -> bool:
"""
Check if an interval's price is at or very near a daily extreme.
Prices at daily extremes should never be smoothed because:
- Daily minimum is the reference for best_price period detection
- Daily maximum is the reference for peak_price period detection
- Smoothing these would cause periods to miss their most important intervals
Args:
interval: Price interval dict with 'startsAt' and 'total' keys
daily_extremes: Dict from _calculate_daily_extremes()
tolerance: Relative tolerance for matching (default 0.1%)
Returns:
True if the price is at or very near a daily min or max
"""
starts_at = interval.get("startsAt")
if starts_at is None:
return False
# Handle both datetime objects and ISO strings
dt = datetime.fromisoformat(starts_at) if isinstance(starts_at, str) else starts_at
date_key = dt.strftime("%Y-%m-%d")
if date_key not in daily_extremes:
return False
price = float(interval["total"])
daily_min, daily_max = daily_extremes[date_key]
# Check if price is within tolerance of daily min or max
# Using relative tolerance: |price - extreme| <= extreme * tolerance
min_threshold = daily_min * (1 + tolerance)
max_threshold = daily_max * (1 - tolerance)
return price <= min_threshold or price >= max_threshold
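The relative-tolerance check itself is small; a sketch against bare min/max values (hypothetical helper name):

```python
def is_extreme(price: float, daily_min: float, daily_max: float, tolerance: float = 0.001) -> bool:
    """True if price lies within `tolerance` (relative) of the daily min or max."""
    return price <= daily_min * (1 + tolerance) or price >= daily_max * (1 - tolerance)
```

With min=0.10 and max=0.30, both 0.10 and 0.2999 are protected while 0.20 is not.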
def filter_price_outliers(
intervals: list[dict],
flexibility_pct: float,
@@ -259,15 +435,29 @@ def filter_price_outliers(
Intervals with smoothed prices (marked with _smoothed flag)
"""
# Convert percentage to ratio once for all comparisons (e.g., 15.0 → 0.15)
flexibility_ratio = flexibility_pct / 100
# Calculate daily extremes to protect reference prices from smoothing
# Daily min is the reference for best_price, daily max for peak_price
daily_extremes = _calculate_daily_extremes(intervals)
# Calculate daily coefficient of variation (CV) for adaptive confidence levels
# Uses same CV calculation as volatility sensors for consistency
# Flat days → conservative smoothing, volatile days → aggressive smoothing
daily_cv = _calculate_daily_cv(intervals)
# Log CV info for debugging (CV is in percentage points, e.g., 15.0 = 15%)
cv_info = ", ".join(f"{date}: {cv:.1f}%" for date, cv in sorted(daily_cv.items()))
_LOGGER.info(
"%sSmoothing price outliers: %d intervals, flex=%.1f%%, daily CV: %s",
INDENT_L0,
len(intervals),
flexibility_pct,
cv_info,
)
protected_count = 0
result = []
smoothed_count = 0
@@ -275,6 +465,20 @@ def filter_price_outliers(
for i, current in enumerate(intervals):
current_price = current["total"]
# CRITICAL: Never smooth daily extremes - they are the reference prices!
# Smoothing the daily min would break best_price period detection,
# smoothing the daily max would break peak_price period detection.
if _is_daily_extreme(current, daily_extremes):
result.append(current)
protected_count += 1
_LOGGER_DETAILS.debug(
"%sProtected daily extreme at %s: %.2f ct/kWh (not smoothed)",
INDENT_L0,
current.get("startsAt", f"index {i}"),
current_price * 100,
)
continue
# Get context windows (3 intervals before and after)
context_before = intervals[max(0, i - MIN_CONTEXT_SIZE) : i]
context_after = intervals[i + 1 : min(len(intervals), i + 1 + MIN_CONTEXT_SIZE)]
@@ -296,8 +500,11 @@ def filter_price_outliers(
# Calculate how far current price deviates from expected
residual = abs(current_price - expected_price)
# Adaptive confidence level based on daily CV:
# - Flat days (low CV): higher confidence (2.5) → fewer false positives
# - Volatile days (high CV): lower confidence (1.5) → catch more real spikes
confidence_level = _get_adaptive_confidence_level(current, daily_cv)
tolerance = stats["std_dev"] * confidence_level
# Not a spike if within tolerance
if residual <= tolerance:
@@ -330,24 +537,23 @@ def filter_price_outliers(
result.append(smoothed)
smoothed_count += 1
_LOGGER_DETAILS.debug(
"%sSmoothed spike at %s: %.2f → %.2f ct/kWh (residual: %.2f, tolerance: %.2f, confidence: %.2f)",
INDENT_L0,
current.get("startsAt", f"index {i}"),
current_price * 100,
expected_price * 100,
residual * 100,
tolerance * 100,
confidence_level,
)
if smoothed_count > 0 or protected_count > 0:
_LOGGER.info(
"%sPrice outlier smoothing complete: %d smoothed, %d protected (daily extremes)",
INDENT_L0,
smoothed_count,
protected_count,
)
return result


@@ -3,13 +3,12 @@
from __future__ import annotations
import logging
from datetime import date, datetime, timedelta
from typing import TYPE_CHECKING, Any
from custom_components.tibber_prices.const import PRICE_LEVEL_MAPPING
if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
from .level_filtering import (
@@ -19,6 +18,7 @@ from .level_filtering import (
from .types import TibberPricesIntervalCriteria
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Module-local log indentation (each module starts at level 0)
INDENT_L0 = "" # Entry point / main function
@@ -65,9 +65,10 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
"""
Build periods, allowing periods to cross midnight (day boundary).
Periods can span multiple days. Each interval is evaluated against the reference
price (min/max) and average price of its own day. This ensures fair filtering
criteria even when periods cross midnight, where prices can jump significantly
due to different forecasting uncertainty (prices at day end vs. day start).
Args:
all_prices: All price data points
@@ -91,7 +92,7 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
level_filter_active = True
filter_direction = "≥" if reverse_sort else "≤"
gap_info = f", gap_tolerance={gap_count}" if gap_count > 0 else ""
_LOGGER_DETAILS.debug(
"%sLevel filter active: %s (order %s, require interval level %s filter level%s)",
INDENT_L0,
level_filter.upper(),
@@ -101,11 +102,10 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
)
else:
status = "RELAXED to ANY" if (level_filter and level_filter.lower() == "any") else "DISABLED (not configured)"
_LOGGER_DETAILS.debug("%sLevel filter: %s (accepting all levels)", INDENT_L0, status)
periods: list[list[dict]] = []
current_period: list[dict] = []
consecutive_gaps = 0 # Track consecutive intervals that deviate by 1 level step
intervals_checked = 0
intervals_filtered_by_level = 0
@@ -125,11 +125,14 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
intervals_checked += 1
# CRITICAL: Always use reference price from the interval's own day
# Each interval must meet the criteria of its own day, not the period start day.
# This ensures fair filtering even when periods cross midnight, where prices
# can jump significantly (last intervals of a day have more risk buffer than
# first intervals of next day, as they're set with different uncertainty levels).
ref_date = date_key
# Check flex and minimum distance criteria (using smoothed price and interval's own day reference)
criteria = TibberPricesIntervalCriteria(
ref_price=ref_prices[ref_date],
avg_price=avg_prices[ref_date],
@@ -164,10 +167,6 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
# Add to period if all criteria are met
if in_flex and meets_min_distance and meets_level:
current_period.append(
{
"interval_hour": starts_at.hour,
@@ -184,7 +183,6 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
# Criteria no longer met, end current period
periods.append(current_period)
current_period = []
consecutive_gaps = 0 # Reset gap counter
# Add final period if exists
@@ -193,14 +191,14 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
# Log detailed filter statistics
if intervals_checked > 0:
_LOGGER_DETAILS.debug(
"%sFilter statistics: %d intervals checked",
INDENT_L0,
intervals_checked,
)
if intervals_filtered_by_flex > 0:
flex_pct = (intervals_filtered_by_flex / intervals_checked) * 100
_LOGGER_DETAILS.debug(
"%s Filtered by FLEX (price too far from ref): %d/%d (%.1f%%)",
INDENT_L0,
intervals_filtered_by_flex,
@@ -209,7 +207,7 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
)
if intervals_filtered_by_min_distance > 0:
distance_pct = (intervals_filtered_by_min_distance / intervals_checked) * 100
_LOGGER_DETAILS.debug(
"%s Filtered by MIN_DISTANCE (price too close to avg): %d/%d (%.1f%%)",
INDENT_L0,
intervals_filtered_by_min_distance,
@@ -218,7 +216,7 @@ def build_periods( # noqa: PLR0913, PLR0915, PLR0912 - Complex period building
)
if level_filter_active and intervals_filtered_by_level > 0:
level_pct = (intervals_filtered_by_level / intervals_checked) * 100
_LOGGER_DETAILS.debug(
"%s Filtered by LEVEL (wrong price level): %d/%d (%.1f%%)",
INDENT_L0,
intervals_filtered_by_level,
@@ -249,19 +247,21 @@ def add_interval_ends(periods: list[list[dict]], *, time: TibberPricesTimeServic
def filter_periods_by_end_date(periods: list[list[dict]], *, time: TibberPricesTimeService) -> list[list[dict]]:
"""
Filter periods to keep only relevant ones for yesterday, today, and tomorrow.
Keep periods that:
- End yesterday or later (>= start of yesterday)
This removes:
- Periods that ended before yesterday (day-before-yesterday or earlier)
Rationale: Coordinator caches periods for yesterday/today/tomorrow so that:
- Binary sensors can filter for today+tomorrow (current/next periods)
- Services can access yesterday's periods when user requests "yesterday" data
"""
now = time.now()
# Calculate start of yesterday (midnight yesterday)
yesterday_start = time.start_of_local_day(now) - time.get_interval_duration() * 96 # 96 intervals = 24 hours
filtered = []
for period in periods:
@@ -275,13 +275,433 @@ def filter_periods_by_end_date(periods: list[list[dict]], *, time: TibberPricesT
if not period_end:
continue
# Keep if period ends yesterday or later
if period_end >= yesterday_start:
filtered.append(period)
return filtered
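The yesterday boundary can be sketched without the TimeService (assuming 15-minute intervals, so 96 per day; real timezone and DST handling lives in the service and is out of scope here):

```python
from datetime import datetime, timedelta

def yesterday_start(now: datetime) -> datetime:
    """Midnight at the start of yesterday, local-naive sketch."""
    midnight_today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight_today - timedelta(minutes=15) * 96  # 96 × 15 min = 24 h
```

For example, `yesterday_start(datetime(2026, 3, 29, 18, 59))` yields midnight on 2026-03-28.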
def _categorize_periods_for_supersession(
period_summaries: list[dict],
today: date,
tomorrow: date,
late_hour_threshold: int,
early_hour_limit: int,
) -> tuple[list[dict], list[dict], list[dict]]:
"""Categorize periods into today-late, tomorrow-early, and other."""
today_late: list[dict] = []
tomorrow_early: list[dict] = []
other: list[dict] = []
for period in period_summaries:
period_start = period.get("start")
period_end = period.get("end")
if not period_start or not period_end:
other.append(period)
# Today late-night periods: START today at or after late_hour_threshold (e.g., 20:00)
# Note: period_end could be tomorrow (e.g., 23:30-00:00 spans midnight)
elif period_start.date() == today and period_start.hour >= late_hour_threshold:
today_late.append(period)
# Tomorrow early-morning periods: START tomorrow before early_hour_limit (e.g., 08:00)
elif period_start.date() == tomorrow and period_start.hour < early_hour_limit:
tomorrow_early.append(period)
else:
other.append(period)
return today_late, tomorrow_early, other
def _filter_superseded_today_periods(
today_late_periods: list[dict],
best_tomorrow: dict,
best_tomorrow_price: float,
improvement_threshold: float,
) -> list[dict]:
"""Filter today periods that are superseded by a better tomorrow period."""
kept: list[dict] = []
for today_period in today_late_periods:
today_price = today_period.get("price_mean")
if today_price is None:
kept.append(today_period)
continue
# Calculate how much better tomorrow is (as percentage)
improvement_pct = ((today_price - best_tomorrow_price) / today_price * 100) if today_price > 0 else 0
_LOGGER.debug(
"Supersession check: Today %s-%s (%.4f) vs Tomorrow %s-%s (%.4f) = %.1f%% improvement (threshold: %.1f%%)",
today_period["start"].strftime("%H:%M"),
today_period["end"].strftime("%H:%M"),
today_price,
best_tomorrow["start"].strftime("%H:%M"),
best_tomorrow["end"].strftime("%H:%M"),
best_tomorrow_price,
improvement_pct,
improvement_threshold,
)
if improvement_pct >= improvement_threshold:
_LOGGER.info(
"Period superseded: Today %s-%s (%.2f) replaced by Tomorrow %s-%s (%.2f, %.1f%% better)",
today_period["start"].strftime("%H:%M"),
today_period["end"].strftime("%H:%M"),
today_price,
best_tomorrow["start"].strftime("%H:%M"),
best_tomorrow["end"].strftime("%H:%M"),
best_tomorrow_price,
improvement_pct,
)
else:
kept.append(today_period)
return kept
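The supersession decision reduces to the percentage computed above. A minimal sketch of that formula, using the 0.70 kr vs 0.50 kr example from the `filter_superseded_periods` docstring below (the helper name is illustrative, not part of the integration):

```python
def improvement_pct(today_price: float, tomorrow_price: float) -> float:
    """Percentage by which the tomorrow period undercuts the today period."""
    if today_price <= 0:
        return 0.0
    return (today_price - tomorrow_price) / today_price * 100

# Today 23:30-00:00 at 0.70 kr vs Tomorrow 04:00-05:30 at 0.50 kr
pct = improvement_pct(0.70, 0.50)  # ~28.6% better; supersedes if >= threshold
```

Whether the today period is dropped then depends only on comparing `pct` against `SUPERSESSION_PRICE_IMPROVEMENT_PCT`.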
def filter_superseded_periods(
period_summaries: list[dict],
*,
time: TibberPricesTimeService,
reverse_sort: bool,
) -> list[dict]:
"""
Filter out late-night today periods that are superseded by better tomorrow periods.
When tomorrow's data becomes available, some late-night periods that were found
through relaxation may no longer make sense. If tomorrow has a significantly
better period in the early morning, the late-night today period is obsolete.
Example:
- Today 23:30-00:00 at 0.70 kr (found via relaxation, was best available)
- Tomorrow 04:00-05:30 at 0.50 kr (much better alternative)
The today period is superseded and should be filtered out
This only applies to best-price periods (reverse_sort=False).
Peak-price periods are not filtered this way.
"""
from .types import ( # noqa: PLC0415
CROSS_DAY_LATE_PERIOD_START_HOUR,
CROSS_DAY_MAX_EXTENSION_HOUR,
SUPERSESSION_PRICE_IMPROVEMENT_PCT,
)
_LOGGER.debug(
"filter_superseded_periods called: %d periods, reverse_sort=%s",
len(period_summaries) if period_summaries else 0,
reverse_sort,
)
# Only filter for best-price periods
if reverse_sort or not period_summaries:
return period_summaries
now = time.now()
today = now.date()
tomorrow = today + timedelta(days=1)
# Categorize periods
today_late, tomorrow_early, other = _categorize_periods_for_supersession(
period_summaries,
today,
tomorrow,
CROSS_DAY_LATE_PERIOD_START_HOUR,
CROSS_DAY_MAX_EXTENSION_HOUR,
)
_LOGGER.debug(
"Supersession categorization: today_late=%d, tomorrow_early=%d, other=%d",
len(today_late),
len(tomorrow_early),
len(other),
)
# If no tomorrow early periods, nothing to compare against
if not tomorrow_early:
_LOGGER.debug("No tomorrow early periods - skipping supersession check")
return period_summaries
# Find the best tomorrow early period (lowest mean price)
best_tomorrow = min(tomorrow_early, key=lambda p: p.get("price_mean", float("inf")))
best_tomorrow_price = best_tomorrow.get("price_mean")
if best_tomorrow_price is None:
return period_summaries
# Filter superseded today periods
kept_today = _filter_superseded_today_periods(
today_late,
best_tomorrow,
best_tomorrow_price,
SUPERSESSION_PRICE_IMPROVEMENT_PCT,
)
# Reconstruct and sort by start time
result = other + kept_today + tomorrow_early
result.sort(key=lambda p: p.get("start") or time.now())
return result
def _is_period_eligible_for_extension(
period: dict,
today: date,
late_hour_threshold: int,
) -> bool:
"""
Check if a period is eligible for cross-day extension.
Eligibility criteria:
- Period has valid start and end times
- Period ends on today (not yesterday or tomorrow)
- Period ends late (after late_hour_threshold, e.g. 20:00)
"""
period_end = period.get("end")
period_start = period.get("start")
if not period_end or not period_start:
return False
if period_end.date() != today:
return False
return period_end.hour >= late_hour_threshold
def _find_extension_intervals(
period_end: datetime,
price_lookup: dict[str, dict],
criteria: Any,
max_extension_time: datetime,
interval_duration: timedelta,
) -> list[dict]:
"""
Find consecutive intervals after period_end that meet criteria.
Iterates forward from period_end, adding intervals while they
meet the flex and min_distance criteria. Stops at first failure
or when reaching max_extension_time.
"""
from .level_filtering import check_interval_criteria # noqa: PLC0415
extension_intervals: list[dict] = []
check_time = period_end
while check_time < max_extension_time:
price_data = price_lookup.get(check_time.isoformat())
if not price_data:
break # No more data
price = float(price_data["total"])
in_flex, meets_min_distance = check_interval_criteria(price, criteria)
if not (in_flex and meets_min_distance):
break # Criteria no longer met
extension_intervals.append(price_data)
check_time = check_time + interval_duration
return extension_intervals
def _collect_original_period_prices(
period_start: datetime,
period_end: datetime,
price_lookup: dict[str, dict],
interval_duration: timedelta,
) -> list[float]:
"""Collect prices from original period for CV calculation."""
prices: list[float] = []
current = period_start
while current < period_end:
price_data = price_lookup.get(current.isoformat())
if price_data:
prices.append(float(price_data["total"]))
current = current + interval_duration
return prices
def _build_extended_period(
period: dict,
extension_intervals: list[dict],
combined_prices: list[float],
combined_cv: float,
interval_duration: timedelta,
) -> dict:
"""Create extended period dict with updated statistics."""
period_start = period["start"]
period_end = period["end"]
new_end = period_end + (interval_duration * len(extension_intervals))
extended = period.copy()
extended["end"] = new_end
extended["duration_minutes"] = int((new_end - period_start).total_seconds() / 60)
extended["period_interval_count"] = len(combined_prices)
extended["cross_day_extended"] = True
extended["cross_day_extension_intervals"] = len(extension_intervals)
# Recalculate price statistics
extended["price_min"] = min(combined_prices)
extended["price_max"] = max(combined_prices)
extended["price_mean"] = sum(combined_prices) / len(combined_prices)
extended["price_spread"] = extended["price_max"] - extended["price_min"]
extended["price_coefficient_variation_%"] = round(combined_cv, 1)
return extended
def extend_periods_across_midnight(
period_summaries: list[dict],
all_prices: list[dict],
price_context: dict[str, Any],
*,
time: TibberPricesTimeService,
reverse_sort: bool,
) -> list[dict]:
"""
Extend late-night periods across midnight if favorable prices continue.
When a period ends close to midnight and tomorrow's data shows continued
favorable prices, extend the period into the next day. This prevents
artificial period breaks at midnight when it's actually better to continue.
Example: Best price period 22:00-23:45 today could extend to 04:00 tomorrow
if prices remain low overnight.
Rules:
- Only extends periods ending after CROSS_DAY_LATE_PERIOD_START_HOUR (20:00)
- Won't extend beyond CROSS_DAY_MAX_EXTENSION_HOUR (08:00) next day
- Extension must pass same flex criteria as original period
- Quality Gate (CV check) applies to extended period
Args:
period_summaries: List of period summary dicts (already processed)
all_prices: All price intervals including tomorrow
price_context: Dict with ref_prices, avg_prices, flex, min_distance_from_avg
time: Time service instance
reverse_sort: True for peak price, False for best price
Returns:
Updated list of period summaries with extensions applied
"""
from custom_components.tibber_prices.utils.price import calculate_coefficient_of_variation # noqa: PLC0415
from .types import ( # noqa: PLC0415
CROSS_DAY_LATE_PERIOD_START_HOUR,
CROSS_DAY_MAX_EXTENSION_HOUR,
PERIOD_MAX_CV,
TibberPricesIntervalCriteria,
)
if not period_summaries or not all_prices:
return period_summaries
# Build price lookup by timestamp
price_lookup: dict[str, dict] = {}
for price_data in all_prices:
interval_time = time.get_interval_time(price_data)
if interval_time:
price_lookup[interval_time.isoformat()] = price_data
ref_prices = price_context.get("ref_prices", {})
avg_prices = price_context.get("avg_prices", {})
flex = price_context.get("flex", 0.15)
min_distance = price_context.get("min_distance_from_avg", 0)
now = time.now()
today = now.date()
tomorrow = today + timedelta(days=1)
interval_duration = time.get_interval_duration()
# Max extension time (e.g., 08:00 tomorrow)
max_extension_time = time.start_of_local_day(now) + timedelta(days=1, hours=CROSS_DAY_MAX_EXTENSION_HOUR)
extended_summaries = []
for period in period_summaries:
# Check eligibility for extension
if not _is_period_eligible_for_extension(period, today, CROSS_DAY_LATE_PERIOD_START_HOUR):
extended_summaries.append(period)
continue
# Get tomorrow's reference prices
tomorrow_ref = ref_prices.get(tomorrow) or ref_prices.get(str(tomorrow))
tomorrow_avg = avg_prices.get(tomorrow) or avg_prices.get(str(tomorrow))
if tomorrow_ref is None or tomorrow_avg is None:
extended_summaries.append(period)
continue
# Set up criteria for extension check
criteria = TibberPricesIntervalCriteria(
ref_price=tomorrow_ref,
avg_price=tomorrow_avg,
flex=flex,
min_distance_from_avg=min_distance,
reverse_sort=reverse_sort,
)
# Find extension intervals
extension_intervals = _find_extension_intervals(
period["end"],
price_lookup,
criteria,
max_extension_time,
interval_duration,
)
if not extension_intervals:
extended_summaries.append(period)
continue
# Collect all prices for CV check
original_prices = _collect_original_period_prices(
period["start"],
period["end"],
price_lookup,
interval_duration,
)
extension_prices = [float(p["total"]) for p in extension_intervals]
combined_prices = original_prices + extension_prices
# Quality Gate: Check CV of extended period
combined_cv = calculate_coefficient_of_variation(combined_prices)
if combined_cv is not None and combined_cv <= PERIOD_MAX_CV:
# Extension passes quality gate
extended_period = _build_extended_period(
period,
extension_intervals,
combined_prices,
combined_cv,
interval_duration,
)
_LOGGER.info(
"Cross-day extension: Period %s-%s extended to %s (+%d intervals, CV=%.1f%%)",
period["start"].strftime("%H:%M"),
period["end"].strftime("%H:%M"),
extended_period["end"].strftime("%H:%M"),
len(extension_intervals),
combined_cv,
)
extended_summaries.append(extended_period)
else:
# Extension would exceed quality gate
_LOGGER_DETAILS.debug(
"%sCross-day extension rejected for period %s-%s: CV=%.1f%% > %.1f%%",
INDENT_L0,
period["start"].strftime("%H:%M"),
period["end"].strftime("%H:%M"),
combined_cv or 0,
PERIOD_MAX_CV,
)
extended_summaries.append(period)
return extended_summaries


@@ -9,6 +9,7 @@ if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Module-local log indentation (each module starts at level 0)
INDENT_L0 = ""  # Entry point / main function
@@ -16,6 +17,41 @@ INDENT_L1 = " " # Nested logic / loop iterations
INDENT_L2 = " "  # Deeper nesting
def _estimate_merged_cv(period1: dict, period2: dict) -> float | None:
"""
Estimate the CV of a merged period from two period summaries.
Since we don't have the raw prices, we estimate using the combined min/max range.
This is a conservative estimate - the actual CV could be higher or lower.
Formula: CV ≈ (range / sqrt(12)) / mean * 100
Where range = max - min, mean = (min + max) / 2
This approximation assumes roughly uniform distribution within the range.
"""
p1_min = period1.get("price_min")
p1_max = period1.get("price_max")
p2_min = period2.get("price_min")
p2_max = period2.get("price_max")
if None in (p1_min, p1_max, p2_min, p2_max):
return None
# Cast to float - None case handled above
combined_min = min(float(p1_min), float(p2_min)) # type: ignore[arg-type]
combined_max = max(float(p1_max), float(p2_max)) # type: ignore[arg-type]
if combined_min <= 0:
return None
combined_mean = (combined_min + combined_max) / 2
price_range = combined_max - combined_min
# CV estimate based on range (assuming uniform distribution)
# For uniform distribution: std_dev ≈ range / sqrt(12) ≈ range / 3.46
return (price_range / 3.46) / combined_mean * 100
def recalculate_period_metadata(periods: list[dict], *, time: TibberPricesTimeService) -> None:
"""
Recalculate period metadata after merging periods.
@@ -104,7 +140,7 @@ def merge_adjacent_periods(period1: dict, period2: dict) -> dict:
"period2_end": period2["end"].isoformat(),
}
_LOGGER_DETAILS.debug(
"%sMerged periods: %s-%s + %s-%s → %s-%s (duration: %d min)",
INDENT_L2,
period1["start"].strftime("%H:%M"),
@@ -119,6 +155,119 @@ def merge_adjacent_periods(period1: dict, period2: dict) -> dict:
return merged
def _check_merge_quality_gate(periods_to_merge: list[tuple[int, dict]], relaxed: dict) -> bool:
"""
Check if merging would create a period that's too heterogeneous.
Returns True if merge is allowed, False if blocked by Quality Gate.
"""
from .types import PERIOD_MAX_CV # noqa: PLC0415
relaxed_start = relaxed["start"]
relaxed_end = relaxed["end"]
for _idx, existing in periods_to_merge:
estimated_cv = _estimate_merged_cv(existing, relaxed)
if estimated_cv is not None and estimated_cv > PERIOD_MAX_CV:
_LOGGER.debug(
"Merge blocked by Quality Gate: %s-%s + %s-%s would have CV≈%.1f%% (max: %.1f%%)",
existing["start"].strftime("%H:%M"),
existing["end"].strftime("%H:%M"),
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
estimated_cv,
PERIOD_MAX_CV,
)
return False
return True
def _would_swallow_existing(relaxed: dict, existing_periods: list[dict]) -> bool:
"""
Check if the relaxed period would "swallow" any existing period.
A period is "swallowed" if the new relaxed period completely contains it.
In this case, we should NOT merge - the existing smaller period is more
homogeneous and should be preserved.
This prevents relaxation from replacing good small periods with larger,
more heterogeneous ones.
Returns:
True if any existing period would be swallowed (merge should be blocked)
False if safe to proceed with merge evaluation
"""
relaxed_start = relaxed["start"]
relaxed_end = relaxed["end"]
for existing in existing_periods:
existing_start = existing["start"]
existing_end = existing["end"]
# Check if relaxed completely contains existing
if relaxed_start <= existing_start and relaxed_end >= existing_end:
_LOGGER.debug(
"Blocking merge: %s-%s would swallow %s-%s (keeping smaller period)",
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
existing_start.strftime("%H:%M"),
existing_end.strftime("%H:%M"),
)
return True
return False
def _is_duplicate_period(relaxed: dict, existing_periods: list[dict], tolerance_seconds: int = 60) -> bool:
"""Check if relaxed period is a duplicate of any existing period."""
relaxed_start = relaxed["start"]
relaxed_end = relaxed["end"]
for existing in existing_periods:
if (
abs((relaxed_start - existing["start"]).total_seconds()) < tolerance_seconds
and abs((relaxed_end - existing["end"]).total_seconds()) < tolerance_seconds
):
_LOGGER_DETAILS.debug(
"%sSkipping duplicate period %s-%s (already exists)",
INDENT_L1,
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
)
return True
return False
def _find_adjacent_or_overlapping(relaxed: dict, existing_periods: list[dict]) -> list[tuple[int, dict]]:
"""Find all periods that are adjacent to or overlapping with the relaxed period."""
relaxed_start = relaxed["start"]
relaxed_end = relaxed["end"]
periods_to_merge = []
for idx, existing in enumerate(existing_periods):
existing_start = existing["start"]
existing_end = existing["end"]
# Check if adjacent (no gap) or overlapping
is_adjacent = relaxed_end == existing_start or relaxed_start == existing_end
is_overlapping = relaxed_start < existing_end and relaxed_end > existing_start
if is_adjacent or is_overlapping:
periods_to_merge.append((idx, existing))
_LOGGER_DETAILS.debug(
"%sPeriod %s-%s %s with existing period %s-%s",
INDENT_L1,
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
"overlaps" if is_overlapping else "is adjacent to",
existing_start.strftime("%H:%M"),
existing_end.strftime("%H:%M"),
)
return periods_to_merge
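The adjacency/overlap test above is the standard half-open interval intersection check plus an exact-touch case. A self-contained sketch (the `classify` helper is illustrative only):

```python
from datetime import datetime

def classify(relaxed_start, relaxed_end, existing_start, existing_end):
    """Mirror the merge test: adjacency is exact touching, overlap is strict intersection."""
    is_adjacent = relaxed_end == existing_start or relaxed_start == existing_end
    is_overlapping = relaxed_start < existing_end and relaxed_end > existing_start
    return is_adjacent, is_overlapping

def t(hour: int, minute: int = 0) -> datetime:
    return datetime(2026, 3, 29, hour, minute)

touching = classify(t(10), t(12), t(12), t(14))  # shared boundary at 12:00
overlap = classify(t(10), t(13), t(12), t(14))   # 12:00-13:00 in common
gap = classify(t(10), t(11), t(12), t(14))       # one-hour gap: neither
```

Note the strict `<`/`>` in the overlap test: periods that merely touch are classified as adjacent, not overlapping, which only matters for the log message since both cases trigger a merge.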
def resolve_period_overlaps(
existing_periods: list[dict],
new_relaxed_periods: list[dict],
@@ -129,6 +278,10 @@ def resolve_period_overlaps(
Adjacent or overlapping periods are merged into single continuous periods.
The newer period's relaxation attributes override the older period's.
Quality Gate: Merging is blocked if the combined period would have
an estimated CV above PERIOD_MAX_CV (25%), to prevent creating
periods with excessive internal price variation.
This function is called incrementally after each relaxation phase:
- Phase 1: existing = baseline, new = first relaxation
- Phase 2: existing = baseline + phase 1, new = second relaxation
@@ -144,7 +297,7 @@ def resolve_period_overlaps(
- new_periods_count: Number of new periods added (some may have been merged)
"""
_LOGGER_DETAILS.debug(
"%sresolve_period_overlaps called: existing=%d, new=%d",
INDENT_L0,
len(existing_periods),
@@ -166,74 +319,60 @@ def resolve_period_overlaps(
relaxed_end = relaxed["end"]
# Check if this period is duplicate (exact match within tolerance)
if _is_duplicate_period(relaxed, merged):
continue
# Check if this period would "swallow" an existing smaller period
# In that case, skip it - the smaller existing period is more homogeneous
if _would_swallow_existing(relaxed, merged):
continue
# Find periods that are adjacent or overlapping (should be merged)
periods_to_merge = _find_adjacent_or_overlapping(relaxed, merged)
if not periods_to_merge:
# No merge needed - add as new period
merged.append(relaxed)
periods_added += 1
_LOGGER_DETAILS.debug(
"%sAdded new period %s-%s (no overlap/adjacency)",
INDENT_L1,
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
)
continue
# Quality Gate: Check if merging would create a period that's too heterogeneous
should_merge = _check_merge_quality_gate(periods_to_merge, relaxed)
if not should_merge:
# Don't merge - add as separate period instead
merged.append(relaxed)
periods_added += 1
_LOGGER_DETAILS.debug(
"%sAdded new period %s-%s separately (merge blocked by CV gate)",
INDENT_L1,
relaxed_start.strftime("%H:%M"),
relaxed_end.strftime("%H:%M"),
)
continue
# Merge with all adjacent/overlapping periods
# Start with the new relaxed period
merged_period = relaxed.copy()
# Remove old periods (in reverse order to maintain indices)
for idx, existing in reversed(periods_to_merge):
merged_period = merge_adjacent_periods(existing, merged_period)
merged.pop(idx)
# Add the merged result
merged.append(merged_period)
# Count as added if we merged exactly one existing period
# (means we extended/merged, not replaced multiple)
if len(periods_to_merge) == 1:
periods_added += 1
# Sort all periods by start time
merged.sort(key=lambda p: p["start"])


@@ -14,15 +14,18 @@ if TYPE_CHECKING:
TibberPricesPeriodStatistics,
TibberPricesThresholdConfig,
)
from custom_components.tibber_prices.utils.average import calculate_median
from custom_components.tibber_prices.utils.price import (
aggregate_period_levels,
aggregate_period_ratings,
calculate_coefficient_of_variation,
calculate_volatility_level,
)
def calculate_period_price_diff(
price_mean: float,
start_time: datetime,
price_context: dict[str, Any],
) -> tuple[float | None, float | None]:
@@ -31,6 +34,11 @@ def calculate_period_price_diff(
Uses reference price from start day of the period for consistency.
Args:
price_mean: Mean price of the period (in base currency).
start_time: Start time of the period.
price_context: Dictionary with ref_prices per day.
Returns:
Tuple of (period_price_diff, period_price_diff_pct) or (None, None) if no reference available.
@@ -45,12 +53,14 @@ def calculate_period_price_diff(
if ref_price is None:
return None, None
# Both prices are in base currency, no conversion needed
ref_price_display = round(ref_price, 4)
period_price_diff = round(price_mean - ref_price_display, 4)
period_price_diff_pct = None
if ref_price_display != 0:
# CRITICAL: Use abs() for negative prices (same logic as calculate_difference_percentage)
# Example: avg=-10, ref=-20 → diff=10, pct=10/abs(-20)*100=+50% (correctly shows more expensive)
period_price_diff_pct = round((period_price_diff / abs(ref_price_display)) * 100, 2)
return period_price_diff, period_price_diff_pct
@@ -80,34 +90,44 @@ def calculate_aggregated_rating_difference(period_price_data: list[dict]) -> flo
return round(sum(differences) / len(differences), 2)
def calculate_period_price_statistics(
period_price_data: list[dict],
) -> dict[str, float]:
"""
Calculate price statistics for a period.
Args:
period_price_data: List of price data dictionaries with "total" field.
Returns:
Dictionary with price_mean, price_median, price_min, price_max, price_spread (all in base currency).
Note: price_spread is the absolute max - min range (in base currency).
""" """
prices_minor = [round(float(p["total"]) * 100, 2) for p in period_price_data] # Keep prices in base currency (Euro/NOK/SEK) for internal storage
# Conversion to display units (ct/øre) happens in services/formatting layer
factor = 1 # Always use base currency for storage
prices_display = [round(float(p["total"]) * factor, 4) for p in period_price_data]
if not prices_minor: if not prices_display:
return { return {
"price_avg": 0.0, "price_mean": 0.0,
"price_median": 0.0,
"price_min": 0.0, "price_min": 0.0,
"price_max": 0.0, "price_max": 0.0,
"price_spread": 0.0, "price_spread": 0.0,
} }
price_avg = round(sum(prices_minor) / len(prices_minor), 2) price_mean = round(sum(prices_display) / len(prices_display), 4)
price_min = round(min(prices_minor), 2) median_value = calculate_median(prices_display)
price_max = round(max(prices_minor), 2) price_median = round(median_value, 4) if median_value is not None else 0.0
price_spread = round(price_max - price_min, 2) price_min = round(min(prices_display), 4)
price_max = round(max(prices_display), 4)
price_spread = round(price_max - price_min, 4)
return { return {
"price_avg": price_avg, "price_mean": price_mean,
"price_median": price_median,
"price_min": price_min, "price_min": price_min,
"price_max": price_max, "price_max": price_max,
"price_spread": price_spread, "price_spread": price_spread,
@@ -119,6 +139,7 @@ def build_period_summary_dict(
stats: TibberPricesPeriodStatistics,
*,
reverse_sort: bool,
price_context: dict[str, Any] | None = None,
) -> dict:
"""
Build the complete period summary dictionary.
@@ -127,6 +148,7 @@ def build_period_summary_dict(
period_data: Period timing and position data
stats: Calculated period statistics
reverse_sort: True for peak price, False for best price (keyword-only)
price_context: Optional dict with ref_prices, avg_prices, intervals_by_day for day statistics
Returns:
Complete period summary dictionary following attribute ordering
@@ -143,10 +165,12 @@
"rating_level": stats.aggregated_rating,
"rating_difference_%": stats.rating_difference_pct,
# 3. Price statistics (how much does it cost?)
"price_mean": stats.price_mean,
"price_median": stats.price_median,
"price_min": stats.price_min,
"price_max": stats.price_max,
"price_spread": stats.price_spread,
"price_coefficient_variation_%": stats.coefficient_of_variation,
"volatility": stats.volatility,
# 4. Price differences will be added below if available
# 5. Detail information (additional context)
@@ -169,6 +193,30 @@
if stats.period_price_diff_pct is not None:
summary["period_price_diff_from_daily_min_%"] = stats.period_price_diff_pct
# Add day volatility and price statistics (for understanding midnight classification changes)
if price_context:
period_start_date = period_data.start_time.date()
intervals_by_day = price_context.get("intervals_by_day", {})
avg_prices = price_context.get("avg_prices", {})
day_intervals = intervals_by_day.get(period_start_date, [])
if day_intervals:
# Calculate day price statistics (in EUR major units from API)
day_prices = [float(p["total"]) for p in day_intervals]
day_min = min(day_prices)
day_max = max(day_prices)
day_span = day_max - day_min
day_avg = avg_prices.get(period_start_date, sum(day_prices) / len(day_prices))
# Calculate volatility percentage (span / avg * 100)
day_volatility_pct = round((day_span / day_avg * 100), 1) if day_avg > 0 else 0.0
# Convert to minor units (ct/øre) for consistency with other price attributes
summary["day_volatility_%"] = day_volatility_pct
summary["day_price_min"] = round(day_min * 100, 2)
summary["day_price_max"] = round(day_max * 100, 2)
summary["day_price_span"] = round(day_span * 100, 2)
return summary
@@ -185,7 +233,7 @@ def extract_period_summaries(
Returns sensor-ready period summaries with:
- Timestamps and positioning (start, end, hour, minute, time)
- Aggregated price statistics (price_mean, price_median, price_min, price_max, price_spread)
- Volatility categorization (low/moderate/high/very_high based on coefficient of variation)
- Rating difference percentage (aggregated from intervals)
- Period price differences (period_price_diff_from_daily_min/max)
@@ -195,11 +243,11 @@ def extract_period_summaries(
All data is pre-calculated and ready for display - no further processing needed.
Args:
periods: List of periods, where each period is a list of interval dictionaries.
all_prices: All price data from the API (enriched with level, difference, rating_level).
price_context: Dictionary with ref_prices and avg_prices per day.
thresholds: Threshold configuration for calculations.
time: TibberPricesTimeService instance (required).
"""
from .types import (  # noqa: PLC0415 - Avoid circular import
@ -257,18 +305,21 @@ def extract_period_summaries(
thresholds.threshold_high, thresholds.threshold_high,
) )
# Calculate price statistics (in minor units: ct/øre) # Calculate price statistics (in base currency, conversion happens in presentation layer)
price_stats = calculate_period_price_statistics(period_price_data) price_stats = calculate_period_price_statistics(period_price_data)
# Calculate period price difference from daily reference # Calculate period price difference from daily reference
period_price_diff, period_price_diff_pct = calculate_period_price_diff( period_price_diff, period_price_diff_pct = calculate_period_price_diff(
price_stats["price_avg"], start_time, price_context price_stats["price_mean"], start_time, price_context
) )
# Extract prices for volatility calculation (coefficient of variation) # Extract prices for volatility calculation (coefficient of variation)
prices_for_volatility = [float(p["total"]) for p in period_price_data if "total" in p] prices_for_volatility = [float(p["total"]) for p in period_price_data if "total" in p]
# Calculate volatility (categorical) and aggregated rating difference (numeric) # Calculate CV (numeric) for quality gate checks
period_cv = calculate_coefficient_of_variation(prices_for_volatility)
# Calculate volatility (categorical) using thresholds
volatility = calculate_volatility_level( volatility = calculate_volatility_level(
prices_for_volatility, prices_for_volatility,
threshold_moderate=thresholds.threshold_volatility_moderate, threshold_moderate=thresholds.threshold_volatility_moderate,
@ -296,17 +347,21 @@ def extract_period_summaries(
aggregated_level=aggregated_level, aggregated_level=aggregated_level,
aggregated_rating=aggregated_rating, aggregated_rating=aggregated_rating,
rating_difference_pct=rating_difference_pct, rating_difference_pct=rating_difference_pct,
price_avg=price_stats["price_avg"], price_mean=price_stats["price_mean"],
price_median=price_stats["price_median"],
price_min=price_stats["price_min"], price_min=price_stats["price_min"],
price_max=price_stats["price_max"], price_max=price_stats["price_max"],
price_spread=price_stats["price_spread"], price_spread=price_stats["price_spread"],
volatility=volatility, volatility=volatility,
coefficient_of_variation=round(period_cv, 1) if period_cv is not None else None,
period_price_diff=period_price_diff, period_price_diff=period_price_diff,
period_price_diff_pct=period_price_diff_pct, period_price_diff_pct=period_price_diff_pct,
) )
# Build complete period summary # Build complete period summary
summary = build_period_summary_dict(period_data, stats, reverse_sort=thresholds.reverse_sort) summary = build_period_summary_dict(
period_data, stats, reverse_sort=thresholds.reverse_sort, price_context=price_context
)
# Add smoothing information if any intervals benefited from smoothing # Add smoothing information if any intervals benefited from smoothing
if smoothed_impactful_count > 0: if smoothed_impactful_count > 0:

View file

@@ -11,7 +11,7 @@ if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService

-from .types import TibberPricesPeriodConfig
+from custom_components.tibber_prices.utils.price import calculate_coefficient_of_variation

 from .period_overlap import (
     recalculate_period_metadata,

@@ -21,9 +21,12 @@ from .types import (
     INDENT_L0,
     INDENT_L1,
     INDENT_L2,
+    PERIOD_MAX_CV,
+    TibberPricesPeriodConfig,
 )

 _LOGGER = logging.getLogger(__name__)
+_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")

 # Flex thresholds for warnings (see docs/development/period-calculation-theory.md)
 # With relaxation active, high base flex is counterproductive (reduces relaxation effectiveness)

@@ -31,35 +34,204 @@
 FLEX_WARNING_THRESHOLD_RELAXATION = 0.25  # 25% - INFO: suggest lowering to 15-2
 MAX_FLEX_HARD_LIMIT = 0.50  # 50% - hard maximum flex value
 FLEX_HIGH_THRESHOLD_RELAXATION = 0.30  # 30% - WARNING: base flex too high for relaxation mode

+# Min duration fallback constants
+# When all relaxation phases are exhausted and still no periods found,
+# gradually reduce min_period_length to find at least something
+MIN_DURATION_FALLBACK_MINIMUM = 30  # Minimum period length to try (30 min = 2 intervals)
+MIN_DURATION_FALLBACK_STEP = 15  # Reduce by 15 min (1 interval) each step
+
+
+def _check_period_quality(
+    period: dict, all_prices: list[dict], *, time: TibberPricesTimeService
+) -> tuple[bool, float | None]:
+    """
+    Check if a period passes the quality gate (internal CV not too high).
+
+    The Quality Gate prevents relaxation from creating periods with too much
+    internal price variation. A "best price period" with prices ranging from
+    0.5 to 1.0 kr/kWh is not useful - user can't trust it's actually "best".
+
+    Args:
+        period: Period summary dict with "start" and "end" datetime
+        all_prices: All price intervals (to look up prices for CV calculation)
+        time: Time service for interval time parsing
+
+    Returns:
+        Tuple of (passes_quality_gate, cv_value)
+        - passes_quality_gate: True if CV <= PERIOD_MAX_CV
+        - cv_value: Calculated CV as percentage, or None if not calculable
+
+    """
+    start_time = period.get("start")
+    end_time = period.get("end")
+    if not start_time or not end_time:
+        return True, None  # Can't check, assume OK
+
+    # Build lookup for prices
+    price_lookup: dict[str, float] = {}
+    for price_data in all_prices:
+        interval_time = time.get_interval_time(price_data)
+        if interval_time:
+            price_lookup[interval_time.isoformat()] = float(price_data["total"])
+
+    # Collect prices within the period
+    period_prices: list[float] = []
+    interval_duration = time.get_interval_duration()
+    current = start_time
+    while current < end_time:
+        price = price_lookup.get(current.isoformat())
+        if price is not None:
+            period_prices.append(price)
+        current = current + interval_duration
+
+    # Need at least 2 prices to calculate CV (same as MIN_PRICES_FOR_VOLATILITY in price.py)
+    min_prices_for_cv = 2
+    if len(period_prices) < min_prices_for_cv:
+        return True, None  # Too few prices to calculate CV
+
+    cv = calculate_coefficient_of_variation(period_prices)
+    if cv is None:
+        return True, None
+
+    passes = cv <= PERIOD_MAX_CV
+    return passes, cv
+def _count_quality_periods(
+    periods: list[dict],
+    all_prices: list[dict],
+    prices_by_day: dict[date, list[dict]],
+    min_periods: int,
+    *,
+    time: TibberPricesTimeService,
+) -> tuple[int, int]:
+    """
+    Count days meeting requirement when considering quality gate.
+
+    Only periods passing the quality gate (CV <= PERIOD_MAX_CV) are counted
+    towards meeting the min_periods requirement.
+
+    Args:
+        periods: List of all periods
+        all_prices: All price intervals
+        prices_by_day: Price intervals grouped by day
+        min_periods: Target periods per day
+        time: Time service
+
+    Returns:
+        Tuple of (days_meeting_requirement, total_quality_periods)
+
+    """
+    periods_by_day = group_periods_by_day(periods)
+    days_meeting_requirement = 0
+    total_quality_periods = 0
+
+    for day in sorted(prices_by_day.keys()):
+        day_periods = periods_by_day.get(day, [])
+        quality_count = 0
+        for period in day_periods:
+            passes, cv = _check_period_quality(period, all_prices, time=time)
+            if passes:
+                quality_count += 1
+            else:
+                _LOGGER_DETAILS.debug(
+                    "%s Day %s: Period %s-%s REJECTED by quality gate (CV=%.1f%% > %.1f%%)",
+                    INDENT_L2,
+                    day,
+                    period.get("start", "?").strftime("%H:%M") if hasattr(period.get("start"), "strftime") else "?",
+                    period.get("end", "?").strftime("%H:%M") if hasattr(period.get("end"), "strftime") else "?",
+                    cv or 0,
+                    PERIOD_MAX_CV,
+                )
+        total_quality_periods += quality_count
+        if quality_count >= min_periods:
+            days_meeting_requirement += 1
+
+    return days_meeting_requirement, total_quality_periods
+
+
 def group_periods_by_day(periods: list[dict]) -> dict[date, list[dict]]:
     """
-    Group periods by the day they end in.
+    Group periods by ALL days they span (including midnight crossings).

-    This ensures periods crossing midnight are counted towards the day they end,
-    not the day they start. Example: Period 23:00 yesterday - 02:00 today counts
-    as "today" since it ends today.
+    Periods crossing midnight are assigned to ALL affected days.
+    Example: Period 23:00 yesterday - 02:00 today appears in BOTH days.
+
+    This ensures that:
+    1. For min_periods checking: A midnight-crossing period counts towards both days
+    2. For binary sensors: Each day shows all relevant periods (including those starting/ending in other days)

     Args:
         periods: List of period summary dicts with "start" and "end" datetime

     Returns:
-        Dict mapping date to list of periods ending on that date
+        Dict mapping date to list of periods spanning that date

     """
     periods_by_day: dict[date, list[dict]] = {}
     for period in periods:
-        # Use end time for grouping so periods crossing midnight are counted
-        # towards the day they end (more relevant for min_periods check)
+        start_time = period.get("start")
         end_time = period.get("end")
-        if end_time:
-            day = end_time.date()
-            periods_by_day.setdefault(day, []).append(period)
+        if not start_time or not end_time:
+            continue
+
+        # Assign period to ALL days it spans
+        start_date = start_time.date()
+        end_date = end_time.date()
+
+        # Handle single-day and multi-day periods
+        current_date = start_date
+        while current_date <= end_date:
+            periods_by_day.setdefault(current_date, []).append(period)
+            # Move to next day
+            from datetime import timedelta  # noqa: PLC0415
+
+            current_date = current_date + timedelta(days=1)
+
     return periods_by_day
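The new grouping rule is small enough to restate standalone. This sketch mirrors the diff's logic (with the `timedelta` import hoisted to module level) and shows why a midnight-crossing period now appears under both calendar days:

```python
from datetime import date, datetime, timedelta


def group_periods_by_day(periods: list[dict]) -> dict[date, list[dict]]:
    """Assign each period to every calendar day between its start and end dates."""
    by_day: dict[date, list[dict]] = {}
    for period in periods:
        start, end = period.get("start"), period.get("end")
        if not start or not end:
            continue
        current = start.date()
        while current <= end.date():
            by_day.setdefault(current, []).append(period)
            current += timedelta(days=1)
    return by_day


# A period from 23:00 to 02:00 shows up under both days.
period = {"start": datetime(2026, 3, 28, 23, 0), "end": datetime(2026, 3, 29, 2, 0)}
grouped = group_periods_by_day([period])
```

Under the old end-date-only rule this period would have counted only for March 29; now both days see it for `min_periods` checks and binary sensors.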
+def mark_periods_with_relaxation(
+    periods: list[dict],
+    relaxation_level: str,
+    original_threshold: float,
+    applied_threshold: float,
+    *,
+    reverse_sort: bool = False,
+) -> None:
+    """
+    Mark periods with relaxation information (mutates period dicts in-place).
+
+    Uses consistent 'relaxation_*' prefix for all relaxation-related attributes.
+    These attributes are read by period_overlap.py and binary_sensor/attributes.py.
+
+    For Peak Price periods (reverse_sort=True), thresholds are stored as negative
+    values to match the user's configuration semantics (negative flex = below maximum).
+
+    Args:
+        periods: List of period dicts to mark
+        relaxation_level: String describing the relaxation level (e.g., "flex=18.0% +level_any")
+        original_threshold: Original flex threshold value (decimal, e.g., 0.15 for 15%)
+        applied_threshold: Actually applied threshold value (decimal, e.g., 0.18 for 18%)
+        reverse_sort: True for Peak Price (negative values), False for Best Price (positive values)
+
+    """
+    for period in periods:
+        period["relaxation_active"] = True
+        period["relaxation_level"] = relaxation_level
+        # Convert decimal to percentage for display
+        # For Peak Prices: Store as negative to match user's config semantics
+        sign = -1 if reverse_sort else 1
+        period["relaxation_threshold_original_%"] = round(original_threshold * 100 * sign, 1)
+        period["relaxation_threshold_applied_%"] = round(applied_threshold * 100 * sign, 1)
+
+
 def group_prices_by_day(all_prices: list[dict], *, time: TibberPricesTimeService) -> dict[date, list[dict]]:
     """
     Group price intervals by the day they belong to (today and future only).

@@ -86,89 +258,167 @@ def group_prices_by_day(all_prices: list[dict], *, time: TibberPricesTimeService
     return prices_by_day


-def check_min_periods_per_day(
-    periods: list[dict], min_periods: int, all_prices: list[dict], *, time: TibberPricesTimeService
-) -> bool:
-    """
-    Check if minimum periods requirement is met for each day individually.
-
-    Returns True if we should STOP relaxation (enough periods found per day).
-    Returns False if we should CONTINUE relaxation (not enough periods yet).
-
-    Args:
-        periods: List of period summary dicts
-        min_periods: Minimum number of periods required per day
-        all_prices: All available price intervals (used to determine which days have data)
-        time: TibberPricesTimeService instance (required)
-
-    Returns:
-        True if every day with price data has at least min_periods, False otherwise
-
-    """
-    if not periods:
-        return False  # No periods at all, continue relaxation
-
-    # Get all days that have price data (today and future only, not yesterday)
-    today = time.now().date()
-    available_days = set()
-    for price in all_prices:
-        starts_at = time.get_interval_time(price)
-        if starts_at:
-            price_date = starts_at.date()
-            # Only count today and future days (not yesterday)
-            if price_date >= today:
-                available_days.add(price_date)
-
-    if not available_days:
-        return False  # No price data for today/future, continue relaxation
-
-    # Group found periods by day
-    periods_by_day = group_periods_by_day(periods)
-
-    # Check each day with price data: ALL must have at least min_periods
-    for day in available_days:
-        day_periods = periods_by_day.get(day, [])
-        period_count = len(day_periods)
-        if period_count < min_periods:
-            _LOGGER.debug(
-                "Day %s has only %d periods (need %d) - continuing relaxation",
-                day,
-                period_count,
-                min_periods,
-            )
-            return False  # This day doesn't have enough, continue relaxation
-
-    # All days with price data have enough periods, stop relaxation
-    return True
+def _try_min_duration_fallback(
+    *,
+    config: TibberPricesPeriodConfig,
+    existing_periods: list[dict],
+    prices_by_day: dict[date, list[dict]],
+    time: TibberPricesTimeService,
+) -> tuple[dict[str, Any] | None, dict[str, Any]]:
+    """
+    Try reducing min_period_length to find periods when relaxation is exhausted.
+
+    This is a LAST RESORT mechanism. It only activates when:
+    1. All relaxation phases have been tried
+    2. Some days STILL have zero periods (not just below min_periods)
+
+    The fallback progressively reduces min_period_length:
+    - 60 min (default) → 45 min → 30 min (minimum)
+
+    It does NOT reduce below 30 min (2 intervals) because a single 15-min
+    interval is essentially just the daily min/max price - not a "period".
+
+    Args:
+        config: Period configuration
+        existing_periods: Periods found so far (from relaxation)
+        prices_by_day: Price intervals grouped by day
+        time: Time service instance
+
+    Returns:
+        Tuple of (result dict with periods, metadata dict) or (None, empty metadata)
+
+    """
+    from .core import calculate_periods  # noqa: PLC0415 - Avoid circular import
+
+    metadata: dict[str, Any] = {"phases_used": [], "fallback_active": False}
+
+    # Only try fallback if current min_period_length > minimum
+    if config.min_period_length <= MIN_DURATION_FALLBACK_MINIMUM:
+        return None, metadata
+
+    # Check which days have ZERO periods (not just below target)
+    existing_by_day = group_periods_by_day(existing_periods)
+    days_with_zero_periods = [day for day in prices_by_day if not existing_by_day.get(day)]
+
+    if not days_with_zero_periods:
+        _LOGGER_DETAILS.debug(
+            "%sMin duration fallback: All days have at least one period - no fallback needed",
+            INDENT_L1,
+        )
+        return None, metadata
+
+    _LOGGER.info(
+        "Min duration fallback: %d day(s) have zero periods, trying shorter min_period_length...",
+        len(days_with_zero_periods),
+    )
+
+    # Try progressively shorter min_period_length
+    current_min_duration = config.min_period_length
+    fallback_periods: list[dict] = []
+
+    while current_min_duration > MIN_DURATION_FALLBACK_MINIMUM:
+        current_min_duration = max(
+            current_min_duration - MIN_DURATION_FALLBACK_STEP,
+            MIN_DURATION_FALLBACK_MINIMUM,
+        )
+        _LOGGER_DETAILS.debug(
+            "%sTrying min_period_length=%d min for days with zero periods",
+            INDENT_L2,
+            current_min_duration,
+        )
+
+        # Create modified config with shorter min_period_length
+        # Use maxed-out flex (50%) since we're in fallback mode
+        fallback_config = TibberPricesPeriodConfig(
+            reverse_sort=config.reverse_sort,
+            flex=MAX_FLEX_HARD_LIMIT,  # Max flex
+            min_distance_from_avg=0,  # Disable min_distance in fallback
+            min_period_length=current_min_duration,
+            threshold_low=config.threshold_low,
+            threshold_high=config.threshold_high,
+            threshold_volatility_moderate=config.threshold_volatility_moderate,
+            threshold_volatility_high=config.threshold_volatility_high,
+            threshold_volatility_very_high=config.threshold_volatility_very_high,
+            level_filter=None,  # Disable level filter
+            gap_count=config.gap_count,
+        )
+
+        # Try to find periods for days with zero periods
+        for day in days_with_zero_periods:
+            day_prices = prices_by_day.get(day, [])
+            if not day_prices:
+                continue
+            try:
+                day_result = calculate_periods(
+                    day_prices,
+                    config=fallback_config,
+                    time=time,
+                )
+                day_periods = day_result.get("periods", [])
+                if day_periods:
+                    # Mark periods with fallback metadata
+                    for period in day_periods:
+                        period["duration_fallback_active"] = True
+                        period["duration_fallback_min_length"] = current_min_duration
+                        period["relaxation_active"] = True
+                        period["relaxation_level"] = f"duration_fallback={current_min_duration}min"
+                    fallback_periods.extend(day_periods)
+                    _LOGGER.info(
+                        "Min duration fallback: Found %d period(s) for %s at min_length=%d min",
+                        len(day_periods),
+                        day,
+                        current_min_duration,
+                    )
+            except (KeyError, ValueError, TypeError) as err:
+                _LOGGER.warning(
+                    "Error during min duration fallback for %s: %s",
+                    day,
+                    err,
+                )
+                continue
+
+        # If we found periods for all zero-period days, we can stop
+        if fallback_periods:
+            # Remove days that now have periods from the list
+            fallback_by_day = group_periods_by_day(fallback_periods)
+            days_with_zero_periods = [day for day in days_with_zero_periods if not fallback_by_day.get(day)]
+            if not days_with_zero_periods:
+                break
+
+    if fallback_periods:
+        # Merge with existing periods
+        # resolve_period_overlaps merges adjacent/overlapping periods
+        merged_periods, _new_count = resolve_period_overlaps(
+            existing_periods,
+            fallback_periods,
+        )
+        recalculate_period_metadata(merged_periods, time=time)
+
+        metadata["fallback_active"] = True
+        metadata["phases_used"] = [f"duration_fallback (min_length={current_min_duration}min)"]
+
+        _LOGGER.info(
+            "Min duration fallback complete: Added %d period(s), total now %d",
+            len(fallback_periods),
+            len(merged_periods),
+        )
+        return {"periods": merged_periods}, metadata

+    _LOGGER.warning(
+        "Min duration fallback: Still %d day(s) with zero periods after trying all durations",
+        len(days_with_zero_periods),
+    )
+    return None, metadata
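The duration ladder the fallback walks is easy to see in isolation. This sketch reproduces just the reduction loop from the function above, with the two constants taken from the diff:

```python
MIN_DURATION_FALLBACK_MINIMUM = 30  # from the diff: 2 x 15-min intervals
MIN_DURATION_FALLBACK_STEP = 15  # reduce by one interval per step


def fallback_durations(min_period_length: int) -> list[int]:
    """Durations the fallback will try, in order, never going below the minimum."""
    durations: list[int] = []
    current = min_period_length
    while current > MIN_DURATION_FALLBACK_MINIMUM:
        current = max(current - MIN_DURATION_FALLBACK_STEP, MIN_DURATION_FALLBACK_MINIMUM)
        durations.append(current)
    return durations


# A 60 min config tries 45 then 30; a 30 min config has nothing left to try,
# which is exactly the early-return guard at the top of _try_min_duration_fallback.
```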
-def mark_periods_with_relaxation(
-    periods: list[dict],
-    relaxation_level: str,
-    original_threshold: float,
-    applied_threshold: float,
-) -> None:
-    """
-    Mark periods with relaxation information (mutates period dicts in-place).
-
-    Uses consistent 'relaxation_*' prefix for all relaxation-related attributes.
-
-    Args:
-        periods: List of period dicts to mark
-        relaxation_level: String describing the relaxation level
-        original_threshold: Original flex threshold value (decimal, e.g., 0.19 for 19%)
-        applied_threshold: Actually applied threshold value (decimal, e.g., 0.25 for 25%)
-
-    """
-    for period in periods:
-        period["relaxation_active"] = True
-        period["relaxation_level"] = relaxation_level
-        # Convert decimal to percentage for display (0.19 → 19.0)
-        period["relaxation_threshold_original_%"] = round(original_threshold * 100, 1)
-        period["relaxation_threshold_applied_%"] = round(applied_threshold * 100, 1)
-
-
-def calculate_periods_with_relaxation(  # noqa: PLR0913, PLR0915 - Per-day relaxation requires many parameters and statements
+def calculate_periods_with_relaxation(  # noqa: PLR0912, PLR0913, PLR0915 - Per-day relaxation requires many parameters and branches
     all_prices: list[dict],
     *,
     config: TibberPricesPeriodConfig,

@@ -177,7 +427,8 @@ def calculate_periods_with_relaxation(
     max_relaxation_attempts: int,
     should_show_callback: Callable[[str | None], bool],
     time: TibberPricesTimeService,
-) -> tuple[dict[str, Any], dict[str, Any]]:
+    config_entry: Any,  # ConfigEntry type
+) -> dict[str, Any]:
     """
     Calculate periods with optional per-day filter relaxation.

@@ -201,18 +452,23 @@ def calculate_periods_with_relaxation(
         should_show_callback: Callback function(level_override) -> bool
             Returns True if periods should be shown with given filter overrides. Pass None
             to use original configured filter values.
-        time: TibberPricesTimeService instance (required)
+        time: TibberPricesTimeService instance (required).
+        config_entry: Config entry to get display unit configuration.

     Returns:
-        Tuple of (periods_result, relaxation_metadata):
-        - periods_result: Same format as calculate_periods() output, with periods from all days
-        - relaxation_metadata: Dict with relaxation information (aggregated across all days)
+        Dict with same format as calculate_periods() output:
+        - periods: List of period summaries
+        - metadata: Config and statistics (includes relaxation info)
+        - reference_data: Daily min/max/avg prices

     """
     # Import here to avoid circular dependency
     from .core import (  # noqa: PLC0415
         calculate_periods,
     )
+    from .period_building import (  # noqa: PLC0415
+        filter_superseded_periods,
+    )

     # Compact INFO-level summary
     period_type = "PEAK PRICE" if config.reverse_sort else "BEST PRICE"
@@ -235,63 +491,61 @@ def calculate_periods_with_relaxation(
     # Detailed DEBUG-level context header
     period_type_full = "PEAK PRICE (most expensive)" if config.reverse_sort else "BEST PRICE (cheapest)"
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
         "%s========== %s PERIODS ==========",
         INDENT_L0,
         period_type_full,
     )
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
         "%sRelaxation: %s",
         INDENT_L0,
         "ENABLED (user setting: ON)" if enable_relaxation else "DISABLED by user configuration",
     )
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
         "%sBase config: flex=%.1f%%, min_length=%d min",
         INDENT_L0,
         abs(config.flex) * 100,
         config.min_period_length,
     )
     if enable_relaxation:
-        _LOGGER.debug(
+        _LOGGER_DETAILS.debug(
             "%sRelaxation target: %d periods per day",
             INDENT_L0,
             min_periods,
         )
-        _LOGGER.debug(
+        _LOGGER_DETAILS.debug(
            "%sRelaxation strategy: 3%% fixed flex increment per step (%d flex levels x 2 filter combinations)",
            INDENT_L0,
            max_relaxation_attempts,
        )
-        _LOGGER.debug(
+        _LOGGER_DETAILS.debug(
            "%sEarly exit: After EACH filter combination when target reached",
            INDENT_L0,
        )
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
         "%s=============================================",
         INDENT_L0,
     )

-    # Validate we have price data for today/future
-    today = time.now().date()
-    future_prices = [
-        p
-        for p in all_prices
-        if (interval_time := time.get_interval_time(p)) is not None and interval_time.date() >= today
-    ]
-    if not future_prices:
-        # No price data for today/future
+    # Validate we have price data
+    if not all_prices:
         _LOGGER.warning(
-            "No price data available for today/future - cannot calculate periods",
+            "No price data available - cannot calculate periods",
         )
-        return {"periods": [], "metadata": {}, "reference_data": {}}, {
-            "relaxation_active": False,
-            "relaxation_attempted": False,
-            "min_periods_requested": min_periods if enable_relaxation else 0,
-            "periods_found": 0,
-        }
+        return {
+            "periods": [],
+            "metadata": {
+                "relaxation": {
+                    "relaxation_active": False,
+                    "relaxation_attempted": False,
+                    "min_periods_requested": min_periods if enable_relaxation else 0,
+                    "periods_found": 0,
+                },
+            },
+            "reference_data": {},
+        }

-    # Count available days for logging
+    # Count available days for logging (today and future only)
     prices_by_day = group_prices_by_day(all_prices, time=time)
     total_days = len(prices_by_day)

@@ -299,14 +553,16 @@ def calculate_periods_with_relaxation(
         "Calculating baseline periods for %d days...",
         total_days,
     )
-    _LOGGER.debug(
-        "%sProcessing ALL %d price intervals together (allows midnight crossing)",
+    _LOGGER_DETAILS.debug(
+        "%sProcessing ALL %d price intervals together (yesterday+today+tomorrow, allows midnight crossing)",
         INDENT_L1,
-        len(future_prices),
+        len(all_prices),
    )

-    # === BASELINE CALCULATION (process ALL prices together) ===
-    baseline_result = calculate_periods(future_prices, config=config, time=time)
+    # === BASELINE CALCULATION (process ALL prices together, including yesterday) ===
+    # Periods that ended before yesterday will be filtered out later by filter_periods_by_end_date()
+    # This keeps yesterday/today/tomorrow periods in the cache
+    baseline_result = calculate_periods(all_prices, config=config, time=time)
     all_periods = baseline_result["periods"]

     # Count periods per day for min_periods check
@ -317,7 +573,7 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
day_periods = periods_by_day.get(day, []) day_periods = periods_by_day.get(day, [])
period_count = len(day_periods) period_count = len(day_periods)
_LOGGER.debug( _LOGGER_DETAILS.debug(
"%sDay %s baseline: Found %d periods%s", "%sDay %s baseline: Found %d periods%s",
INDENT_L1, INDENT_L1,
day, day,
@ -334,7 +590,7 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
if enable_relaxation and days_meeting_requirement < total_days: if enable_relaxation and days_meeting_requirement < total_days:
# At least one day doesn't have enough periods # At least one day doesn't have enough periods
_LOGGER.debug( _LOGGER_DETAILS.debug(
"%sBaseline insufficient (%d/%d days met target) - starting relaxation", "%sBaseline insufficient (%d/%d days met target) - starting relaxation",
INDENT_L1, INDENT_L1,
days_meeting_requirement, days_meeting_requirement,
@ -342,15 +598,16 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
) )
relaxation_was_needed = True relaxation_was_needed = True
# Run relaxation on ALL prices together # Run relaxation on ALL prices together (including yesterday)
relaxed_result, relax_metadata = relax_all_prices( relaxed_result, relax_metadata = relax_all_prices(
all_prices=future_prices, all_prices=all_prices,
config=config, config=config,
min_periods=min_periods, min_periods=min_periods,
max_relaxation_attempts=max_relaxation_attempts, max_relaxation_attempts=max_relaxation_attempts,
should_show_callback=should_show_callback, should_show_callback=should_show_callback,
baseline_periods=all_periods, baseline_periods=all_periods,
time=time, time=time,
config_entry=config_entry,
) )
all_periods = relaxed_result["periods"] all_periods = relaxed_result["periods"]
@ -365,8 +622,39 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
period_count = len(day_periods) period_count = len(day_periods)
if period_count >= min_periods: if period_count >= min_periods:
days_meeting_requirement += 1 days_meeting_requirement += 1
# === MIN DURATION FALLBACK ===
# If still no periods after relaxation, try reducing min_period_length
# This is a last resort to ensure users always get SOME period
if days_meeting_requirement < total_days and config.min_period_length > MIN_DURATION_FALLBACK_MINIMUM:
_LOGGER.info(
"Relaxation incomplete (%d/%d days). Trying min_duration fallback...",
days_meeting_requirement,
total_days,
)
fallback_result, fallback_metadata = _try_min_duration_fallback(
config=config,
existing_periods=all_periods,
prices_by_day=prices_by_day,
time=time,
)
if fallback_result:
all_periods = fallback_result["periods"]
all_phases_used.extend(fallback_metadata.get("phases_used", []))
# Recount after fallback
periods_by_day = group_periods_by_day(all_periods)
days_meeting_requirement = 0
for day in sorted(prices_by_day.keys()):
day_periods = periods_by_day.get(day, [])
period_count = len(day_periods)
if period_count >= min_periods:
days_meeting_requirement += 1
elif enable_relaxation: elif enable_relaxation:
_LOGGER.debug( _LOGGER_DETAILS.debug(
"%sAll %d days met target with baseline - no relaxation needed", "%sAll %d days met target with baseline - no relaxation needed",
INDENT_L1, INDENT_L1,
total_days, total_days,
@ -378,13 +666,24 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
# Recalculate metadata for combined periods # Recalculate metadata for combined periods
recalculate_period_metadata(all_periods, time=time) recalculate_period_metadata(all_periods, time=time)
# Apply cross-day supersession filter (only for best-price periods)
# This removes late-night today periods that are superseded by better tomorrow alternatives
all_periods = filter_superseded_periods(
all_periods,
time=time,
reverse_sort=config.reverse_sort,
)
# Build final result # Build final result
final_result = baseline_result.copy() final_result = baseline_result.copy()
final_result["periods"] = all_periods final_result["periods"] = all_periods
total_periods = len(all_periods) total_periods = len(all_periods)
return final_result, { # Add relaxation info to metadata
if "metadata" not in final_result:
final_result["metadata"] = {}
final_result["metadata"]["relaxation"] = {
"relaxation_active": relaxation_was_needed, "relaxation_active": relaxation_was_needed,
"relaxation_attempted": relaxation_was_needed, "relaxation_attempted": relaxation_was_needed,
"min_periods_requested": min_periods, "min_periods_requested": min_periods,
@ -395,6 +694,8 @@ def calculate_periods_with_relaxation( # noqa: PLR0913, PLR0915 - Per-day relax
"relaxation_incomplete": days_meeting_requirement < total_days, "relaxation_incomplete": days_meeting_requirement < total_days,
} }
return final_result
 def relax_all_prices(  # noqa: PLR0913 - Comprehensive filter relaxation requires many parameters and statements
     all_prices: list[dict],
@@ -405,22 +706,25 @@ def relax_all_prices(  # noqa: PLR0913 - Comprehensive filter relaxation require
     baseline_periods: list[dict],
     *,
     time: TibberPricesTimeService,
+    config_entry: Any,  # ConfigEntry type
 ) -> tuple[dict[str, Any], dict[str, Any]]:
     """
     Relax filters for all prices until min_periods per day is reached.

     Strategy: Try increasing flex by 3% increments, then relax level filter.
-    Processes all prices together, allowing periods to cross midnight boundaries.
-    Returns when ALL days have min_periods (or max attempts exhausted).
+    Processes all prices together (yesterday+today+tomorrow), allowing periods
+    to cross midnight boundaries. Returns when ALL days have min_periods
+    (or max attempts exhausted).

     Args:
-        all_prices: All price intervals (today + future)
-        config: Base period configuration
-        min_periods: Target number of periods PER DAY
-        max_relaxation_attempts: Maximum flex levels to try
-        should_show_callback: Callback to check if a flex level should be shown
-        baseline_periods: Baseline periods (before relaxation)
-        time: TibberPricesTimeService instance
+        all_prices: All price intervals (yesterday+today+tomorrow).
+        config: Base period configuration.
+        min_periods: Target number of periods PER DAY.
+        max_relaxation_attempts: Maximum flex levels to try.
+        should_show_callback: Callback to check if a flex level should be shown.
+        baseline_periods: Baseline periods (before relaxation).
+        time: TibberPricesTimeService instance.
+        config_entry: Config entry to get display unit configuration.

     Returns:
         Tuple of (result_dict, metadata_dict)
@@ -448,7 +752,7 @@ def relax_all_prices(  # noqa: PLR0913 - Comprehensive filter relaxation require
     # Stop if we exceed hard maximum
     if current_flex > MAX_FLEX_HARD_LIMIT:
-        _LOGGER.debug(
+        _LOGGER_DETAILS.debug(
             "%s Reached 50%% flex hard limit",
             INDENT_L2,
         )
@@ -462,20 +766,21 @@ def relax_all_prices(  # noqa: PLR0913 - Comprehensive filter relaxation require
     # Try current flex with level="any" (in relaxation mode)
     if original_level_filter != "any":
-        _LOGGER.debug(
+        _LOGGER_DETAILS.debug(
             "%s Flex=%.1f%%: OVERRIDING level_filter: %s → ANY",
             INDENT_L2,
             current_flex * 100,
             original_level_filter,
         )
+    # NOTE: config.flex is already normalized to positive by get_period_config()
     relaxed_config = config._replace(
-        flex=current_flex if config.flex >= 0 else -current_flex,
+        flex=current_flex,  # Already positive from normalization
        level_filter="any",
    )
    phase_label_full = f"flex={current_flex * 100:.1f}% +level_any"
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
        "%s Trying %s: config has %d intervals (all days together), level_filter=%s",
        INDENT_L2,
        phase_label_full,
@@ -487,39 +792,36 @@ def relax_all_prices(  # noqa: PLR0913 - Comprehensive filter relaxation require
    result = calculate_periods(all_prices, config=relaxed_config, time=time)
    new_periods = result["periods"]
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
        "%s %s: calculate_periods returned %d periods",
        INDENT_L2,
        phase_label_full,
        len(new_periods),
    )
+    # Mark newly found periods with relaxation metadata BEFORE merging
+    mark_periods_with_relaxation(
+        new_periods,
+        relaxation_level=phase_label_full,
+        original_threshold=base_flex,
+        applied_threshold=current_flex,
+        reverse_sort=config.reverse_sort,
+    )
    # Resolve overlaps between existing and new periods
    combined, standalone_count = resolve_period_overlaps(
        existing_periods=existing_periods,
        new_relaxed_periods=new_periods,
    )
-    # Count periods per day to check if requirement met
-    periods_by_day = group_periods_by_day(combined)
-    days_meeting_requirement = 0
-    for day in sorted(prices_by_day.keys()):
-        day_periods = periods_by_day.get(day, [])
-        period_count = len(day_periods)
-        if period_count >= min_periods:
-            days_meeting_requirement += 1
-        _LOGGER.debug(
-            "%s Day %s: %d periods%s",
-            INDENT_L2,
-            day,
-            period_count,
-            "" if period_count >= min_periods else f" (need {min_periods})",
-        )
+    # Count periods per day with QUALITY GATE check
+    # Only periods with CV <= PERIOD_MAX_CV count towards min_periods requirement
+    days_meeting_requirement, quality_period_count = _count_quality_periods(
+        combined, all_prices, prices_by_day, min_periods, time=time
+    )
    total_periods = len(combined)
-    _LOGGER.debug(
+    _LOGGER_DETAILS.debug(
        "%s %s: found %d periods total, %d/%d days meet requirement",
        INDENT_L2,
        phase_label_full,


@@ -15,6 +15,24 @@ from custom_components.tibber_prices.const import (
     DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH,
 )
+# Quality Gate: Maximum coefficient of variation (CV) allowed within a period
+# Periods with internal CV above this are considered too heterogeneous for "best price"
+# A 25% CV means the std dev is 25% of the mean - beyond this, prices vary too much
+# Example: Period with prices 0.7-0.99 kr has ~15% CV, which is acceptable;
+#          a period with prices 0.5-1.0 kr has ~30% CV and would be rejected
+PERIOD_MAX_CV = 25.0  # 25% max coefficient of variation within a period
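The quality gate above can be illustrated with a small standalone sketch. This is not the integration's `_count_quality_periods` helper; the plain list-of-floats input is an assumption for illustration:

```python
from statistics import mean, pstdev

PERIOD_MAX_CV = 25.0  # mirrors the constant above


def period_cv(prices: list[float]) -> float:
    """Coefficient of variation of a period's prices, as a percentage."""
    return 100.0 * pstdev(prices) / mean(prices)


def passes_quality_gate(prices: list[float]) -> bool:
    """True when the period is homogeneous enough to count as a 'best price' period."""
    return period_cv(prices) <= PERIOD_MAX_CV


tight = [0.70, 0.85, 0.99]  # prices cluster around the mean -> accepted
wide = [0.50, 0.75, 1.00]   # prices span a 2x range -> rejected
```

This matches the worked numbers in the comment: the tight period lands around 14% CV, the wide one around 27%.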
+# Cross-Day Extension: Time window constants
+# When a period ends late in the day and tomorrow data is available,
+# we can extend it past midnight if prices remain favorable
+CROSS_DAY_LATE_PERIOD_START_HOUR = 20  # Consider periods starting at 20:00 or later for extension
+CROSS_DAY_MAX_EXTENSION_HOUR = 8  # Don't extend beyond 08:00 next day (covers typical night low)
+# Cross-Day Supersession: When tomorrow data arrives, late-night periods that are
+# worse than early-morning tomorrow periods become obsolete
+# A today period is "superseded" if tomorrow has a significantly better alternative
+SUPERSESSION_PRICE_IMPROVEMENT_PCT = 10.0  # Tomorrow must be at least 10% cheaper to supersede
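A hedged sketch of the supersession rule. The real comparison logic is not shown in this diff; the averaged-price inputs below are assumptions chosen to make the threshold concrete:

```python
SUPERSESSION_PRICE_IMPROVEMENT_PCT = 10.0  # mirrors the constant above


def is_superseded(today_avg: float, tomorrow_avg: float) -> bool:
    """A late-night today period is obsolete if tomorrow's alternative is >= 10% cheaper."""
    improvement_pct = (today_avg - tomorrow_avg) / today_avg * 100.0
    return improvement_pct >= SUPERSESSION_PRICE_IMPROVEMENT_PCT
```

For example, a tomorrow period averaging 0.85 kr supersedes a today period averaging 1.00 kr (15% improvement), while 0.95 kr (5%) does not.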
 # Log indentation levels for visual hierarchy
 INDENT_L0 = ""  # Top level (calculate_periods_with_relaxation)
 INDENT_L1 = " "  # Per-day loop
@@ -56,11 +74,13 @@ class TibberPricesPeriodStatistics(NamedTuple):
     aggregated_level: str | None
     aggregated_rating: str | None
     rating_difference_pct: float | None
-    price_avg: float
+    price_mean: float
+    price_median: float
     price_min: float
     price_max: float
     price_spread: float
     volatility: str
+    coefficient_of_variation: float | None  # CV as percentage (e.g., 15.0 for 15%)
     period_price_diff: float | None
     period_price_diff_pct: float | None


@@ -13,16 +13,17 @@ from typing import TYPE_CHECKING, Any
 from custom_components.tibber_prices import const as _const
 if TYPE_CHECKING:
-    from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from collections.abc import Callable
+    from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from homeassistant.config_entries import ConfigEntry
+from .helpers import get_intervals_for_day_offsets
 from .period_handlers import (
     TibberPricesPeriodConfig,
     calculate_periods_with_relaxation,
 )
-if TYPE_CHECKING:
-    from homeassistant.config_entries import ConfigEntry
 _LOGGER = logging.getLogger(__name__)
@@ -33,6 +34,7 @@ class TibberPricesPeriodCalculator:
         self,
         config_entry: ConfigEntry,
         log_prefix: str,
+        get_config_override_fn: Callable[[str, str], Any | None] | None = None,
     ) -> None:
         """Initialize the period calculator."""
         self.config_entry = config_entry
@@ -40,11 +42,40 @@
         self.time: TibberPricesTimeService  # Set by coordinator before first use
         self._config_cache: dict[str, dict[str, Any]] | None = None
         self._config_cache_valid = False
+        self._get_config_override = get_config_override_fn
         # Period calculation cache
         self._cached_periods: dict[str, Any] | None = None
         self._last_periods_hash: str | None = None
+    def _get_option(
+        self,
+        config_key: str,
+        config_section: str,
+        default: Any,
+    ) -> Any:
+        """
+        Get a config option, checking overrides first.
+
+        Args:
+            config_key: The configuration key
+            config_section: The section in options (e.g., "flexibility_settings")
+            default: Default value if not set
+
+        Returns:
+            Override value if set, otherwise options value, otherwise default
+        """
+        # Check overrides first
+        if self._get_config_override is not None:
+            override = self._get_config_override(config_key, config_section)
+            if override is not None:
+                return override
+        # Fall back to options
+        section = self.config_entry.options.get(config_section, {})
+        return section.get(config_key, default)
     def _log(self, level: str, message: str, *args: object, **kwargs: object) -> None:
         """Log with calculator-specific prefix."""
         prefixed_message = f"{self._log_prefix} {message}"
@@ -64,7 +95,7 @@
         Compute hash of price data and config for period calculation caching.

         Only includes data that affects period calculation:
-        - Today's interval timestamps and enriched rating levels
+        - All interval timestamps and enriched rating levels (yesterday/today/tomorrow)
         - Period calculation config (flex, min_distance, min_period_length)
         - Level filter overrides
@@ -72,9 +103,20 @@
             Hash string for cache key comparison.
         """
-        # Get relevant price data
-        today = price_info.get("today", [])
-        today_signature = tuple((interval.get("startsAt"), interval.get("rating_level")) for interval in today)
+        # Get today and tomorrow intervals for hash calculation
+        # CRITICAL: Only today+tomorrow needed in hash because:
+        # 1. Midnight: "today" startsAt changes → cache invalidates
+        # 2. Tomorrow arrival: "tomorrow" startsAt changes from None → cache invalidates
+        # 3. Yesterday/day-before-yesterday are static (rating_levels don't change retroactively)
+        # 4. Using first startsAt as representative (changes → entire day changed)
+        coordinator_data = {"priceInfo": price_info}
+        today_intervals = get_intervals_for_day_offsets(coordinator_data, [0])
+        tomorrow_intervals = get_intervals_for_day_offsets(coordinator_data, [1])
+        # Use first startsAt of each day as representative for the entire day's data
+        # If a day is empty, use None (detects data availability changes)
+        today_start = today_intervals[0].get("startsAt") if today_intervals else None
+        tomorrow_start = tomorrow_intervals[0].get("startsAt") if tomorrow_intervals else None
         # Get period configs (both best and peak)
         best_config = self.get_period_config(reverse_sort=False)
@@ -82,12 +124,14 @@
         # Get level filter overrides from options
         options = self.config_entry.options
-        best_level_filter = options.get(_const.CONF_BEST_PRICE_MAX_LEVEL, _const.DEFAULT_BEST_PRICE_MAX_LEVEL)
-        peak_level_filter = options.get(_const.CONF_PEAK_PRICE_MIN_LEVEL, _const.DEFAULT_PEAK_PRICE_MIN_LEVEL)
+        period_settings = options.get("period_settings", {})
+        best_level_filter = period_settings.get(_const.CONF_BEST_PRICE_MAX_LEVEL, _const.DEFAULT_BEST_PRICE_MAX_LEVEL)
+        peak_level_filter = period_settings.get(_const.CONF_PEAK_PRICE_MIN_LEVEL, _const.DEFAULT_PEAK_PRICE_MIN_LEVEL)
         # Compute hash from all relevant data
         hash_data = (
-            today_signature,
+            today_start,  # Representative for today's data (changes at midnight)
+            tomorrow_start,  # Representative for tomorrow's data (changes when data arrives)
             tuple(best_config.items()),
             tuple(peak_config.items()),
             best_level_filter,
@@ -100,7 +144,7 @@
         Get period calculation configuration from config options.

         Uses cached config to avoid multiple options.get() calls.
-        Cache is invalidated when config_entry.options change.
+        Cache is invalidated when config_entry.options change or override entities update.
         """
         cache_key = "peak" if reverse_sort else "best"
@@ -112,45 +156,72 @@
         if self._config_cache is None:
             self._config_cache = {}
-        options = self.config_entry.options
-        data = self.config_entry.data
+        # Get config values, checking overrides first
+        # CRITICAL: Best/Peak price settings are stored in nested sections:
+        # - period_settings: min_period_length, max_level, gap_count
+        # - flexibility_settings: flex, min_distance_from_avg
+        # Override entities can override any of these values at runtime
         if reverse_sort:
             # Peak price configuration
-            flex = options.get(
-                _const.CONF_PEAK_PRICE_FLEX, data.get(_const.CONF_PEAK_PRICE_FLEX, _const.DEFAULT_PEAK_PRICE_FLEX)
+            flex = self._get_option(
+                _const.CONF_PEAK_PRICE_FLEX,
+                "flexibility_settings",
+                _const.DEFAULT_PEAK_PRICE_FLEX,
             )
-            min_distance_from_avg = options.get(
+            min_distance_from_avg = self._get_option(
                 _const.CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG,
-                data.get(_const.CONF_PEAK_PRICE_MIN_DISTANCE_FROM_AVG, _const.DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG),
+                "flexibility_settings",
+                _const.DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG,
             )
-            min_period_length = options.get(
+            min_period_length = self._get_option(
                 _const.CONF_PEAK_PRICE_MIN_PERIOD_LENGTH,
-                data.get(_const.CONF_PEAK_PRICE_MIN_PERIOD_LENGTH, _const.DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH),
+                "period_settings",
+                _const.DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH,
             )
         else:
             # Best price configuration
-            flex = options.get(
-                _const.CONF_BEST_PRICE_FLEX, data.get(_const.CONF_BEST_PRICE_FLEX, _const.DEFAULT_BEST_PRICE_FLEX)
+            flex = self._get_option(
+                _const.CONF_BEST_PRICE_FLEX,
+                "flexibility_settings",
+                _const.DEFAULT_BEST_PRICE_FLEX,
             )
-            min_distance_from_avg = options.get(
+            min_distance_from_avg = self._get_option(
                 _const.CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG,
-                data.get(_const.CONF_BEST_PRICE_MIN_DISTANCE_FROM_AVG, _const.DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG),
+                "flexibility_settings",
+                _const.DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG,
             )
-            min_period_length = options.get(
+            min_period_length = self._get_option(
                 _const.CONF_BEST_PRICE_MIN_PERIOD_LENGTH,
-                data.get(_const.CONF_BEST_PRICE_MIN_PERIOD_LENGTH, _const.DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH),
+                "period_settings",
+                _const.DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH,
             )
         # Convert flex from percentage to decimal (e.g., 5 -> 0.05)
+        # CRITICAL: Normalize to absolute value for internal calculations
+        # User-facing values use sign convention:
+        # - Best price: positive (e.g., +15% above minimum)
+        # - Peak price: negative (e.g., -20% below maximum)
+        # Internal calculations always use positive values with reverse_sort flag
         try:
-            flex = float(flex) / 100
+            flex = abs(float(flex)) / 100  # Always positive internally
         except (TypeError, ValueError):
-            flex = _const.DEFAULT_BEST_PRICE_FLEX / 100 if not reverse_sort else _const.DEFAULT_PEAK_PRICE_FLEX / 100
+            flex = (
+                abs(_const.DEFAULT_BEST_PRICE_FLEX) / 100
+                if not reverse_sort
+                else abs(_const.DEFAULT_PEAK_PRICE_FLEX) / 100
+            )
+        # CRITICAL: Normalize min_distance_from_avg to absolute value
+        # User-facing values use sign convention:
+        # - Best price: negative (e.g., -5% below average)
+        # - Peak price: positive (e.g., +5% above average)
+        # Internal calculations always use positive values with reverse_sort flag
+        min_distance_from_avg_normalized = abs(float(min_distance_from_avg))
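The sign-convention normalization can be sketched as a standalone function. The helper name is hypothetical; the behavior mirrors the try/except conversion described above:

```python
def normalize_flex(raw: object, default_pct: float) -> float:
    # User-facing flex is a signed percentage (+15 for best price, -20 for peak price);
    # internal calculations always use a positive fraction plus the reverse_sort flag.
    try:
        return abs(float(raw)) / 100  # noqa: type coercion is intentional for a sketch
    except (TypeError, ValueError):
        # Malformed input falls back to the (also sign-normalized) default.
        return abs(default_pct) / 100
```

Keeping the fraction always positive means downstream comparisons never need to branch on the sign; only `reverse_sort` decides whether the window hangs off the minimum or the maximum.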
         config = {
             "flex": flex,
-            "min_distance_from_avg": float(min_distance_from_avg),
+            "min_distance_from_avg": min_distance_from_avg_normalized,
             "min_period_length": int(min_period_length),
         }
@@ -329,13 +400,14 @@
         # Normal check failed - try splitting at gap clusters as fallback
         # Get minimum period length from config (convert minutes to intervals)
+        period_settings = self.config_entry.options.get("period_settings", {})
         if reverse_sort:
-            min_period_minutes = self.config_entry.options.get(
+            min_period_minutes = period_settings.get(
                 _const.CONF_PEAK_PRICE_MIN_PERIOD_LENGTH,
                 _const.DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH,
             )
         else:
-            min_period_minutes = self.config_entry.options.get(
+            min_period_minutes = period_settings.get(
                 _const.CONF_BEST_PRICE_MIN_PERIOD_LENGTH,
                 _const.DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH,
             )
@@ -460,13 +532,15 @@
         # Get appropriate config based on sensor type
         elif reverse_sort:
             # Peak price: minimum level filter (lower bound)
-            level_config = self.config_entry.options.get(
+            period_settings = self.config_entry.options.get("period_settings", {})
+            level_config = period_settings.get(
                 _const.CONF_PEAK_PRICE_MIN_LEVEL,
                 _const.DEFAULT_PEAK_PRICE_MIN_LEVEL,
             )
         else:
             # Best price: maximum level filter (upper bound)
-            level_config = self.config_entry.options.get(
+            period_settings = self.config_entry.options.get("period_settings", {})
+            level_config = period_settings.get(
                 _const.CONF_BEST_PRICE_MAX_LEVEL,
                 _const.DEFAULT_BEST_PRICE_MAX_LEVEL,
             )
@@ -475,20 +549,23 @@
         if level_config == "any":
             return True
-        # Get today's intervals
-        today_intervals = price_info.get("today", [])
+        # Get today's intervals from flat list
+        # Build minimal coordinator_data structure for get_intervals_for_day_offsets
+        coordinator_data = {"priceInfo": price_info}
+        today_intervals = get_intervals_for_day_offsets(coordinator_data, [0])
         if not today_intervals:
             return True  # If no data, don't filter
         # Get gap tolerance configuration
+        period_settings = self.config_entry.options.get("period_settings", {})
         if reverse_sort:
-            max_gap_count = self.config_entry.options.get(
+            max_gap_count = period_settings.get(
                 _const.CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
                 _const.DEFAULT_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
             )
         else:
-            max_gap_count = self.config_entry.options.get(
+            max_gap_count = period_settings.get(
                 _const.CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
                 _const.DEFAULT_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
             )
@@ -539,12 +616,14 @@
         self._log("debug", "Calculating periods (cache miss or hash mismatch)")
-        yesterday_prices = price_info.get("yesterday", [])
-        today_prices = price_info.get("today", [])
-        tomorrow_prices = price_info.get("tomorrow", [])
-        all_prices = yesterday_prices + today_prices + tomorrow_prices
+        # Get all intervals at once (day before yesterday + yesterday + today + tomorrow)
+        # CRITICAL: 4 days ensure stable historical period calculations
+        # (periods calculated today for yesterday match periods calculated yesterday)
+        coordinator_data = {"priceInfo": price_info}
+        all_prices = get_intervals_for_day_offsets(coordinator_data, [-2, -1, 0, 1])
-        # Get rating thresholds from config
+        # Get rating thresholds from config (flat in options, not in sections)
+        # CRITICAL: Price rating thresholds are stored FLAT in options (no sections)
         threshold_low = self.config_entry.options.get(
             _const.CONF_PRICE_RATING_THRESHOLD_LOW,
             _const.DEFAULT_PRICE_RATING_THRESHOLD_LOW,
@@ -554,7 +633,8 @@
             _const.DEFAULT_PRICE_RATING_THRESHOLD_HIGH,
         )
-        # Get volatility thresholds from config
+        # Get volatility thresholds from config (flat in options, not in sections)
+        # CRITICAL: Volatility thresholds are stored FLAT in options (no sections)
         threshold_volatility_moderate = self.config_entry.options.get(
             _const.CONF_VOLATILITY_THRESHOLD_MODERATE,
             _const.DEFAULT_VOLATILITY_THRESHOLD_MODERATE,
@@ -569,8 +649,11 @@
         )
         # Get relaxation configuration for best price
-        enable_relaxation_best = self.config_entry.options.get(
+        # CRITICAL: Relaxation settings are stored in nested section 'relaxation_and_target_periods'
+        # Override entities can override any of these values at runtime
+        enable_relaxation_best = self._get_option(
             _const.CONF_ENABLE_MIN_PERIODS_BEST,
+            "relaxation_and_target_periods",
             _const.DEFAULT_ENABLE_MIN_PERIODS_BEST,
         )
@@ -581,25 +664,30 @@
             show_best_price = bool(all_prices)
         else:
             show_best_price = self.should_show_periods(price_info, reverse_sort=False) if all_prices else False
-        min_periods_best = self.config_entry.options.get(
+        min_periods_best = self._get_option(
             _const.CONF_MIN_PERIODS_BEST,
+            "relaxation_and_target_periods",
             _const.DEFAULT_MIN_PERIODS_BEST,
         )
-        relaxation_attempts_best = self.config_entry.options.get(
+        relaxation_attempts_best = self._get_option(
             _const.CONF_RELAXATION_ATTEMPTS_BEST,
+            "relaxation_and_target_periods",
             _const.DEFAULT_RELAXATION_ATTEMPTS_BEST,
         )
         # Calculate best price periods (or return empty if filtered)
         if show_best_price:
             best_config = self.get_period_config(reverse_sort=False)
-            # Get level filter configuration
-            max_level_best = self.config_entry.options.get(
+            # Get level filter configuration from period_settings section
+            # CRITICAL: max_level and gap_count are stored in nested section 'period_settings'
+            max_level_best = self._get_option(
                 _const.CONF_BEST_PRICE_MAX_LEVEL,
+                "period_settings",
                 _const.DEFAULT_BEST_PRICE_MAX_LEVEL,
             )
-            gap_count_best = self.config_entry.options.get(
+            gap_count_best = self._get_option(
                 _const.CONF_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
+                "period_settings",
                 _const.DEFAULT_BEST_PRICE_MAX_LEVEL_GAP_COUNT,
             )
             best_period_config = TibberPricesPeriodConfig(
@@ -615,7 +703,7 @@
                 level_filter=max_level_best,
                 gap_count=gap_count_best,
             )
-            best_periods, best_relaxation = calculate_periods_with_relaxation(
+            best_periods = calculate_periods_with_relaxation(
                 all_prices,
                 config=best_period_config,
                 enable_relaxation=enable_relaxation_best,
@@ -627,18 +715,26 @@
                     level_override=lvl,
                 ),
                 time=self.time,
+                config_entry=self.config_entry,
             )
         else:
             best_periods = {
                 "periods": [],
                 "intervals": [],
-                "metadata": {"total_intervals": 0, "total_periods": 0, "config": {}},
+                "metadata": {
+                    "total_intervals": 0,
+                    "total_periods": 0,
+                    "config": {},
+                    "relaxation": {"relaxation_active": False, "relaxation_attempted": False},
+                },
             }
-            best_relaxation = {"relaxation_active": False, "relaxation_attempted": False}
         # Get relaxation configuration for peak price
-        enable_relaxation_peak = self.config_entry.options.get(
+        # CRITICAL: Relaxation settings are stored in nested section 'relaxation_and_target_periods'
+        # Override entities can override any of these values at runtime
+        enable_relaxation_peak = self._get_option(
             _const.CONF_ENABLE_MIN_PERIODS_PEAK,
+            "relaxation_and_target_periods",
             _const.DEFAULT_ENABLE_MIN_PERIODS_PEAK,
         )
@@ -649,25 +745,30 @@
             show_peak_price = bool(all_prices)
         else:
             show_peak_price = self.should_show_periods(price_info, reverse_sort=True) if all_prices else False
-        min_periods_peak = self.config_entry.options.get(
+        min_periods_peak = self._get_option(
             _const.CONF_MIN_PERIODS_PEAK,
+            "relaxation_and_target_periods",
             _const.DEFAULT_MIN_PERIODS_PEAK,
         )
-        relaxation_attempts_peak = self.config_entry.options.get(
+        relaxation_attempts_peak = self._get_option(
             _const.CONF_RELAXATION_ATTEMPTS_PEAK,
+            "relaxation_and_target_periods",
             _const.DEFAULT_RELAXATION_ATTEMPTS_PEAK,
         )
         # Calculate peak price periods (or return empty if filtered)
         if show_peak_price:
             peak_config = self.get_period_config(reverse_sort=True)
-            # Get level filter configuration
-            min_level_peak = self.config_entry.options.get(
+            # Get level filter configuration from period_settings section
+            # CRITICAL: min_level and gap_count are stored in nested section 'period_settings'
+            min_level_peak = self._get_option(
                 _const.CONF_PEAK_PRICE_MIN_LEVEL,
+                "period_settings",
                 _const.DEFAULT_PEAK_PRICE_MIN_LEVEL,
             )
-            gap_count_peak = self.config_entry.options.get(
+            gap_count_peak = self._get_option(
                 _const.CONF_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
+                "period_settings",
                 _const.DEFAULT_PEAK_PRICE_MAX_LEVEL_GAP_COUNT,
             )
             peak_period_config = TibberPricesPeriodConfig(
peak_periods, peak_relaxation = calculate_periods_with_relaxation( peak_periods = calculate_periods_with_relaxation(
all_prices, all_prices,
config=peak_period_config, config=peak_period_config,
enable_relaxation=enable_relaxation_peak, enable_relaxation=enable_relaxation_peak,
@ -695,20 +796,23 @@ class TibberPricesPeriodCalculator:
level_override=lvl, level_override=lvl,
), ),
time=self.time, time=self.time,
config_entry=self.config_entry,
) )
else: else:
peak_periods = { peak_periods = {
"periods": [], "periods": [],
"intervals": [], "intervals": [],
"metadata": {"total_intervals": 0, "total_periods": 0, "config": {}}, "metadata": {
"total_intervals": 0,
"total_periods": 0,
"config": {},
"relaxation": {"relaxation_active": False, "relaxation_attempted": False},
},
} }
peak_relaxation = {"relaxation_active": False, "relaxation_attempted": False}
result = { result = {
"best_price": best_periods, "best_price": best_periods,
"best_price_relaxation": best_relaxation,
"peak_price": peak_periods, "peak_price": peak_periods,
"peak_price_relaxation": peak_relaxation,
} }
# Cache the result # Cache the result


@@ -0,0 +1,631 @@
"""
Price data management for the coordinator.
This module manages all price-related data for the Tibber Prices integration:
**User Data** (fetched directly via API):
- Home metadata (name, address, timezone)
- Account info (subscription status)
- Currency settings
- Refreshed daily (24h interval)
**Price Data** (fetched via IntervalPool):
- Quarter-hourly price intervals
- Yesterday/today/tomorrow coverage
- The IntervalPool handles actual API fetching, deduplication, and caching
- This manager coordinates the data flow and user data refresh
Data flow:
    Tibber API → IntervalPool → PriceDataManager → Coordinator → Sensors
                 (actual fetching)   (orchestration + user data)
Note: Price data is NOT cached in this module - IntervalPool is the single
source of truth. This module only caches user_data for daily refresh cycle.
"""
from __future__ import annotations
import logging
from datetime import timedelta
from typing import TYPE_CHECKING, Any
from custom_components.tibber_prices.api import (
TibberPricesApiClientAuthenticationError,
TibberPricesApiClientCommunicationError,
TibberPricesApiClientError,
)
from homeassistant.exceptions import ConfigEntryAuthFailed
from homeassistant.helpers.update_coordinator import UpdateFailed
from . import cache, helpers
if TYPE_CHECKING:
from collections.abc import Callable
from datetime import datetime
from custom_components.tibber_prices.api import TibberPricesApiClient
from custom_components.tibber_prices.interval_pool import TibberPricesIntervalPool
from .time_service import TibberPricesTimeService
_LOGGER = logging.getLogger(__name__)
# Hour when Tibber publishes tomorrow's prices (around 13:00 local time)
# Before this hour, requesting tomorrow data will always fail → wasted API call
TOMORROW_DATA_AVAILABLE_HOUR = 13
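A minimal sketch of the gate this constant enables. The function name and signature are assumptions for illustration; the real decision lives in `should_fetch_tomorrow_data`, whose body is not shown here:

```python
from datetime import datetime

TOMORROW_DATA_AVAILABLE_HOUR = 13  # mirrors the constant above


def worth_requesting_tomorrow(local_now: datetime, have_tomorrow: bool) -> bool:
    # Before ~13:00 local time Tibber has not published tomorrow's prices,
    # so requesting them would be a guaranteed-failed (wasted) API call.
    return not have_tomorrow and local_now.hour >= TOMORROW_DATA_AVAILABLE_HOUR
```

Once tomorrow's intervals are already in the pool, the gate also suppresses redundant fetches for the rest of the day.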
class TibberPricesPriceDataManager:
"""
Manages price and user data for the coordinator.
Responsibilities:
- User data: Fetches directly via API, validates, caches with persistence
- Price data: Coordinates with IntervalPool (which does actual API fetching)
- Cache management: Loads/stores both data types to HA persistent storage
- Update decisions: Determines when fresh data is needed
Note: Despite the name, this class does NOT do the actual price fetching.
The IntervalPool handles API calls, deduplication, and interval management.
This class orchestrates WHEN to fetch and processes the results.
"""
def __init__( # noqa: PLR0913
self,
api: TibberPricesApiClient,
store: Any,
log_prefix: str,
user_update_interval: timedelta,
time: TibberPricesTimeService,
home_id: str,
interval_pool: TibberPricesIntervalPool,
) -> None:
"""
Initialize the price data manager.
Args:
api: API client for direct requests (user data only).
store: Home Assistant storage for persistence.
log_prefix: Prefix for log messages (e.g., "[Home Name]").
user_update_interval: How often to refresh user data (default: 1 day).
time: TimeService for time operations.
home_id: Home ID this manager is responsible for.
interval_pool: IntervalPool for price data (handles actual fetching).
"""
self.api = api
self._store = store
self._log_prefix = log_prefix
self._user_update_interval = user_update_interval
self.time: TibberPricesTimeService = time
self.home_id = home_id
self._interval_pool = interval_pool
# Cached data (user data only - price data is in IntervalPool)
self._cached_user_data: dict[str, Any] | None = None
self._last_user_update: datetime | None = None
def _log(self, level: str, message: str, *args: object, **kwargs: object) -> None:
"""Log with coordinator-specific prefix."""
prefixed_message = f"{self._log_prefix} {message}"
getattr(_LOGGER, level)(prefixed_message, *args, **kwargs)
async def load_cache(self) -> None:
"""Load cached user data from storage (price data is in IntervalPool)."""
cache_data = await cache.load_cache(self._store, self._log_prefix, time=self.time)
self._cached_user_data = cache_data.user_data
self._last_user_update = cache_data.last_user_update
def should_fetch_tomorrow_data(
self,
current_price_info: list[dict[str, Any]] | None,
) -> bool:
"""
Determine if tomorrow's data should be requested from the API.
This is the key intelligence that prevents API spam:
- Tibber publishes tomorrow's prices around 13:00 each day
- Before 13:00, requesting tomorrow data will always fail → wasted API call
- If we already have tomorrow data, no need to request it again
The decision logic:
1. Before 13:00 local time → Don't fetch (data not available yet)
2. After 13:00 AND tomorrow data already present → Don't fetch (already have it)
3. After 13:00 AND tomorrow data missing → Fetch (data should be available)
Args:
current_price_info: List of price intervals from current coordinator data.
Used to check if tomorrow data already exists.
Returns:
True if tomorrow data should be requested, False otherwise.
"""
now = self.time.now()
now_local = self.time.as_local(now)
current_hour = now_local.hour
# Before TOMORROW_DATA_AVAILABLE_HOUR - tomorrow data not available yet
if current_hour < TOMORROW_DATA_AVAILABLE_HOUR:
self._log("debug", "Before %d:00 - not requesting tomorrow data", TOMORROW_DATA_AVAILABLE_HOUR)
return False
# After TOMORROW_DATA_AVAILABLE_HOUR - check if we already have tomorrow data
if current_price_info:
has_tomorrow = self.has_tomorrow_data(current_price_info)
if has_tomorrow:
self._log(
"debug", "After %d:00 but already have tomorrow data - not requesting", TOMORROW_DATA_AVAILABLE_HOUR
)
return False
self._log("debug", "After %d:00 and tomorrow data missing - will request", TOMORROW_DATA_AVAILABLE_HOUR)
return True
# No current data - request tomorrow data if after TOMORROW_DATA_AVAILABLE_HOUR
self._log(
"debug", "After %d:00 with no current data - will request tomorrow data", TOMORROW_DATA_AVAILABLE_HOUR
)
return True
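The gating rules above reduce to a small pure function. A minimal, framework-free sketch (hypothetical `should_fetch_tomorrow` helper; the real method reads the clock via TimeService and logs each branch):

```python
from datetime import datetime

TOMORROW_DATA_AVAILABLE_HOUR = 13  # Tibber publishes tomorrow's prices around 13:00

def should_fetch_tomorrow(now_local: datetime, has_tomorrow: bool) -> bool:
    """Request tomorrow's prices only after 13:00 local time and only if missing."""
    if now_local.hour < TOMORROW_DATA_AVAILABLE_HOUR:
        return False  # Rule 1: data is not published yet, a request would be wasted
    return not has_tomorrow  # Rules 2+3: after 13:00, fetch only when data is absent
```

Treating "no current data" the same as "tomorrow missing" matches the method above: with an empty coordinator, `has_tomorrow` is simply False.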
async def store_cache(self, last_midnight_check: datetime | None = None) -> None:
"""Store cache data (user metadata only, price data is in IntervalPool)."""
cache_data = cache.TibberPricesCacheData(
user_data=self._cached_user_data,
last_user_update=self._last_user_update,
last_midnight_check=last_midnight_check,
)
await cache.save_cache(self._store, cache_data, self._log_prefix)
def _validate_user_data(self, user_data: dict, home_id: str) -> bool: # noqa: PLR0911
"""
Validate user data completeness.
Rejects incomplete/invalid data from API to prevent caching temporary errors.
Currency information is critical - if missing, we cannot safely calculate prices.
Args:
user_data: User data dict from API.
home_id: Home ID to validate against.
Returns:
True if data is valid and complete, False otherwise.
"""
if not user_data:
self._log("warning", "User data validation failed: Empty data")
return False
viewer = user_data.get("viewer")
if not viewer or not isinstance(viewer, dict):
self._log("warning", "User data validation failed: Missing or invalid viewer")
return False
homes = viewer.get("homes")
if not homes or not isinstance(homes, list) or len(homes) == 0:
self._log("warning", "User data validation failed: No homes found")
return False
# Find our home and validate it has required data
home_found = False
for home in homes:
if home.get("id") == home_id:
home_found = True
# Validate home has timezone (required for cursor calculation)
if not home.get("timeZone"):
self._log("warning", "User data validation failed: Home %s missing timezone", home_id)
return False
# Currency is REQUIRED - we cannot function without it
# The currency is nested in currentSubscription.priceInfo.current.currency
subscription = home.get("currentSubscription")
if not subscription:
self._log(
"warning",
"User data validation failed: Home %s has no active subscription",
home_id,
)
return False
price_info = subscription.get("priceInfo")
if not price_info:
self._log(
"warning",
"User data validation failed: Home %s subscription has no priceInfo",
home_id,
)
return False
current = price_info.get("current")
if not current:
self._log(
"warning",
"User data validation failed: Home %s priceInfo has no current data",
home_id,
)
return False
currency = current.get("currency")
if not currency:
self._log(
"warning",
"User data validation failed: Home %s has no currency",
home_id,
)
return False
break
if not home_found:
self._log("warning", "User data validation failed: Home %s not found in homes list", home_id)
return False
self._log("debug", "User data validation passed for home %s", home_id)
return True
async def update_user_data_if_needed(self, current_time: datetime) -> bool:
"""
Update user data if needed (daily check).
Only accepts complete and valid data. If API returns incomplete data
(e.g., during maintenance), keeps existing cached data and retries later.
Returns:
True if user data was updated, False otherwise
"""
if self._last_user_update is None or current_time - self._last_user_update >= self._user_update_interval:
try:
self._log("debug", "Updating user data")
user_data = await self.api.async_get_viewer_details()
# Validate before caching
if not self._validate_user_data(user_data, self.home_id):
self._log(
"warning",
"Rejecting incomplete user data from API - keeping existing cached data",
)
return False # Keep existing data, don't update timestamp
# Data is valid, cache it
self._cached_user_data = user_data
self._last_user_update = current_time
self._log("debug", "User data updated successfully")
except (
TibberPricesApiClientError,
TibberPricesApiClientCommunicationError,
) as ex:
self._log("warning", "Failed to update user data: %s", ex)
return False # Update failed
else:
return True # User data was updated
return False # No update needed
async def fetch_home_data(
self,
home_id: str,
current_time: datetime,
*,
include_tomorrow: bool = True,
) -> tuple[dict[str, Any], bool]:
"""
Fetch data for a single home via pool.
Args:
home_id: Home ID to fetch data for.
current_time: Current time for timestamp in result.
include_tomorrow: If True, request tomorrow's data too. If False,
only request up to end of today.
Returns:
Tuple of (data_dict, api_called):
- data_dict: Dictionary with timestamp, home_id, price_info, currency.
- api_called: True if API was called to fetch missing data.
"""
if not home_id:
self._log("warning", "No home ID provided - cannot fetch price data")
return (
{
"timestamp": current_time,
"home_id": "",
"price_info": [],
"currency": "EUR",
},
False, # No API call made
)
# Ensure we have user_data before fetching price data
# This is critical for timezone-aware cursor calculation
if not self._cached_user_data:
self._log("info", "User data not cached, fetching before price data")
try:
user_data = await self.api.async_get_viewer_details()
# Validate data before accepting it (especially on initial setup)
if not self._validate_user_data(user_data, self.home_id):
msg = "Received incomplete user data from API - cannot proceed with price fetching"
self._log("error", msg)
raise TibberPricesApiClientError(msg) # noqa: TRY301
self._cached_user_data = user_data
self._last_user_update = current_time
except (
TibberPricesApiClientError,
TibberPricesApiClientCommunicationError,
) as ex:
msg = f"Failed to fetch user data (required for price fetching): {ex}"
self._log("error", msg)
raise TibberPricesApiClientError(msg) from ex
# At this point, _cached_user_data is guaranteed to be not None (checked above)
if not self._cached_user_data:
msg = "User data unexpectedly None after fetch attempt"
raise TibberPricesApiClientError(msg)
# Retrieve price data via IntervalPool (single source of truth)
price_info, api_called = await self._fetch_via_pool(home_id, include_tomorrow=include_tomorrow)
# Extract currency for this home from user_data
currency = self._get_currency_for_home(home_id)
self._log(
"debug",
"Successfully fetched data for home %s (%d intervals, api_called=%s)",
home_id,
len(price_info),
api_called,
)
return (
{
"timestamp": current_time,
"home_id": home_id,
"price_info": price_info,
"currency": currency,
},
api_called,
)
async def _fetch_via_pool(
self,
home_id: str,
*,
include_tomorrow: bool = True,
) -> tuple[list[dict[str, Any]], bool]:
"""
Retrieve price data via IntervalPool.
The IntervalPool is the single source of truth for price data:
- Handles actual API calls to Tibber
- Manages deduplication and caching
- Provides intervals from day-before-yesterday to end-of-today/tomorrow
This method delegates to the Pool's get_sensor_data() which returns
all relevant intervals for sensor display.
Args:
home_id: Home ID (currently unused, Pool knows its home).
include_tomorrow: If True, request tomorrow's data too. If False,
only request up to end of today. This prevents
API spam before 13:00 when Tibber doesn't have
tomorrow data yet.
Returns:
Tuple of (intervals, api_called):
- intervals: List of price interval dicts.
- api_called: True if API was called to fetch missing data.
"""
# user_data is guaranteed by fetch_home_data(), but needed for type narrowing
if self._cached_user_data is None:
return [], False # No data, no API call
self._log(
"debug",
"Retrieving price data for home %s via interval pool (include_tomorrow=%s)",
home_id,
include_tomorrow,
)
intervals, api_called = await self._interval_pool.get_sensor_data(
api_client=self.api,
user_data=self._cached_user_data,
include_tomorrow=include_tomorrow,
)
return intervals, api_called
def _get_currency_for_home(self, home_id: str) -> str:
"""
Get currency for a specific home from cached user_data.
Note: The cached user_data is validated before storage, so if we have
cached data it should contain valid currency. This method extracts
the currency from the nested structure.
Returns:
Currency code (e.g., "EUR", "NOK", "SEK").
Raises:
TibberPricesApiClientError: If currency cannot be determined.
"""
if not self._cached_user_data:
msg = "No user data cached - cannot determine currency"
self._log("error", msg)
raise TibberPricesApiClientError(msg)
viewer = self._cached_user_data.get("viewer", {})
homes = viewer.get("homes", [])
for home in homes:
if home.get("id") == home_id:
# Extract currency from nested structure
# Use 'or {}' to handle None values (homes without active subscription)
subscription = home.get("currentSubscription") or {}
price_info = subscription.get("priceInfo") or {}
current = price_info.get("current") or {}
currency = current.get("currency")
if not currency:
# This should not happen if validation worked correctly
msg = f"Home {home_id} has no active subscription - currency unavailable"
self._log("error", msg)
raise TibberPricesApiClientError(msg)
self._log("debug", "Extracted currency %s for home %s", currency, home_id)
return currency
# Home not found in cached data - data validation should have caught this
msg = f"Home {home_id} not found in user data - data validation failed"
self._log("error", msg)
raise TibberPricesApiClientError(msg)
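The `or {}` chaining above matters because `dict.get(key, default)` does not apply the default when the key exists but holds `None` (a home without an active subscription). A small self-contained illustration with made-up data:

```python
home = {"id": "h1", "currentSubscription": None}  # home without active subscription

# .get(key, {}) does NOT help when the key exists but holds None:
assert home.get("currentSubscription", {}) is None

# 'or {}' normalizes None to an empty dict so the chain cannot raise AttributeError:
subscription = home.get("currentSubscription") or {}
current = (subscription.get("priceInfo") or {}).get("current") or {}
```

The chain then ends with `current.get("currency")` returning `None`, which the method turns into an explicit error instead of a crash.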
def _check_home_exists(self, home_id: str) -> bool:
"""
Check if a home ID exists in cached user data.
Args:
home_id: The home ID to check.
Returns:
True if home exists, False otherwise.
"""
if not self._cached_user_data:
# No user data yet - assume home exists (will be checked on next update)
return True
viewer = self._cached_user_data.get("viewer", {})
homes = viewer.get("homes", [])
return any(home.get("id") == home_id for home in homes)
async def handle_main_entry_update(
self,
current_time: datetime,
home_id: str,
transform_fn: Callable[[dict[str, Any]], dict[str, Any]],
*,
current_price_info: list[dict[str, Any]] | None = None,
) -> tuple[dict[str, Any], bool]:
"""
Handle update for main entry - fetch data for this home.
The IntervalPool is the single source of truth for price data:
- It handles API fetching, deduplication, and caching internally
- We decide WHEN to fetch tomorrow data (after 13:00, if not already present)
- This prevents API spam before 13:00 when Tibber doesn't have tomorrow data
This method:
1. Updates user data if needed (daily)
2. Determines if tomorrow data should be requested
3. Fetches price data via IntervalPool
4. Transforms result for coordinator
Args:
current_time: Current time for update decisions.
home_id: Home ID to fetch data for.
transform_fn: Function to transform raw data for coordinator.
current_price_info: Current price intervals (from coordinator.data["priceInfo"]).
Used to check if tomorrow data already exists.
Returns:
Tuple of (transformed_data, api_called):
- transformed_data: Transformed data dict for coordinator.
- api_called: True if API was called to fetch missing data.
"""
# Update user data if needed (daily check)
user_data_updated = await self.update_user_data_if_needed(current_time)
# Check if this home still exists in user data after update
# This detects when a home was removed from the Tibber account
home_exists = self._check_home_exists(home_id)
if not home_exists:
self._log("warning", "Home ID %s not found in Tibber account", home_id)
# Return a special marker in the result that coordinator can check
result = transform_fn({})
result["_home_not_found"] = True # Special marker for coordinator
return result, False # No API call made (home doesn't exist)
# Determine if we should request tomorrow data
include_tomorrow = self.should_fetch_tomorrow_data(current_price_info)
# Fetch price data via IntervalPool
self._log(
"debug",
"Fetching price data for home %s via interval pool (include_tomorrow=%s)",
home_id,
include_tomorrow,
)
raw_data, api_called = await self.fetch_home_data(home_id, current_time, include_tomorrow=include_tomorrow)
# Parse timestamps immediately after fetch
raw_data = helpers.parse_all_timestamps(raw_data, time=self.time)
# Store user data cache (price data persisted by IntervalPool)
if user_data_updated:
await self.store_cache()
# Transform for main entry
return transform_fn(raw_data), api_called
async def handle_api_error(
self,
error: Exception,
) -> None:
"""
Handle API errors - re-raise appropriate exceptions.
Note: With IntervalPool as source of truth, there's no local price cache
to fall back to. The Pool has its own persistence, so on next update
it will use its cached intervals if API is unavailable.
"""
if isinstance(error, TibberPricesApiClientAuthenticationError):
msg = "Invalid access token"
raise ConfigEntryAuthFailed(msg) from error
msg = f"Error communicating with API: {error}"
raise UpdateFailed(msg) from error
@property
def cached_user_data(self) -> dict[str, Any] | None:
"""Get cached user data."""
return self._cached_user_data
def has_tomorrow_data(self, price_info: list[dict[str, Any]]) -> bool:
"""
Check if tomorrow's price data is available.
Args:
price_info: List of price intervals from coordinator data.
Returns:
True if at least one interval from tomorrow is present.
"""
if not price_info:
return False
# Get tomorrow's date
now = self.time.now()
tomorrow = (self.time.as_local(now) + timedelta(days=1)).date()
# Check if any interval is from tomorrow
for interval in price_info:
if "startsAt" not in interval:
continue
# startsAt is already a datetime object after _transform_data()
interval_time = interval["startsAt"]
if isinstance(interval_time, str):
# Fallback: parse if still string (shouldn't happen with transformed data)
interval_time = self.time.parse_datetime(interval_time)
if interval_time and self.time.as_local(interval_time).date() == tomorrow:
return True
return False
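The date comparison above can be exercised in isolation. A simplified sketch assuming `startsAt` is already a parsed, timezone-aware datetime in the same zone as `now` (the real method also handles the string fallback and TimeService localization):

```python
from datetime import datetime, timedelta, timezone

def has_tomorrow_data(price_info: list[dict], now: datetime) -> bool:
    """True if any interval starts on the calendar day after `now`."""
    tomorrow = (now + timedelta(days=1)).date()
    return any(
        iv["startsAt"].date() == tomorrow
        for iv in price_info
        if isinstance(iv.get("startsAt"), datetime)
    )

# Made-up sample intervals for illustration:
now = datetime(2026, 3, 29, 12, 0, tzinfo=timezone.utc)
today_interval = {"startsAt": datetime(2026, 3, 29, 18, 0, tzinfo=timezone.utc)}
tomorrow_interval = {"startsAt": datetime(2026, 3, 30, 0, 0, tzinfo=timezone.utc)}
```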

View file

@ -0,0 +1,228 @@
"""
Repair issue management for Tibber Prices integration.
This module handles creation and cleanup of repair issues that notify users
about problems requiring attention in the Home Assistant UI.
Repair Types:
1. Tomorrow Data Missing - Warns when tomorrow's price data is unavailable after 18:00
2. Persistent Rate Limits - Warns when API rate limiting persists after multiple errors
3. Home Not Found - Warns when a home no longer exists in the Tibber account
"""
from __future__ import annotations
import logging
from typing import TYPE_CHECKING
from custom_components.tibber_prices.const import DOMAIN
from homeassistant.helpers import issue_registry as ir
if TYPE_CHECKING:
from datetime import datetime
from homeassistant.core import HomeAssistant
_LOGGER = logging.getLogger(__name__)
# Repair issue tracking thresholds
TOMORROW_DATA_WARNING_HOUR = 18 # Warn after 18:00 if tomorrow data missing
RATE_LIMIT_WARNING_THRESHOLD = 3 # Warn after 3 consecutive rate limit errors
class TibberPricesRepairManager:
"""Manage repair issues for Tibber Prices integration."""
def __init__(self, hass: HomeAssistant, entry_id: str, home_name: str) -> None:
"""
Initialize repair manager.
Args:
hass: Home Assistant instance
entry_id: Config entry ID for this home
home_name: Display name of the home (for user-friendly messages)
"""
self._hass = hass
self._entry_id = entry_id
self._home_name = home_name
# Track consecutive rate limit errors
self._rate_limit_error_count = 0
# Track if repairs are currently active
self._tomorrow_data_repair_active = False
self._rate_limit_repair_active = False
self._home_not_found_repair_active = False
async def check_tomorrow_data_availability(
self,
has_tomorrow_data: bool, # noqa: FBT001 - Clear meaning in context
current_time: datetime,
) -> None:
"""
Check if tomorrow data is available and create/clear repair as needed.
Creates repair if:
- Current hour >= 18:00 (after expected data availability)
- Tomorrow's data is missing
Clears repair if:
- Tomorrow's data is now available
Args:
has_tomorrow_data: Whether tomorrow's data is available
current_time: Current local datetime for hour check
"""
should_warn = current_time.hour >= TOMORROW_DATA_WARNING_HOUR and not has_tomorrow_data
if should_warn and not self._tomorrow_data_repair_active:
await self._create_tomorrow_data_repair()
elif not should_warn and self._tomorrow_data_repair_active:
await self._clear_tomorrow_data_repair()
async def track_rate_limit_error(self) -> None:
"""
Track rate limit error and create repair if threshold exceeded.
Increments rate limit error counter and creates repair issue
if threshold (3 consecutive errors) is reached.
"""
self._rate_limit_error_count += 1
if self._rate_limit_error_count >= RATE_LIMIT_WARNING_THRESHOLD and not self._rate_limit_repair_active:
await self._create_rate_limit_repair()
async def clear_rate_limit_tracking(self) -> None:
"""
Clear rate limit error tracking after successful API call.
Resets counter and clears any active repair issue.
"""
self._rate_limit_error_count = 0
if self._rate_limit_repair_active:
await self._clear_rate_limit_repair()
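The error-streak bookkeeping above can be sketched without the Home Assistant plumbing (hypothetical `RateLimitTracker`; the real methods are async and create/delete issues via `issue_registry`):

```python
RATE_LIMIT_WARNING_THRESHOLD = 3  # mirrors the module constant above

class RateLimitTracker:
    """Count consecutive rate-limit errors; flag a repair at the threshold."""

    def __init__(self) -> None:
        self.errors = 0
        self.repair_active = False

    def on_error(self) -> None:
        self.errors += 1
        if self.errors >= RATE_LIMIT_WARNING_THRESHOLD and not self.repair_active:
            self.repair_active = True  # the real manager creates a repair issue here

    def on_success(self) -> None:
        self.errors = 0  # any successful API call resets the streak
        if self.repair_active:
            self.repair_active = False  # the real manager deletes the issue here

tracker = RateLimitTracker()
tracker.on_error()
tracker.on_error()
active_after_two = tracker.repair_active   # below threshold: no repair yet
tracker.on_error()
active_after_three = tracker.repair_active  # threshold hit: repair flagged
tracker.on_success()
```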
async def create_home_not_found_repair(self) -> None:
"""
Create repair for home no longer found in Tibber account.
This indicates the home was deleted from the user's Tibber account
but the config entry still exists in Home Assistant.
"""
if self._home_not_found_repair_active:
return
_LOGGER.warning(
"Home '%s' not found in Tibber account - creating repair issue",
self._home_name,
)
ir.async_create_issue(
self._hass,
DOMAIN,
f"home_not_found_{self._entry_id}",
is_fixable=True,
severity=ir.IssueSeverity.ERROR,
translation_key="home_not_found",
translation_placeholders={
"home_name": self._home_name,
"entry_id": self._entry_id,
},
)
self._home_not_found_repair_active = True
async def clear_home_not_found_repair(self) -> None:
"""Clear home not found repair (home is available again or entry removed)."""
if not self._home_not_found_repair_active:
return
_LOGGER.debug("Clearing home not found repair for '%s'", self._home_name)
ir.async_delete_issue(
self._hass,
DOMAIN,
f"home_not_found_{self._entry_id}",
)
self._home_not_found_repair_active = False
async def clear_all_repairs(self) -> None:
"""
Clear all active repair issues.
Called during coordinator shutdown or entry removal.
"""
if self._tomorrow_data_repair_active:
await self._clear_tomorrow_data_repair()
if self._rate_limit_repair_active:
await self._clear_rate_limit_repair()
if self._home_not_found_repair_active:
await self.clear_home_not_found_repair()
async def _create_tomorrow_data_repair(self) -> None:
"""Create repair issue for missing tomorrow data."""
_LOGGER.warning(
"Tomorrow's price data missing after %d:00 for home '%s' - creating repair issue",
TOMORROW_DATA_WARNING_HOUR,
self._home_name,
)
ir.async_create_issue(
self._hass,
DOMAIN,
f"tomorrow_data_missing_{self._entry_id}",
is_fixable=False,
severity=ir.IssueSeverity.WARNING,
translation_key="tomorrow_data_missing",
translation_placeholders={
"home_name": self._home_name,
"warning_hour": str(TOMORROW_DATA_WARNING_HOUR),
},
)
self._tomorrow_data_repair_active = True
async def _clear_tomorrow_data_repair(self) -> None:
"""Clear tomorrow data repair issue."""
_LOGGER.debug("Tomorrow's data now available for '%s' - clearing repair issue", self._home_name)
ir.async_delete_issue(
self._hass,
DOMAIN,
f"tomorrow_data_missing_{self._entry_id}",
)
self._tomorrow_data_repair_active = False
async def _create_rate_limit_repair(self) -> None:
"""Create repair issue for persistent rate limiting."""
_LOGGER.warning(
"Persistent API rate limiting detected for home '%s' (%d consecutive errors) - creating repair issue",
self._home_name,
self._rate_limit_error_count,
)
ir.async_create_issue(
self._hass,
DOMAIN,
f"rate_limit_exceeded_{self._entry_id}",
is_fixable=False,
severity=ir.IssueSeverity.WARNING,
translation_key="rate_limit_exceeded",
translation_placeholders={
"home_name": self._home_name,
"error_count": str(self._rate_limit_error_count),
},
)
self._rate_limit_repair_active = True
async def _clear_rate_limit_repair(self) -> None:
"""Clear rate limit repair issue."""
_LOGGER.debug("Rate limiting resolved for '%s' - clearing repair issue", self._home_name)
ir.async_delete_issue(
self._hass,
DOMAIN,
f"rate_limit_exceeded_{self._entry_id}",
)
self._rate_limit_repair_active = False

View file

@ -12,6 +12,33 @@ All datetime operations MUST go through TimeService to ensure:
- Proper timezone handling (local time, not UTC)
- Testability (mock time in one place)
- Future time-travel feature support
TIMER ARCHITECTURE:
This integration uses three distinct timer mechanisms:
1. **Timer #1: API Polling (DataUpdateCoordinator)**
- Runs every 15 minutes at a RANDOM offset (e.g., 10:04:37, 10:19:37, 10:34:37)
- Offset determined by when last API call completed
- Tracked via _last_coordinator_update for next poll prediction
- NO tolerance needed - offset variation is INTENTIONAL
- Purpose: Spread API load, avoid thundering herd problem
2. **Timer #2: Entity Updates (quarter-hour boundaries)**
- Must trigger at EXACT boundaries (00, 15, 30, 45 minutes)
- Uses _BOUNDARY_TOLERANCE_SECONDS for HA scheduling jitter correction
- Scheduled via async_track_utc_time_change(minute=[0,15,30,45], second=0)
- If HA triggers at 15:00:01 → round to 15:00:00 (within ±2s tolerance)
- Purpose: Entity state updates reflect correct quarter-hour interval
3. **Timer #3: Timing Sensors (30-second boundaries)**
- Must trigger at EXACT boundaries (0, 30 seconds)
- Uses _BOUNDARY_TOLERANCE_SECONDS for HA scheduling jitter correction
- Scheduled via async_track_utc_time_change(second=[0,30])
- Purpose: Update countdown/time-to sensors
CRITICAL: The tolerance is ONLY for Timer #2 and #3 to correct HA's
scheduling delays. It is NOT used for Timer #1's offset tracking.
""" """
from __future__ import annotations from __future__ import annotations
@ -42,6 +69,13 @@ _INTERVALS_PER_HOUR = 60 // _DEFAULT_INTERVAL_MINUTES # 4
_INTERVALS_PER_DAY = 24 * _INTERVALS_PER_HOUR  # 96
# Rounding tolerance for boundary detection (±2 seconds)
# This handles Home Assistant's scheduling jitter for Timer #2 (entity updates)
# and Timer #3 (timing sensors). When HA schedules a callback for exactly
# 15:00:00 but actually triggers it at 15:00:01, this tolerance ensures we
# still recognize it as the 15:00:00 boundary.
#
# NOT used for Timer #1 (API polling), which intentionally runs at random
# offsets determined by last API call completion time.
_BOUNDARY_TOLERANCE_SECONDS = 2
@ -498,16 +532,17 @@ class TibberPricesTimeService:
rounded_seconds = interval_start_seconds
elif distance_to_next <= _BOUNDARY_TOLERANCE_SECONDS:
# Near next interval start → use it
# CRITICAL: If rounding to next interval and it wraps to midnight (index 0),
# we need to increment to next day, not stay on same day!
if next_interval_index == 0:
# Rounding to midnight of NEXT day
return (target + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
rounded_seconds = next_interval_start_seconds
else:
# Not near any boundary → floor to current interval
rounded_seconds = interval_start_seconds
# Build rounded datetime (no midnight wrap needed here - handled above)
hours = int(rounded_seconds // 3600)
minutes = int((rounded_seconds % 3600) // 60)
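The hunk above fixes a day-wrap bug: snapping 23:59:59 "up" to the next quarter-hour must land on midnight of the *next* day, not 00:00 of the same day. A standalone sketch of the corrected rounding (hypothetical `round_to_quarter` helper; the real logic lives in TibberPricesTimeService and uses `_BOUNDARY_TOLERANCE_SECONDS`):

```python
from datetime import datetime, timedelta

TOLERANCE = 2       # seconds of HA scheduling jitter to absorb
INTERVAL = 15 * 60  # quarter-hour interval length in seconds

def round_to_quarter(target: datetime) -> datetime:
    """Floor to the current quarter-hour, but snap up if within TOLERANCE of the next."""
    secs = target.hour * 3600 + target.minute * 60 + target.second
    floor = (secs // INTERVAL) * INTERVAL
    nxt = floor + INTERVAL
    if nxt - secs <= TOLERANCE:
        if nxt >= 24 * 3600:
            # Snapping past 23:59:58 wraps to interval index 0 → midnight of NEXT day
            return (target + timedelta(days=1)).replace(
                hour=0, minute=0, second=0, microsecond=0
            )
        floor = nxt
    return target.replace(
        hour=floor // 3600, minute=(floor % 3600) // 60, second=0, microsecond=0
    )
```

Without the wrap branch, 23:59:59 would round to 00:00:00 of the *same* day, i.e. 24 hours into the past.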

View file

@ -1,7 +1,20 @@
{
"apexcharts": {
"title_rating_level": "Preisphasen Tagesverlauf",
"title_level": "Preisniveau",
"hourly_suffix": "(Ø stündlich)",
"best_price_period_name": "Bestpreis-Zeitraum",
"peak_price_period_name": "Spitzenpreis-Zeitraum",
"notification": {
"metadata_sensor_unavailable": {
"title": "Tibber Prices: ApexCharts YAML mit eingeschränkter Funktionalität generiert",
"message": "Du hast gerade eine ApexCharts-Card-Konfiguration über die Entwicklerwerkzeuge generiert. Der Chart-Metadaten-Sensor ist aktuell deaktiviert, daher zeigt das generierte YAML nur **Basisfunktionalität** (Auto-Skalierung, fester Gradient bei 50%).\n\n**Für volle Funktionalität** (optimierte Skalierung, dynamische Verlaufsfarben):\n1. [Tibber Prices Integration öffnen](https://my.home-assistant.io/redirect/integration/?domain=tibber_prices)\n2. Aktiviere den 'Chart Metadata' Sensor\n3. **Generiere das YAML erneut** über die Entwicklerwerkzeuge\n4. **Ersetze den alten YAML-Code** in deinem Dashboard durch die neue Version\n\n⚠ Nur den Sensor zu aktivieren reicht nicht - du musst das YAML neu generieren und ersetzen!"
},
"missing_cards": {
"title": "Tibber Prices: ApexCharts YAML kann nicht verwendet werden",
"message": "Du hast gerade eine ApexCharts-Card-Konfiguration über die Entwicklerwerkzeuge generiert, aber das generierte YAML **funktioniert nicht**, weil erforderliche Custom Cards fehlen.\n\n**Fehlende Cards:**\n{cards}\n\n**Um das generierte YAML zu nutzen:**\n1. Klicke auf die obigen Links, um die fehlenden Cards über HACS zu installieren\n2. Starte Home Assistant neu (manchmal erforderlich)\n3. **Generiere das YAML erneut** über die Entwicklerwerkzeuge\n4. Füge das YAML zu deinem Dashboard hinzu\n\n⚠ Der aktuelle YAML-Code funktioniert nicht, bis alle Cards installiert sind!"
}
}
},
"sensor": {
"current_interval_price": {
@ -9,7 +22,7 @@
"long_description": "Zeigt den aktuellen Preis pro kWh von deinem Tibber-Abonnement an", "long_description": "Zeigt den aktuellen Preis pro kWh von deinem Tibber-Abonnement an",
"usage_tips": "Nutze dies, um Preise zu verfolgen oder Automatisierungen zu erstellen, die bei günstigem Strom ausgeführt werden" "usage_tips": "Nutze dies, um Preise zu verfolgen oder Automatisierungen zu erstellen, die bei günstigem Strom ausgeführt werden"
}, },
"current_interval_price_major": { "current_interval_price_base": {
"description": "Aktueller Strompreis in Hauptwährung (EUR/kWh, NOK/kWh, etc.) für Energie-Dashboard", "description": "Aktueller Strompreis in Hauptwährung (EUR/kWh, NOK/kWh, etc.) für Energie-Dashboard",
"long_description": "Zeigt den aktuellen Preis pro kWh in Hauptwährungseinheiten an (z.B. EUR/kWh statt ct/kWh, NOK/kWh statt øre/kWh). Dieser Sensor ist speziell für die Verwendung mit dem Energie-Dashboard von Home Assistant konzipiert, das Preise in Standard-Währungseinheiten benötigt.", "long_description": "Zeigt den aktuellen Preis pro kWh in Hauptwährungseinheiten an (z.B. EUR/kWh statt ct/kWh, NOK/kWh statt øre/kWh). Dieser Sensor ist speziell für die Verwendung mit dem Energie-Dashboard von Home Assistant konzipiert, das Preise in Standard-Währungseinheiten benötigt.",
"usage_tips": "Verwende diesen Sensor beim Konfigurieren des Energie-Dashboards unter Einstellungen → Dashboards → Energie. Wähle diesen Sensor als 'Entität mit dem aktuellen Preis' aus, um deine Energiekosten automatisch zu berechnen. Das Energie-Dashboard multipliziert deinen Energieverbrauch (kWh) mit diesem Preis, um die Gesamtkosten anzuzeigen." "usage_tips": "Verwende diesen Sensor beim Konfigurieren des Energie-Dashboards unter Einstellungen → Dashboards → Energie. Wähle diesen Sensor als 'Entität mit dem aktuellen Preis' aus, um deine Energiekosten automatisch zu berechnen. Das Energie-Dashboard multipliziert deinen Energieverbrauch (kWh) mit diesem Preis, um die Gesamtkosten anzuzeigen."
@ -45,9 +58,9 @@
"usage_tips": "Nutze dies, um den Betrieb von Geräten während Spitzenpreiszeiten zu vermeiden" "usage_tips": "Nutze dies, um den Betrieb von Geräten während Spitzenpreiszeiten zu vermeiden"
}, },
"average_price_today": { "average_price_today": {
"description": "Der durchschnittliche Strompreis für heute pro kWh", "description": "Der typische Strompreis für heute pro kWh (konfigurierbares Anzeigeformat)",
"long_description": "Zeigt den durchschnittlichen Preis pro kWh für den aktuellen Tag von deinem Tibber-Abonnement an", "long_description": "Zeigt den typischen Preis pro kWh für heute. **Standardmäßig zeigt der Status den Median** (resistent gegen extreme Preisspitzen, zeigt was du generell erwarten kannst). Du kannst dies in den Integrationsoptionen ändern, um stattdessen das arithmetische Mittel anzuzeigen. Der alternative Wert ist immer als Attribut `price_mean` oder `price_median` für Automatisierungen verfügbar.",
"usage_tips": "Nutze dies als Grundlage für den Vergleich mit aktuellen Preisen" "usage_tips": "Nutze den Status-Wert für die Anzeige. Für exakte Kostenberechnungen in Automatisierungen nutze: {{ state_attr('sensor.average_price_today', 'price_mean') }}"
},
"lowest_price_tomorrow": {
"description": "Der niedrigste Strompreis für morgen pro kWh",
@@ -60,9 +73,9 @@
"usage_tips": "Nutze dies, um den Betrieb von Geräten während der teuersten Stunden morgen zu vermeiden. Plane nicht-essentielle Lasten außerhalb dieser Spitzenpreiszeiten." "usage_tips": "Nutze dies, um den Betrieb von Geräten während der teuersten Stunden morgen zu vermeiden. Plane nicht-essentielle Lasten außerhalb dieser Spitzenpreiszeiten."
}, },
"average_price_tomorrow": { "average_price_tomorrow": {
"description": "Der durchschnittliche Strompreis für morgen pro kWh", "description": "Der typische Strompreis für morgen pro kWh (konfigurierbares Anzeigeformat)",
"long_description": "Zeigt den durchschnittlichen Preis pro kWh für den morgigen Tag von deinem Tibber-Abonnement an. Dieser Sensor wird nicht verfügbar, bis die Preise für morgen von Tibber veröffentlicht werden (typischerweise zwischen 13:00 und 14:00 Uhr MEZ).", "long_description": "Zeigt den typischen Preis pro kWh für morgen. **Standardmäßig zeigt der Status den Median** (resistent gegen extreme Preisspitzen). Du kannst dies in den Integrationsoptionen ändern, um stattdessen das arithmetische Mittel anzuzeigen. Der alternative Wert ist als Attribut verfügbar. Dieser Sensor wird nicht verfügbar, bis die Preise für morgen von Tibber veröffentlicht werden (typischerweise zwischen 13:00 und 14:00 Uhr MEZ).",
"usage_tips": "Nutze dies als Grundlinie für den Vergleich mit den morgigen Preisen und zur Verbrauchsplanung. Vergleiche mit dem heutigen Durchschnitt, um zu sehen, ob morgen insgesamt teurer oder günstiger wird." "usage_tips": "Nutze den Status-Wert für Anzeige und schnelle Vergleiche. Für Automatisierungen, die exakte Kostenberechnungen benötigen, nutze das Attribut `price_mean`: {{ state_attr('sensor.average_price_tomorrow', 'price_mean') }}"
},
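The `price_mean` attribute mentioned in the usage tips above can be surfaced as a standalone entity for graphing next to the median state. A minimal sketch (the sensor name and unit are assumptions; `sensor.average_price_tomorrow` and `price_mean` follow the strings above):

```yaml
# Sketch: expose the arithmetic mean as its own template sensor.
# Entity id and unit_of_measurement are assumptions, not part of the integration.
template:
  - sensor:
      - name: "Strompreis morgen (Mittelwert)"
        unit_of_measurement: "EUR/kWh"
        state: >
          {{ state_attr('sensor.average_price_tomorrow', 'price_mean') }}
```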
"yesterday_price_level": { "yesterday_price_level": {
"description": "Aggregiertes Preisniveau für gestern", "description": "Aggregiertes Preisniveau für gestern",
@@ -95,14 +108,14 @@
"usage_tips": "Nutze dies, um den morgigen Energieverbrauch basierend auf deinen persönlichen Preisschwellenwerten zu planen. Vergleiche mit heute, um zu entscheiden, ob du den Verbrauch auf morgen verschieben oder heute nutzen solltest." "usage_tips": "Nutze dies, um den morgigen Energieverbrauch basierend auf deinen persönlichen Preisschwellenwerten zu planen. Vergleiche mit heute, um zu entscheiden, ob du den Verbrauch auf morgen verschieben oder heute nutzen solltest."
}, },
"trailing_price_average": { "trailing_price_average": {
"description": "Der durchschnittliche Strompreis für die letzten 24 Stunden pro kWh", "description": "Der typische Strompreis der letzten 24 Stunden pro kWh (konfigurierbares Anzeigeformat)",
"long_description": "Zeigt den durchschnittlichen Preis pro kWh berechnet aus den letzten 24 Stunden (nachlaufender Durchschnitt) von deinem Tibber-Abonnement an. Dies bietet einen gleitenden Durchschnitt, der alle 15 Minuten basierend auf historischen Daten aktualisiert wird.", "long_description": "Zeigt den typischen Preis pro kWh der letzten 24 Stunden. **Standardmäßig zeigt der Status den Median** (resistent gegen extreme Spitzen, zeigt welches Preisniveau typisch war). Du kannst dies in den Integrationsoptionen ändern, um stattdessen das arithmetische Mittel anzuzeigen. Der alternative Wert ist als Attribut verfügbar. Wird alle 15 Minuten aktualisiert.",
"usage_tips": "Nutze dies, um aktuelle Preise mit den jüngsten Trends zu vergleichen. Ein aktueller Preis deutlich über diesem Durchschnitt kann ein guter Zeitpunkt sein, um den Verbrauch zu reduzieren." "usage_tips": "Nutze den Status-Wert, um das typische aktuelle Preisniveau zu sehen. Für Kostenberechnungen nutze: {{ state_attr('sensor.trailing_price_average', 'price_mean') }}"
},
"leading_price_average": {
"description": "Der durchschnittliche Strompreis für die nächsten 24 Stunden pro kWh", "description": "Der typische Strompreis für die nächsten 24 Stunden pro kWh (konfigurierbares Anzeigeformat)",
"long_description": "Zeigt den durchschnittlichen Preis pro kWh berechnet aus den nächsten 24 Stunden (vorlaufender Durchschnitt) von deinem Tibber-Abonnement an. Dies bietet einen vorausschauenden Durchschnitt basierend auf verfügbaren Prognosedaten.", "long_description": "Zeigt den typischen Preis pro kWh für die nächsten 24 Stunden. **Standardmäßig zeigt der Status den Median** (resistent gegen extreme Spitzen, zeigt welches Preisniveau zu erwarten ist). Du kannst dies in den Integrationsoptionen ändern, um stattdessen das arithmetische Mittel anzuzeigen. Der alternative Wert ist als Attribut verfügbar.",
"usage_tips": "Nutze dies zur Energieverbrauchsplanung. Wenn der aktuelle Preis unter dem vorlaufenden Durchschnitt liegt, kann es ein guter Zeitpunkt sein, um energieintensive Geräte zu betreiben." "usage_tips": "Nutze den Status-Wert, um das typische kommende Preisniveau zu sehen. Für Kostenberechnungen nutze: {{ state_attr('sensor.leading_price_average', 'price_mean') }}"
},
"trailing_price_min": {
"description": "Der niedrigste Strompreis für die letzten 24 Stunden pro kWh",
@@ -276,27 +289,27 @@
},
"data_timestamp": {
"description": "Zeitstempel des letzten verfügbaren Preisintervalls",
"long_description": "Zeigt den Zeitstempel des letzten verfügbaren Preisdatenintervalls von Ihrem Tibber-Abonnement" "long_description": "Zeigt den Zeitstempel des letzten verfügbaren Preisdatenintervalls von deinem Tibber-Abonnement"
},
"today_volatility": {
"description": "Preisvolatilitätsklassifizierung für heute", "description": "Wie stark sich die Strompreise heute verändern",
"long_description": "Zeigt, wie stark die Strompreise im Laufe des heutigen Tages variieren, basierend auf der Spannweite (Differenz zwischen höchstem und niedrigstem Preis). Klassifizierung: NIEDRIG = Spannweite < 5ct, MODERAT = 5-15ct, HOCH = 15-30ct, SEHR HOCH = >30ct.", "long_description": "Zeigt, ob die heutigen Preise stabil bleiben oder stark schwanken. Niedrige Volatilität bedeutet recht konstante Preise Timing ist kaum wichtig. Hohe Volatilität bedeutet spürbare Preisunterschiede über den Tag gute Chance, den Verbrauch auf günstigere Zeiten zu verschieben. `price_coefficient_variation_%` zeigt den Prozentwert, `price_spread` die absolute Preisspanne.",
"usage_tips": "Verwenden Sie dies, um zu entscheiden, ob preisbasierte Optimierung lohnenswert ist. Zum Beispiel lohnt sich bei einer Balkonbatterie mit 15% Effizienzverlusten die Optimierung nur, wenn die Volatilität mindestens MODERAT ist. Erstellen Sie Automatisierungen, die die Volatilität prüfen, bevor Lade-/Entladezyklen geplant werden." "usage_tips": "Nutze dies, um zu entscheiden, ob Optimierung sich lohnt. Bei niedriger Volatilität kannst du Geräte jederzeit laufen lassen. Bei hoher Volatilität sparst du spürbar, wenn du Best-Price-Perioden nutzt."
},
"tomorrow_volatility": {
"description": "Preisvolatilitätsklassifizierung für morgen", "description": "Wie stark sich die Strompreise morgen verändern werden",
"long_description": "Zeigt, wie stark die Strompreise im Laufe des morgigen Tages variieren werden, basierend auf der Spannweite (Differenz zwischen höchstem und niedrigstem Preis). Wird nicht verfügbar, bis morgige Daten veröffentlicht sind (typischerweise 13:00-14:00 MEZ).", "long_description": "Zeigt, ob die Preise morgen stabil bleiben oder stark schwanken. Verfügbar, sobald die morgigen Daten veröffentlicht sind (typischerweise 13:0014:00 MEZ). Niedrige Volatilität bedeutet recht konstante Preise Timing ist nicht kritisch. Hohe Volatilität bedeutet deutliche Preisunterschiede über den Tag gute Gelegenheit, energieintensive Aufgaben zu planen. `price_coefficient_variation_%` zeigt den Prozentwert, `price_spread` die absolute Preisspanne.",
"usage_tips": "Verwenden Sie dies zur Vorausplanung des morgigen Energieverbrauchs. Bei HOHER oder SEHR HOHER Volatilität morgen lohnt sich die Optimierung des Energieverbrauchs. Bei NIEDRIGER Volatilität können Sie Geräte jederzeit ohne wesentliche Kostenunterschiede betreiben." "usage_tips": "Nutze dies für die Planung des morgigen Energieverbrauchs. Hohe Volatilität? Plane flexible Lasten in Best-Price-Perioden. Niedrige Volatilität? Lass Geräte laufen, wann es dir passt."
},
"next_24h_volatility": {
"description": "Preisvolatilitätsklassifizierung für die rollierenden nächsten 24 Stunden", "description": "Wie stark sich die Preise in den nächsten 24 Stunden verändern",
"long_description": "Zeigt, wie stark die Strompreise in den nächsten 24 Stunden ab jetzt variieren (rollierendes Fenster). Dies überschreitet Tagesgrenzen und aktualisiert sich alle 15 Minuten, wodurch eine vorausschauende Volatilitätsbewertung unabhängig von Kalendertagen bereitgestellt wird.", "long_description": "Zeigt die Preisvolatilität für ein rollierendes 24-Stunden-Fenster ab jetzt (aktualisiert alle 15 Minuten). Niedrige Volatilität bedeutet recht konstante Preise. Hohe Volatilität bedeutet spürbare Preisschwankungen und damit Chancen zur Optimierung. Im Unterschied zu Heute/Morgen-Sensoren überschreitet dieser Tagesgrenzen und liefert eine durchgängige Vorhersage. `price_coefficient_variation_%` zeigt den Prozentwert, `price_spread` die absolute Preisspanne.",
"usage_tips": "Bester Sensor für Echtzeitoptimierungsentscheidungen. Im Gegensatz zu Heute/Morgen-Sensoren, die um Mitternacht wechseln, bietet dies eine kontinuierliche 24h-Volatilitätsbewertung. Verwenden Sie dies für Batterielade-Strategien, die Tagesgrenzen überschreiten." "usage_tips": "Am besten für Entscheidungen in Echtzeit. Nutze dies für Batterieladestrategien oder andere flexible Lasten, die über Mitternacht laufen könnten. Bietet eine konsistente 24h-Perspektive unabhängig vom Kalendertag."
},
"today_tomorrow_volatility": {
"description": "Kombinierte Preisvolatilitätsklassifizierung für heute und morgen", "description": "Kombinierte Preisvolatilität für heute und morgen",
"long_description": "Zeigt die Volatilität über heute und morgen zusammen (wenn morgige Daten verfügbar sind). Bietet eine erweiterte Ansicht der Preisvariation über bis zu 48 Stunden. Fällt auf Nur-Heute zurück, wenn morgige Daten noch nicht verfügbar sind.", "long_description": "Zeigt die Gesamtvolatilität, wenn heute und morgen gemeinsam betrachtet werden (sobald die morgigen Daten verfügbar sind). Zeigt, ob über die Tagesgrenze hinweg deutliche Preisunterschiede bestehen. Fällt auf nur-heute zurück, wenn morgige Daten noch fehlen. Hilfreich für mehrtägige Optimierung. `price_coefficient_variation_%` zeigt den Prozentwert, `price_spread` die absolute Preisspanne.",
"usage_tips": "Verwenden Sie dies für Mehrtagsplanung und um zu verstehen, ob Preismöglichkeiten über die Tagesgrenze hinweg bestehen. Die Attribute 'today_volatility' und 'tomorrow_volatility' zeigen individuelle Tagesbeiträge. Nützlich für die Planung von Ladesitzungen, die Mitternacht überschreiten könnten." "usage_tips": "Nutze dies für Aufgaben, die sich über mehrere Tage erstrecken. Prüfe, ob die Preisunterschiede groß genug für eine Planung sind. Die einzelnen Tages-Sensoren zeigen die Beiträge pro Tag, falls du mehr Details brauchst."
},
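The `price_coefficient_variation_%` attribute described in the volatility strings can gate automations directly. A hedged sketch (the entity id `sensor.today_volatility` and the 10 % threshold are assumptions; the attribute name comes from the descriptions above):

```yaml
# Sketch: only run load-shifting logic when today's prices vary enough
# to matter. Entity id and threshold are assumptions.
condition:
  - condition: template
    value_template: >
      {{ (state_attr('sensor.today_volatility',
                     'price_coefficient_variation_%') | float(0)) > 10 }}
```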
"data_lifecycle_status": { "data_lifecycle_status": {
"description": "Aktueller Status des Preisdaten-Lebenszyklus und der Zwischenspeicherung", "description": "Aktueller Status des Preisdaten-Lebenszyklus und der Zwischenspeicherung",
@@ -309,14 +322,14 @@
"usage_tips": "Nutze dies, um einen Countdown wie 'Günstiger Zeitraum endet in 2 Stunden' (wenn aktiv) oder 'Nächster günstiger Zeitraum endet um 14:00' (wenn inaktiv) anzuzeigen. Home Assistant zeigt automatisch relative Zeit für Zeitstempel-Sensoren an." "usage_tips": "Nutze dies, um einen Countdown wie 'Günstiger Zeitraum endet in 2 Stunden' (wenn aktiv) oder 'Nächster günstiger Zeitraum endet um 14:00' (wenn inaktiv) anzuzeigen. Home Assistant zeigt automatisch relative Zeit für Zeitstempel-Sensoren an."
}, },
"best_price_period_duration": { "best_price_period_duration": {
"description": "Gesamtlänge des aktuellen oder nächsten günstigen Zeitraums in Minuten", "description": "Gesamtlänge des aktuellen oder nächsten günstigen Zeitraums",
"long_description": "Zeigt, wie lange der günstige Zeitraum insgesamt dauert. Während eines aktiven Zeitraums zeigt dies die Dauer des aktuellen Zeitraums. Wenn kein Zeitraum aktiv ist, zeigt dies die Dauer des nächsten kommenden Zeitraums. Gibt nur 'Unbekannt' zurück, wenn keine Zeiträume ermittelt wurden.", "long_description": "Zeigt, wie lange der günstige Zeitraum insgesamt dauert. Der State wird in Stunden angezeigt (z. B. 1,5 h) für eine einfache Lesbarkeit in der UI, während das Attribut `period_duration_minutes` denselben Wert in Minuten bereitstellt (z. B. 90) für Automationen. Während eines aktiven Zeitraums zeigt dies die Dauer des aktuellen Zeitraums. Wenn kein Zeitraum aktiv ist, zeigt dies die Dauer des nächsten kommenden Zeitraums. Gibt nur 'Unbekannt' zurück, wenn keine Zeiträume ermittelt wurden.",
"usage_tips": "Nützlich für Planung: 'Der nächste günstige Zeitraum dauert 90 Minuten' oder 'Der aktuelle günstige Zeitraum ist 120 Minuten lang'. Kombiniere mit remaining_minutes, um zu berechnen, wann langlaufende Geräte gestartet werden sollten." "usage_tips": "Für Anzeige: State-Wert (Stunden) in Dashboards nutzen. Für Automationen: Attribut `period_duration_minutes` verwenden, um zu prüfen, ob genug Zeit für langläufige Geräte ist (z. B. 'Wenn period_duration_minutes >= 90, starte Waschmaschine')."
},
"best_price_remaining_minutes": {
"description": "Verbleibende Minuten im aktuellen günstigen Zeitraum (0 wenn inaktiv)", "description": "Verbleibende Zeit im aktuellen günstigen Zeitraum",
"long_description": "Zeigt, wie viele Minuten im aktuellen günstigen Zeitraum noch verbleiben. Gibt 0 zurück, wenn kein Zeitraum aktiv ist. Aktualisiert sich jede Minute. Prüfe binary_sensor.best_price_period, um zu sehen, ob ein Zeitraum aktuell aktiv ist.", "long_description": "Zeigt, wie viel Zeit im aktuellen günstigen Zeitraum noch verbleibt. Der State wird in Stunden angezeigt (z. B. 0,5 h) für eine einfache Lesbarkeit, während das Attribut `remaining_minutes` Minuten bereitstellt (z. B. 30) für Automationslogik. Gibt 0 zurück, wenn kein Zeitraum aktiv ist. Aktualisiert sich jede Minute. Prüfe binary_sensor.best_price_period, um zu sehen, ob ein Zeitraum aktuell aktiv ist.",
"usage_tips": "Perfekt für Automatisierungen: 'Wenn remaining_minutes > 0 UND remaining_minutes < 30, starte Waschmaschine jetzt'. Der Wert 0 macht es einfach zu prüfen, ob ein Zeitraum aktiv ist (Wert > 0) oder nicht (Wert = 0)." "usage_tips": "Für Automationen: Attribut `remaining_minutes` mit numerischen Vergleichen nutzen wie 'Wenn remaining_minutes > 0 UND remaining_minutes < 30, starte Waschmaschine jetzt'. Der Wert 0 macht es einfach zu prüfen, ob ein Zeitraum aktiv ist (Wert > 0) oder nicht (Wert = 0)."
},
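The state/attribute split described above can be wired into an automation. A minimal sketch (`binary_sensor.best_price_period` and the `remaining_minutes` attribute come from the strings above; the sensor entity id and the washing-machine switch are placeholders):

```yaml
# Sketch: start the washer when a best-price period begins and at least
# 30 minutes remain. switch.waschmaschine and the sensor entity id are
# placeholder assumptions.
automation:
  - alias: "Waschmaschine im günstigen Zeitraum starten"
    trigger:
      - platform: state
        entity_id: binary_sensor.best_price_period
        to: "on"
    condition:
      - condition: template
        value_template: >
          {{ state_attr('sensor.best_price_remaining_minutes',
                        'remaining_minutes') | int(0) >= 30 }}
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.waschmaschine
```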
"best_price_progress": { "best_price_progress": {
"description": "Fortschritt durch aktuellen günstigen Zeitraum (0% wenn inaktiv)", "description": "Fortschritt durch aktuellen günstigen Zeitraum (0% wenn inaktiv)",
@@ -329,9 +342,9 @@
"usage_tips": "Immer nützlich für Vorausplanung: 'Nächster günstiger Zeitraum startet in 3 Stunden' (egal, ob du gerade in einem Zeitraum bist oder nicht). Kombiniere mit Automatisierungen: 'Wenn nächste Startzeit in 10 Minuten ist, sende Benachrichtigung zur Vorbereitung der Waschmaschine'." "usage_tips": "Immer nützlich für Vorausplanung: 'Nächster günstiger Zeitraum startet in 3 Stunden' (egal, ob du gerade in einem Zeitraum bist oder nicht). Kombiniere mit Automatisierungen: 'Wenn nächste Startzeit in 10 Minuten ist, sende Benachrichtigung zur Vorbereitung der Waschmaschine'."
}, },
"best_price_next_in_minutes": { "best_price_next_in_minutes": {
"description": "Minuten bis nächster günstiger Zeitraum startet (0 beim Übergang)", "description": "Zeit bis zum nächsten günstigen Zeitraum",
"long_description": "Zeigt Minuten bis der nächste günstige Zeitraum startet. Während eines aktiven Zeitraums zeigt dies die Zeit bis zum Zeitraum nach dem aktuellen. Gibt 0 während kurzer Übergangsphasen zurück. Aktualisiert sich jede Minute.", "long_description": "Zeigt, wie lange es bis zum nächsten günstigen Zeitraum dauert. Der State wird in Stunden angezeigt (z. B. 2,25 h) für Dashboards, während das Attribut `next_in_minutes` Minuten bereitstellt (z. B. 135) für Automationsbedingungen. Während eines aktiven Zeitraums zeigt dies die Zeit bis zum Zeitraum nach dem aktuellen. Gibt 0 während kurzer Übergangsphasen zurück. Aktualisiert sich jede Minute.",
"usage_tips": "Perfekt für 'warte bis günstiger Zeitraum' Automatisierungen: 'Wenn next_in_minutes > 0 UND next_in_minutes < 15, warte, bevor du die Geschirrspülmaschine startest'. Wert > 0 zeigt immer an, dass ein zukünftiger Zeitraum geplant ist." "usage_tips": "Für Automationen: Attribut `next_in_minutes` nutzen wie 'Wenn next_in_minutes > 0 UND next_in_minutes < 15, warte, bevor du die Geschirrspülmaschine startest'. Wert > 0 zeigt immer an, dass ein zukünftiger Zeitraum geplant ist."
},
"peak_price_end_time": {
"description": "Wann der aktuelle oder nächste teure Zeitraum endet",
@@ -339,14 +352,14 @@
"usage_tips": "Nutze dies, um 'Teurer Zeitraum endet in 1 Stunde' (wenn aktiv) oder 'Nächster teurer Zeitraum endet um 18:00' (wenn inaktiv) anzuzeigen. Kombiniere mit Automatisierungen, um den Betrieb nach der Spitzenzeit fortzusetzen." "usage_tips": "Nutze dies, um 'Teurer Zeitraum endet in 1 Stunde' (wenn aktiv) oder 'Nächster teurer Zeitraum endet um 18:00' (wenn inaktiv) anzuzeigen. Kombiniere mit Automatisierungen, um den Betrieb nach der Spitzenzeit fortzusetzen."
}, },
"peak_price_period_duration": { "peak_price_period_duration": {
"description": "Gesamtlänge des aktuellen oder nächsten teuren Zeitraums in Minuten", "description": "Länge des aktuellen/nächsten teuren Zeitraums",
"long_description": "Zeigt, wie lange der teure Zeitraum insgesamt dauert. Während eines aktiven Zeitraums zeigt dies die Dauer des aktuellen Zeitraums. Wenn kein Zeitraum aktiv ist, zeigt dies die Dauer des nächsten kommenden Zeitraums. Gibt nur 'Unbekannt' zurück, wenn keine Zeiträume ermittelt wurden.", "long_description": "Gesamtdauer des aktuellen oder nächsten teuren Zeitraums. Der State wird in Stunden angezeigt (z. B. 1,5 h) für leichtes Ablesen in der UI, während das Attribut `period_duration_minutes` denselben Wert in Minuten bereitstellt (z. B. 90) für Automationen. Dieser Wert repräsentiert die **volle geplante Dauer** des Zeitraums und ist konstant während des gesamten Zeitraums, auch wenn die verbleibende Zeit (remaining_minutes) abnimmt.",
"usage_tips": "Nützlich für Planung: 'Der nächste teure Zeitraum dauert 60 Minuten' oder 'Der aktuelle Spitzenzeitraum ist 90 Minuten lang'. Kombiniere mit remaining_minutes, um zu entscheiden, ob die Spitze abgewartet oder der Betrieb fortgesetzt werden soll." "usage_tips": "Kombiniere mit remaining_minutes, um zu berechnen, wann langlaufende Geräte gestoppt werden sollen: Zeitraum begann vor `period_duration_minutes - remaining_minutes` Minuten. Dieses Attribut unterstützt Energiespar-Strategien, indem es hilft, Hochverbrauchsaktivitäten außerhalb teurer Perioden zu planen."
},
"peak_price_remaining_minutes": {
"description": "Verbleibende Minuten im aktuellen teuren Zeitraum (0 wenn inaktiv)", "description": "Verbleibende Zeit im aktuellen teuren Zeitraum",
"long_description": "Zeigt, wie viele Minuten im aktuellen teuren Zeitraum noch verbleiben. Gibt 0 zurück, wenn kein Zeitraum aktiv ist. Aktualisiert sich jede Minute. Prüfe binary_sensor.peak_price_period, um zu sehen, ob ein Zeitraum aktuell aktiv ist.", "long_description": "Zeigt, wie viel Zeit im aktuellen teuren Zeitraum noch verbleibt. Der State wird in Stunden angezeigt (z. B. 0,75 h) für einfaches Ablesen in Dashboards, während das Attribut `remaining_minutes` dieselbe Zeit in Minuten liefert (z. B. 45) für Automationsbedingungen. **Countdown-Timer**: Dieser Wert dekrementiert jede Minute während eines aktiven Zeitraums. Gibt 0 zurück, wenn kein teurer Zeitraum aktiv ist. Aktualisiert sich minütlich.",
"usage_tips": "Nutze in Automatisierungen: 'Wenn remaining_minutes > 60, breche aufgeschobene Ladesitzung ab'. Wert 0 macht es einfach zu unterscheiden zwischen aktivem (Wert > 0) und inaktivem (Wert = 0) Zeitraum." "usage_tips": "Für Automationen: Nutze Attribut `remaining_minutes` wie 'Wenn remaining_minutes > 60, setze Heizung auf Energiesparmodus' oder 'Wenn remaining_minutes < 15, erhöhe Temperatur wieder'. UI zeigt benutzerfreundliche Stunden (z. B. 1,25 h). Wert 0 zeigt an, dass kein teurer Zeitraum aktiv ist."
},
"peak_price_progress": {
"description": "Fortschritt durch aktuellen teuren Zeitraum (0% wenn inaktiv)",
@@ -359,9 +372,9 @@
"usage_tips": "Immer nützlich für Planung: 'Nächster teurer Zeitraum startet in 2 Stunden'. Automatisierung: 'Wenn nächste Startzeit in 30 Minuten ist, reduziere Heiztemperatur vorsorglich'." "usage_tips": "Immer nützlich für Planung: 'Nächster teurer Zeitraum startet in 2 Stunden'. Automatisierung: 'Wenn nächste Startzeit in 30 Minuten ist, reduziere Heiztemperatur vorsorglich'."
}, },
"peak_price_next_in_minutes": { "peak_price_next_in_minutes": {
"description": "Minuten bis nächster teurer Zeitraum startet (0 beim Übergang)", "description": "Zeit bis zum nächsten teuren Zeitraum",
"long_description": "Zeigt Minuten bis der nächste teure Zeitraum startet. Während eines aktiven Zeitraums zeigt dies die Zeit bis zum Zeitraum nach dem aktuellen. Gibt 0 während kurzer Übergangsphasen zurück. Aktualisiert sich jede Minute.", "long_description": "Zeigt, wie lange es bis zum nächsten teuren Zeitraum dauert. Der State wird in Stunden angezeigt (z. B. 2,25 h) für Dashboards, während das Attribut `next_in_minutes` Minuten bereitstellt (z. B. 135) für Automationsbedingungen. Während eines aktiven Zeitraums zeigt dies die Zeit bis zum Zeitraum nach dem aktuellen. Gibt 0 während kurzer Übergangsphasen zurück. Aktualisiert sich jede Minute.",
"usage_tips": "Präventive Automatisierung: 'Wenn next_in_minutes > 0 UND next_in_minutes < 10, beende aktuellen Ladezyklus jetzt, bevor die Preise steigen'." "usage_tips": "Für Automationen: Attribut `next_in_minutes` nutzen wie 'Wenn next_in_minutes > 0 UND next_in_minutes < 10, reduziere Heizung vorsorglich bevor der teure Zeitraum beginnt'. Wert > 0 zeigt immer an, dass ein zukünftiger teurer Zeitraum geplant ist."
},
"home_type": {
"description": "Art der Wohnung (Wohnung, Haus usw.)",
@@ -437,6 +450,11 @@
"description": "Datenexport für Dashboard-Integrationen", "description": "Datenexport für Dashboard-Integrationen",
"long_description": "Dieser Sensor ruft den get_chartdata-Service mit deiner konfigurierten YAML-Konfiguration auf und stellt das Ergebnis als Entity-Attribute bereit. Der Status zeigt 'ready' wenn Daten verfügbar sind, 'error' bei Fehlern, oder 'pending' vor dem ersten Aufruf. Perfekt für Dashboard-Integrationen wie ApexCharts, die Preisdaten aus Entity-Attributen lesen.", "long_description": "Dieser Sensor ruft den get_chartdata-Service mit deiner konfigurierten YAML-Konfiguration auf und stellt das Ergebnis als Entity-Attribute bereit. Der Status zeigt 'ready' wenn Daten verfügbar sind, 'error' bei Fehlern, oder 'pending' vor dem ersten Aufruf. Perfekt für Dashboard-Integrationen wie ApexCharts, die Preisdaten aus Entity-Attributen lesen.",
"usage_tips": "Konfiguriere die YAML-Parameter in den Integrationsoptionen entsprechend deinem get_chartdata-Service-Aufruf. Der Sensor aktualisiert automatisch bei Preisdaten-Updates (typischerweise nach Mitternacht und wenn morgige Daten eintreffen). Greife auf die Service-Response-Daten direkt über die Entity-Attribute zu - die Struktur entspricht exakt dem, was get_chartdata zurückgibt." "usage_tips": "Konfiguriere die YAML-Parameter in den Integrationsoptionen entsprechend deinem get_chartdata-Service-Aufruf. Der Sensor aktualisiert automatisch bei Preisdaten-Updates (typischerweise nach Mitternacht und wenn morgige Daten eintreffen). Greife auf die Service-Response-Daten direkt über die Entity-Attribute zu - die Struktur entspricht exakt dem, was get_chartdata zurückgibt."
},
"chart_metadata": {
"description": "Leichtgewichtige Metadaten für Diagrammkonfiguration",
"long_description": "Liefert wesentliche Diagrammkonfigurationswerte als Sensor-Attribute. Nützlich für jede Diagrammkarte, die Y-Achsen-Grenzen benötigt. Der Sensor ruft get_chartdata im Nur-Metadaten-Modus auf (keine Datenverarbeitung) und extrahiert: yaxis_min, yaxis_max (vorgeschlagener Y-Achsenbereich für optimale Skalierung). Der Status spiegelt das Service-Call-Ergebnis wider: 'ready' bei Erfolg, 'error' bei Fehler, 'pending' während der Initialisierung.",
"usage_tips": "Konfiguriere über configuration.yaml unter tibber_prices.chart_metadata_config (optional: day, subunit_currency, resolution). Der Sensor aktualisiert sich automatisch bei Preisdatenänderungen. Greife auf Metadaten aus Attributen zu: yaxis_min, yaxis_max. Verwende mit config-template-card oder jedem Tool, das Entity-Attribute liest - perfekt für dynamische Diagrammkonfiguration ohne manuelle Berechnungen."
}
},
"binary_sensor": {
@@ -471,11 +489,95 @@
"usage_tips": "Verwende dies, um zu überprüfen, ob Echtzeit-Verbrauchsdaten verfügbar sind. Aktiviere Benachrichtigungen, falls dies unerwartet auf 'Aus' wechselt, was auf potenzielle Hardware- oder Verbindungsprobleme hinweist." "usage_tips": "Verwende dies, um zu überprüfen, ob Echtzeit-Verbrauchsdaten verfügbar sind. Aktiviere Benachrichtigungen, falls dies unerwartet auf 'Aus' wechselt, was auf potenzielle Hardware- oder Verbindungsprobleme hinweist."
} }
}, },
"number": {
"best_price_flex_override": {
"description": "Maximaler Prozentsatz über dem Tagesminimumpreis, den Intervalle haben können und trotzdem als 'Bestpreis' gelten. Empfohlen: 15-20 mit Lockerung aktiviert (Standard), oder 25-35 ohne Lockerung. Maximum: 50 (Obergrenze für zuverlässige Periodenerkennung).",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Flexibilität' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Aktiviere diese Entität, um die Bestpreiserkennung dynamisch über Automatisierungen anzupassen, z.B. höhere Flexibilität bei kritischen Lasten oder engere Anforderungen für flexible Geräte."
},
"best_price_min_distance_override": {
"description": "Minimaler prozentualer Abstand unter dem Tagesdurchschnitt. Intervalle müssen so weit unter dem Durchschnitt liegen, um als 'Bestpreis' zu gelten. Hilft, echte Niedrigpreis-Perioden von durchschnittlichen Preisen zu unterscheiden.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestabstand' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Erhöhe den Wert, wenn du strengere Bestpreis-Kriterien möchtest. Verringere ihn, wenn zu wenige Perioden erkannt werden."
},
"best_price_min_period_length_override": {
"description": "Minimale Periodenl\u00e4nge in 15-Minuten-Intervallen. Perioden kürzer als diese werden nicht gemeldet. Beispiel: 2 = mindestens 30 Minuten.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestperiodenlänge' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Passe an die typische Laufzeit deiner Geräte an: 2 (30 Min) für Schnellprogramme, 4-8 (1-2 Std) für normale Zyklen, 8+ für lange ECO-Programme."
},
"best_price_min_periods_override": {
"description": "Minimale Anzahl an Bestpreis-Perioden, die täglich gefunden werden sollen. Wenn Lockerung aktiviert ist, wird das System die Kriterien automatisch anpassen, um diese Zahl zu erreichen.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestperioden' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Setze dies auf die Anzahl zeitkritischer Aufgaben, die du täglich hast. Beispiel: 2 für zwei Waschmaschinenladungen."
},
"best_price_relaxation_attempts_override": {
"description": "Anzahl der Versuche, die Kriterien schrittweise zu lockern, um die Mindestperiodenanzahl zu erreichen. Jeder Versuch erhöht die Flexibilität um 3 Prozent. Bei 0 werden nur Basis-Kriterien verwendet.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Lockerungsversuche' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Höhere Werte machen die Periodenerkennung anpassungsfähiger an Tage mit stabilen Preisen. Setze auf 0, um strenge Kriterien ohne Lockerung zu erzwingen."
},
"best_price_gap_count_override": {
"description": "Maximale Anzahl teurerer Intervalle, die zwischen günstigen Intervallen erlaubt sind und trotzdem als eine zusammenhängende Periode gelten. Bei 0 müssen günstige Intervalle aufeinander folgen.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Lückentoleranz' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Erhöhe dies für Geräte mit variabler Last (z.B. Wärmepumpen), die kurze teurere Intervalle tolerieren können. Setze auf 0 für kontinuierliche günstige Perioden."
},
"peak_price_flex_override": {
"description": "Maximaler Prozentsatz unter dem Tagesmaximumpreis, den Intervalle haben können und trotzdem als 'Spitzenpreis' gelten. Gleiche Empfehlungen wie für Bestpreis-Flexibilität.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Flexibilität' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Nutze dies, um den Spitzenpreis-Schwellenwert zur Laufzeit für Automatisierungen anzupassen, die den Verbrauch während teurer Stunden vermeiden."
},
"peak_price_min_distance_override": {
"description": "Minimaler prozentualer Abstand über dem Tagesdurchschnitt. Intervalle müssen so weit über dem Durchschnitt liegen, um als 'Spitzenpreis' zu gelten.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestabstand' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Erhöhe den Wert, um nur extreme Preisspitzen zu erfassen. Verringere ihn, um mehr Hochpreiszeiten einzubeziehen."
},
"peak_price_min_period_length_override": {
"description": "Minimale Periodenl\u00e4nge in 15-Minuten-Intervallen für Spitzenpreise. Kürzere Preisspitzen werden nicht als Perioden gemeldet.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestperiodenlänge' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Kürzere Werte erfassen kurze Preisspitzen. Längere Werte fokussieren auf anhaltende Hochpreisphasen."
},
"peak_price_min_periods_override": {
"description": "Minimale Anzahl an Spitzenpreis-Perioden, die täglich gefunden werden sollen.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestperioden' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Setze dies basierend darauf, wie viele Hochpreisphasen du pro Tag für Automatisierungen erfassen möchtest."
},
"peak_price_relaxation_attempts_override": {
"description": "Anzahl der Versuche, die Kriterien zu lockern, um die Mindestanzahl an Spitzenpreis-Perioden zu erreichen.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Lockerungsversuche' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Erhöhe dies, wenn an Tagen mit stabilen Preisen keine Perioden gefunden werden. Setze auf 0, um strenge Kriterien zu erzwingen."
},
"peak_price_gap_count_override": {
"description": "Maximale Anzahl günstigerer Intervalle, die zwischen teuren Intervallen erlaubt sind und trotzdem als eine Spitzenpreis-Periode gelten.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Lückentoleranz' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Höhere Werte erfassen längere Hochpreisphasen auch mit kurzen Preiseinbrüchen. Setze auf 0, um strikt zusammenhängende Spitzenpreise zu erfassen."
}
},
"switch": {
"best_price_enable_relaxation_override": {
"description": "Wenn aktiviert, werden die Kriterien automatisch gelockert, um die Mindestperiodenanzahl zu erreichen. Wenn deaktiviert, werden nur Perioden gemeldet, die die strengen Kriterien erfüllen (möglicherweise null Perioden bei stabilen Preisen).",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestanzahl erreichen' aus dem Optionen-Dialog für die Bestpreis-Periodenberechnung.",
"usage_tips": "Aktiviere dies für garantierte tägliche Automatisierungsmöglichkeiten. Deaktiviere es, wenn du nur wirklich günstige Zeiträume willst, auch wenn das bedeutet, dass an manchen Tagen keine Perioden gefunden werden."
},
"peak_price_enable_relaxation_override": {
"description": "Wenn aktiviert, werden die Kriterien automatisch gelockert, um die Mindestperiodenanzahl zu erreichen. Wenn deaktiviert, werden nur echte Preisspitzen gemeldet.",
"long_description": "Wenn diese Entität aktiviert ist, überschreibt ihr Wert die Einstellung 'Mindestanzahl erreichen' aus dem Optionen-Dialog für die Spitzenpreis-Periodenberechnung.",
"usage_tips": "Aktiviere dies für konsistente Spitzenpreis-Warnungen. Deaktiviere es, um nur extreme Preisspitzen zu erfassen."
}
},
"home_types": { "home_types": {
"APARTMENT": "Wohnung", "APARTMENT": "Wohnung",
"ROWHOUSE": "Reihenhaus", "ROWHOUSE": "Reihenhaus",
"HOUSE": "Haus", "HOUSE": "Haus",
"COTTAGE": "Ferienhaus" "COTTAGE": "Ferienhaus"
}, },
"time_units": {
"day": "{count} Tag",
"days": "{count} Tagen",
"hour": "{count} Stunde",
"hours": "{count} Stunden",
"minute": "{count} Minute",
"minutes": "{count} Minuten",
"ago": "vor {parts}",
"now": "jetzt"
},
"attribution": "Daten bereitgestellt von Tibber" "attribution": "Daten bereitgestellt von Tibber"
} }


@@ -1,7 +1,20 @@
 {
 "apexcharts": {
 "title_rating_level": "Price Phases Daily Progress",
-"title_level": "Price Level"
+"title_level": "Price Level",
+"hourly_suffix": "(Ø hourly)",
+"best_price_period_name": "Best Price Period",
+"peak_price_period_name": "Peak Price Period",
+"notification": {
+"metadata_sensor_unavailable": {
+"title": "Tibber Prices: ApexCharts YAML Generated with Limited Functionality",
+"message": "You just generated an ApexCharts card configuration via Developer Tools. The Chart Metadata sensor is currently disabled, so the generated YAML will only show **basic functionality** (auto-scale axis, fixed gradient at 50%).\n\n**To enable full functionality** (optimized scaling, dynamic gradient colors):\n1. [Open Tibber Prices Integration](https://my.home-assistant.io/redirect/integration/?domain=tibber_prices)\n2. Enable the 'Chart Metadata' sensor\n3. **Generate the YAML again** via Developer Tools\n4. **Replace the old YAML** in your dashboard with the new version\n\n⚠ Simply enabling the sensor is not enough - you must regenerate and replace the YAML code!"
+},
+"missing_cards": {
+"title": "Tibber Prices: ApexCharts YAML Cannot Be Used",
+"message": "You just generated an ApexCharts card configuration via Developer Tools, but the generated YAML **will not work** because required custom cards are missing.\n\n**Missing cards:**\n{cards}\n\n**To use the generated YAML:**\n1. Click the links above to install the missing cards from HACS\n2. Restart Home Assistant (sometimes needed)\n3. **Generate the YAML again** via Developer Tools\n4. Add the YAML to your dashboard\n\n⚠ The current YAML code will not work until all cards are installed!"
+}
+}
 },
 "sensor": {
 "current_interval_price": {
@@ -9,9 +22,9 @@
 "long_description": "Shows the current price per kWh from your Tibber subscription",
 "usage_tips": "Use this to track prices or to create automations that run when electricity is cheap"
 },
-"current_interval_price_major": {
-"description": "Current electricity price in major currency (EUR/kWh, NOK/kWh, etc.) for Energy Dashboard",
-"long_description": "Shows the current price per kWh in major currency units (e.g., EUR/kWh instead of ct/kWh, NOK/kWh instead of øre/kWh). This sensor is specifically designed for use with Home Assistant's Energy Dashboard, which requires prices in standard currency units.",
+"current_interval_price_base": {
+"description": "Current electricity price in base currency (EUR/kWh, NOK/kWh, etc.) for Energy Dashboard",
+"long_description": "Shows the current price per kWh in base currency units (e.g., EUR/kWh instead of ct/kWh, NOK/kWh instead of øre/kWh). This sensor is specifically designed for use with Home Assistant's Energy Dashboard, which requires prices in standard currency units.",
 "usage_tips": "Use this sensor when configuring the Energy Dashboard under Settings → Dashboards → Energy. Select this sensor as the 'Entity with current price' to automatically calculate your energy costs. The Energy Dashboard multiplies your energy consumption (kWh) by this price to show total costs."
 },
 "next_interval_price": {
@@ -45,9 +58,9 @@
 "usage_tips": "Use this to avoid running appliances during peak price times"
 },
 "average_price_today": {
-"description": "The average electricity price for today per kWh",
-"long_description": "Shows the average price per kWh for the current day from your Tibber subscription",
-"usage_tips": "Use this as a baseline for comparing current prices"
+"description": "The typical electricity price for today per kWh (configurable display format)",
+"long_description": "Shows the typical price per kWh for today. **By default, the state displays the median** (resistant to extreme spikes, showing what you can generally expect). You can change this in the integration options to show the arithmetic mean instead. The alternate value is always available as attribute `price_mean` or `price_median` for automations.",
+"usage_tips": "Use the state value for display. For exact cost calculations in automations, use: {{ state_attr('sensor.average_price_today', 'price_mean') }}"
 },
 "lowest_price_tomorrow": {
 "description": "The lowest electricity price for tomorrow per kWh",
@@ -60,9 +73,9 @@
 "usage_tips": "Use this to avoid running appliances during tomorrow's peak price times. Helpful for planning around expensive periods."
 },
 "average_price_tomorrow": {
-"description": "The average electricity price for tomorrow per kWh",
-"long_description": "Shows the average price per kWh for tomorrow from your Tibber subscription. This sensor becomes unavailable until tomorrow's data is published by Tibber (typically around 13:00-14:00 CET).",
-"usage_tips": "Use this as a baseline for comparing tomorrow's prices and planning consumption. Compare with today's average to see if tomorrow will be more or less expensive overall."
+"description": "The typical electricity price for tomorrow per kWh (configurable display format)",
+"long_description": "Shows the typical price per kWh for tomorrow. **By default, the state displays the median** (resistant to extreme spikes). You can change this in the integration options to show the arithmetic mean instead. The alternate value is available as attribute. This sensor becomes unavailable until tomorrow's data is published by Tibber (typically around 13:00-14:00 CET).",
+"usage_tips": "Use this to plan tomorrow's energy consumption. For cost calculations, use: {{ state_attr('sensor.average_price_tomorrow', 'price_mean') }}"
 },
 "yesterday_price_level": {
 "description": "Aggregated price level for yesterday",
@@ -95,14 +108,14 @@
 "usage_tips": "Use this to plan tomorrow's energy consumption based on your personalized price thresholds. Compare with today to decide if you should shift consumption to tomorrow or use energy today."
 },
 "trailing_price_average": {
-"description": "The average electricity price for the past 24 hours per kWh",
-"long_description": "Shows the average price per kWh calculated from the past 24 hours (trailing average) from your Tibber subscription. This provides a rolling average that updates every 15 minutes based on historical data.",
-"usage_tips": "Use this to compare current prices against recent trends. A current price significantly above this average may indicate a good time to reduce consumption."
+"description": "The typical electricity price for the past 24 hours per kWh (configurable display format)",
+"long_description": "Shows the typical price per kWh for the past 24 hours. **By default, the state displays the median** (resistant to extreme spikes, showing what price level was typical). You can change this in the integration options to show the arithmetic mean instead. The alternate value is available as attribute. Updates every 15 minutes.",
+"usage_tips": "Use the state value to see the typical recent price level. For cost calculations, use: {{ state_attr('sensor.trailing_price_average', 'price_mean') }}"
 },
 "leading_price_average": {
-"description": "The average electricity price for the next 24 hours per kWh",
-"long_description": "Shows the average price per kWh calculated from the next 24 hours (leading average) from your Tibber subscription. This provides a forward-looking average based on available forecast data.",
-"usage_tips": "Use this to plan energy usage. If the current price is below the leading average, it may be a good time to run energy-intensive appliances."
+"description": "The typical electricity price for the next 24 hours per kWh (configurable display format)",
+"long_description": "Shows the typical price per kWh for the next 24 hours. **By default, the state displays the median** (resistant to extreme spikes, showing what price level to expect). You can change this in the integration options to show the arithmetic mean instead. The alternate value is available as attribute.",
+"usage_tips": "Use the state value to see the typical upcoming price level. For cost calculations, use: {{ state_attr('sensor.leading_price_average', 'price_mean') }}"
 },
 "trailing_price_min": {
 "description": "The minimum electricity price for the past 24 hours per kWh",
@@ -279,24 +292,24 @@
 "long_description": "Shows the timestamp of the latest available price data interval from your Tibber subscription"
 },
 "today_volatility": {
-"description": "Price volatility classification for today",
-"long_description": "Shows how much electricity prices vary throughout today based on the spread (difference between highest and lowest price). Classification: LOW = spread < 5ct, MODERATE = 5-15ct, HIGH = 15-30ct, VERY HIGH = >30ct.",
-"usage_tips": "Use this to decide if price-based optimization is worthwhile. For example, with a balcony battery that has 15% efficiency losses, optimization only makes sense when volatility is at least MODERATE. Create automations that check volatility before scheduling charging/discharging cycles."
+"description": "How much electricity prices change throughout today",
+"long_description": "Indicates whether today's prices are stable or have big swings. Low volatility means prices stay fairly consistent—timing doesn't matter much. High volatility means significant price differences throughout the day—great opportunity to shift consumption to cheaper periods. Check `price_coefficient_variation_%` for the variance percentage and `price_spread` for the absolute price span.",
+"usage_tips": "Use this to decide if optimization is worth your effort. On low-volatility days, you can run devices anytime. On high-volatility days, following Best Price periods saves meaningful money."
 },
 "tomorrow_volatility": {
-"description": "Price volatility classification for tomorrow",
-"long_description": "Shows how much electricity prices will vary throughout tomorrow based on the spread (difference between highest and lowest price). Becomes unavailable until tomorrow's data is published (typically 13:00-14:00 CET).",
-"usage_tips": "Use this for advance planning of tomorrow's energy usage. If tomorrow has HIGH or VERY HIGH volatility, it's worth optimizing energy consumption timing. If LOW, you can run devices anytime without significant cost differences."
+"description": "How much electricity prices will change tomorrow",
+"long_description": "Indicates whether tomorrow's prices will be stable or have big swings. Available once tomorrow's data is published (typically 13:00-14:00 CET). Low volatility means prices stay fairly consistent—timing isn't critical. High volatility means significant price differences throughout the day—good opportunity for scheduling energy-intensive activities. Check `price_coefficient_variation_%` for the variance percentage and `price_spread` for the absolute price span.",
+"usage_tips": "Use for planning tomorrow's energy consumption. High volatility? Schedule flexible loads during Best Price periods. Low volatility? Run devices whenever is convenient."
 },
 "next_24h_volatility": {
-"description": "Price volatility classification for the rolling next 24 hours",
-"long_description": "Shows how much electricity prices vary in the next 24 hours from now (rolling window). This crosses day boundaries and updates every 15 minutes, providing a forward-looking volatility assessment independent of calendar days.",
-"usage_tips": "Best sensor for real-time optimization decisions. Unlike today/tomorrow sensors that switch at midnight, this provides continuous 24h volatility assessment. Use for battery charging strategies that span across day boundaries."
+"description": "How much prices will change over the next 24 hours",
+"long_description": "Indicates price volatility for a rolling 24-hour window from now (updates every 15 minutes). Low volatility means prices stay fairly consistent. High volatility means significant price swings offer optimization opportunities. Unlike today/tomorrow sensors, this crosses day boundaries and provides a continuous forward-looking assessment. Check `price_coefficient_variation_%` for the variance percentage and `price_spread` for the absolute price span.",
+"usage_tips": "Best for real-time decisions. Use when planning battery charging strategies or other flexible loads that might span across midnight. Provides consistent 24h perspective regardless of calendar day."
 },
 "today_tomorrow_volatility": {
-"description": "Combined price volatility classification for today and tomorrow",
-"long_description": "Shows volatility across both today and tomorrow combined (when tomorrow's data is available). Provides an extended view of price variation spanning up to 48 hours. Falls back to today-only when tomorrow's data isn't available yet.",
-"usage_tips": "Use this for multi-day planning and to understand if price opportunities exist across the day boundary. The 'today_volatility' and 'tomorrow_volatility' breakdown attributes show individual day contributions. Useful for scheduling charging sessions that might span midnight."
+"description": "Combined price volatility across today and tomorrow",
+"long_description": "Shows overall price volatility when considering both today and tomorrow together (when available). Indicates whether there are significant price differences across the day boundary. Falls back to today-only when tomorrow's data isn't available yet. Useful for understanding multi-day optimization opportunities. Check `price_coefficient_variation_%` for the variance percentage and `price_spread` for the absolute price span.",
+"usage_tips": "Use for planning tasks that span multiple days. Check if prices vary enough to make scheduling worthwhile. The individual day volatility sensors show breakdown per day if you need more detail."
 },
 "data_lifecycle_status": {
 "description": "Current state of price data lifecycle and caching",
@@ -309,14 +322,14 @@
 "usage_tips": "Use this to display a countdown like 'Cheap period ends in 2 hours' (when active) or 'Next cheap period ends at 14:00' (when inactive). Home Assistant automatically shows relative time for timestamp sensors."
 },
 "best_price_period_duration": {
-"description": "Total length of current or next best price period in minutes",
-"long_description": "Shows how long the best price period lasts in total. During an active period, shows the duration of the current period. When no period is active, shows the duration of the next upcoming period. Returns 'Unknown' only when no periods are configured.",
-"usage_tips": "Useful for planning: 'The next cheap period lasts 90 minutes' or 'Current cheap period is 120 minutes long'. Combine with remaining_minutes to calculate when to start long-running appliances."
+"description": "Total length of current or next best price period",
+"long_description": "Shows how long the best price period lasts in total. The state is displayed in hours (e.g., 1.5 h) for easy reading in the UI, while the `period_duration_minutes` attribute provides the same value in minutes (e.g., 90) for use in automations. During an active period, shows the duration of the current period. When no period is active, shows the duration of the next upcoming period. Returns 'Unknown' only when no periods are configured.",
+"usage_tips": "For display: Use the state value (hours) in dashboards. For automations: Use `period_duration_minutes` attribute to check if there's enough time for long-running tasks (e.g., 'If period_duration_minutes >= 90, start washing machine')."
 },
 "best_price_remaining_minutes": {
-"description": "Minutes remaining in current best price period (0 when inactive)",
-"long_description": "Shows how many minutes are left in the current best price period. Returns 0 when no period is active. Updates every minute. Check binary_sensor.best_price_period to see if a period is currently active.",
-"usage_tips": "Perfect for automations: 'If remaining_minutes > 0 AND remaining_minutes < 30, start washing machine now'. The value 0 makes it easy to check if a period is active (value > 0) or not (value = 0)."
+"description": "Time remaining in current best price period",
+"long_description": "Shows how much time is left in the current best price period. The state displays in hours (e.g., 0.5 h) for easy reading, while the `remaining_minutes` attribute provides minutes (e.g., 30) for automation logic. Returns 0 when no period is active. Updates every minute. Check binary_sensor.best_price_period to see if a period is currently active.",
+"usage_tips": "For automations: Use `remaining_minutes` attribute with numeric comparisons like 'If remaining_minutes > 0 AND remaining_minutes < 30, start washing machine now'. The value 0 makes it easy to check if a period is active (value > 0) or not (value = 0)."
 },
 "best_price_progress": {
 "description": "Progress through current best price period (0% when inactive)",
@@ -329,9 +342,9 @@
 "usage_tips": "Always useful for planning ahead: 'Next cheap period starts in 3 hours' (whether you're in a period now or not). Combine with automations: 'When next start time is in 10 minutes, send notification to prepare washing machine'."
 },
 "best_price_next_in_minutes": {
-"description": "Minutes until next best price period starts (0 when in transition)",
-"long_description": "Shows minutes until the next best price period starts. During an active period, shows time until the period AFTER the current one. Returns 0 during brief transition moments. Updates every minute.",
-"usage_tips": "Perfect for 'wait until cheap period' automations: 'If next_in_minutes > 0 AND next_in_minutes < 15, wait before starting dishwasher'. Value > 0 always indicates a future period is scheduled."
+"description": "Time until next best price period starts",
+"long_description": "Shows how long until the next best price period starts. The state displays in hours (e.g., 2.25 h) for dashboards, while the `next_in_minutes` attribute provides minutes (e.g., 135) for automation conditions. During an active period, shows time until the period AFTER the current one. Returns 0 during brief transition moments. Updates every minute.",
+"usage_tips": "For automations: Use `next_in_minutes` attribute like 'If next_in_minutes > 0 AND next_in_minutes < 15, wait before starting dishwasher'. Value > 0 always indicates a future period is scheduled."
 },
 "peak_price_end_time": {
 "description": "When the current or next peak price period ends",
@@ -339,14 +352,14 @@
 "usage_tips": "Use this to display 'Expensive period ends in 1 hour' (when active) or 'Next expensive period ends at 18:00' (when inactive). Combine with automations to resume operations after peak."
 },
 "peak_price_period_duration": {
-"description": "Total length of current or next peak price period in minutes",
-"long_description": "Shows how long the peak price period lasts in total. During an active period, shows the duration of the current period. When no period is active, shows the duration of the next upcoming period. Returns 'Unknown' only when no periods are configured.",
-"usage_tips": "Useful for planning: 'The next expensive period lasts 60 minutes' or 'Current peak is 90 minutes long'. Combine with remaining_minutes to decide whether to wait out the peak or proceed with operations."
+"description": "Total length of current or next peak price period",
+"long_description": "Shows how long the peak price period lasts in total. The state is displayed in hours (e.g., 0.75 h) for easy reading in the UI, while the `period_duration_minutes` attribute provides the same value in minutes (e.g., 45) for use in automations. During an active period, shows the duration of the current period. When no period is active, shows the duration of the next upcoming period. Returns 'Unknown' only when no periods are configured.",
+"usage_tips": "For display: Use the state value (hours) in dashboards. For automations: Use `period_duration_minutes` attribute to decide whether to wait out the peak or proceed (e.g., 'If period_duration_minutes <= 60, pause operations')."
 },
 "peak_price_remaining_minutes": {
-"description": "Minutes remaining in current peak price period (0 when inactive)",
-"long_description": "Shows how many minutes are left in the current peak price period. Returns 0 when no period is active. Updates every minute. Check binary_sensor.peak_price_period to see if a period is currently active.",
-"usage_tips": "Use in automations: 'If remaining_minutes > 60, cancel deferred charging session'. Value 0 makes it easy to distinguish active (value > 0) from inactive (value = 0) periods."
+"description": "Time remaining in current peak price period",
+"long_description": "Shows how much time is left in the current peak price period. The state displays in hours (e.g., 1.0 h) for easy reading, while the `remaining_minutes` attribute provides minutes (e.g., 60) for automation logic. Returns 0 when no period is active. Updates every minute. Check binary_sensor.peak_price_period to see if a period is currently active.",
+"usage_tips": "For automations: Use `remaining_minutes` attribute like 'If remaining_minutes > 60, cancel deferred charging session'. Value 0 makes it easy to distinguish active (value > 0) from inactive (value = 0) periods."
 },
 "peak_price_progress": {
 "description": "Progress through current peak price period (0% when inactive)",
@@ -359,9 +372,9 @@
 "usage_tips": "Always useful for planning: 'Next expensive period starts in 2 hours'. Automation: 'When next start time is in 30 minutes, reduce heating temperature preemptively'."
 },
 "peak_price_next_in_minutes": {
-"description": "Minutes until next peak price period starts (0 when in transition)",
-"long_description": "Shows minutes until the next peak price period starts. During an active period, shows time until the period AFTER the current one. Returns 0 during brief transition moments. Updates every minute.",
-"usage_tips": "Pre-emptive automation: 'If next_in_minutes > 0 AND next_in_minutes < 10, complete current charging cycle now before prices increase'."
+"description": "Time until next peak price period starts",
+"long_description": "Shows how long until the next peak price period starts. The state displays in hours (e.g., 0.5 h) for dashboards, while the `next_in_minutes` attribute provides minutes (e.g., 30) for automation conditions. During an active period, shows time until the period AFTER the current one. Returns 0 during brief transition moments. Updates every minute.",
+"usage_tips": "For automations: Use `next_in_minutes` attribute like 'If next_in_minutes > 0 AND next_in_minutes < 10, complete current charging cycle now before prices increase'."
 },
 "home_type": {
 "description": "Type of home (apartment, house, etc.)",
@@ -432,6 +445,16 @@
 "description": "Status of your Tibber subscription",
 "long_description": "Shows whether your Tibber subscription is currently running, has ended, or is pending activation. A status of 'running' means you're actively receiving electricity through Tibber.",
 "usage_tips": "Use this to monitor your subscription status. Set up alerts if status changes from 'running' to ensure uninterrupted service."
+},
+"chart_data_export": {
+"description": "Data export for dashboard integrations",
+"long_description": "This binary sensor calls the get_chartdata service with your configured YAML parameters and exposes the result as entity attributes. The state is 'on' when the service call succeeds and data is available, 'off' when the call fails or no configuration is set. Perfect for dashboard integrations like ApexCharts that need to read price data from entity attributes.",
+"usage_tips": "Configure the YAML parameters in the integration options to match your get_chartdata service call. The sensor will automatically refresh when price data updates (typically after midnight and when tomorrow's data arrives). Access the service response data directly from the entity's attributes - the structure matches exactly what get_chartdata returns."
+},
+"chart_metadata": {
+"description": "Lightweight metadata for chart configuration",
+"long_description": "Provides essential chart configuration values as sensor attributes. Useful for any chart card that needs Y-axis bounds. The sensor calls get_chartdata with metadata-only mode (no data processing) and extracts: yaxis_min, yaxis_max (suggested Y-axis range for optimal scaling). The state reflects the service call result: 'ready' when successful, 'error' on failure, 'pending' during initialization.",
+"usage_tips": "Configure via configuration.yaml under tibber_prices.chart_metadata_config (optional: day, subunit_currency, resolution). The sensor automatically refreshes when price data updates. Access metadata from attributes: yaxis_min, yaxis_max. Use with config-template-card or any tool that reads entity attributes - perfect for dynamic chart configuration without manual calculations."
 }
 },
 "binary_sensor": {
@@ -464,11 +487,80 @@
"description": "Whether realtime consumption monitoring is active",
"long_description": "Indicates if realtime electricity consumption monitoring is enabled and active for your Tibber home. This requires compatible metering hardware (e.g., Tibber Pulse) and an active subscription.",
"usage_tips": "Use this to verify that realtime consumption data is available. Enable notifications if this changes to 'off' unexpectedly, indicating potential hardware or connectivity issues."
}
},
"number": {
"best_price_flex_override": {
"description": "Maximum above the daily minimum price that intervals can be and still qualify as 'best price'. Recommended: 15-20 with relaxation enabled (default), or 25-35 without relaxation. Maximum: 50 (hard cap for reliable period detection).",
"long_description": "When this entity is enabled, its value overrides the 'Flexibility' setting from the options flow for best price period calculations.",
"usage_tips": "Enable this entity to dynamically adjust best price detection via automations. Higher values create longer periods, lower values are stricter."
},
"best_price_min_distance_override": {
"description": "Ensures periods are significantly cheaper than the daily average, not just marginally below it. This filters out noise and prevents marking slightly-below-average periods as 'best price' on days with flat prices. Higher values = stricter filtering (only truly cheap periods qualify).",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Distance' setting from the options flow for best price period calculations.",
"usage_tips": "Use in automations to adjust how much better than average the best price periods must be. Higher values require prices to be further below average."
},
"best_price_min_period_length_override": {
"description": "Minimum duration for a period to be considered as 'best price'. Longer periods are more practical for running appliances like dishwashers or heat pumps. Best price periods require 60 minutes minimum (vs. 30 minutes for peak price warnings) because they should provide meaningful time windows for consumption planning.",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Period Length' setting from the options flow for best price period calculations.",
"usage_tips": "Increase when your appliances need longer uninterrupted run times (e.g., washing machines, dishwashers)."
},
"best_price_min_periods_override": {
"description": "Minimum number of best price periods to aim for per day. Filters will be relaxed step-by-step to try achieving this count. Only active when 'Achieve Minimum Count' is enabled.",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Periods' setting from the options flow for best price period calculations.",
"usage_tips": "Adjust dynamically based on how many times per day you need cheap electricity windows."
},
"best_price_relaxation_attempts_override": {
"description": "How many flex levels (attempts) to try before giving up. Each attempt runs all filter combinations at the new flex level. More attempts increase the chance of finding additional periods at the cost of longer processing time.",
"long_description": "When this entity is enabled, its value overrides the 'Relaxation Attempts' setting from the options flow for best price period calculations.",
"usage_tips": "Increase when periods are hard to find. Decrease for stricter price filtering."
},
"best_price_gap_count_override": {
"description": "Maximum number of consecutive intervals allowed that deviate by exactly one level step from the required level. This prevents periods from being split by occasional level deviations. Gap tolerance requires periods ≥90 minutes (6 intervals) to detect outliers effectively.",
"long_description": "When this entity is enabled, its value overrides the 'Gap Tolerance' setting from the options flow for best price period calculations.",
"usage_tips": "Increase to allow longer periods with occasional price spikes. Keep low for stricter continuous cheap periods."
},
"peak_price_flex_override": {
"description": "Maximum below the daily maximum price that intervals can be and still qualify as 'peak price'. Recommended: -15 to -20 with relaxation enabled (default), or -25 to -35 without relaxation. Maximum: -50 (hard cap for reliable period detection). Note: Negative values indicate distance below maximum.",
"long_description": "When this entity is enabled, its value overrides the 'Flexibility' setting from the options flow for peak price period calculations.",
"usage_tips": "Enable this entity to dynamically adjust peak price detection via automations. Higher values create longer peak periods."
},
"peak_price_min_distance_override": {
"description": "Ensures periods are significantly more expensive than the daily average, not just marginally above it. This filters out noise and prevents marking slightly-above-average periods as 'peak price' on days with flat prices. Higher values = stricter filtering (only truly expensive periods qualify).",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Distance' setting from the options flow for peak price period calculations.",
"usage_tips": "Use in automations to adjust how much higher than average the peak price periods must be."
},
"peak_price_min_period_length_override": {
"description": "Minimum duration for a period to be considered as 'peak price'. Peak price warnings are allowed for shorter periods (30 minutes minimum vs. 60 minutes for best price) because brief expensive spikes are worth alerting about, even if they're too short for consumption planning.",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Period Length' setting from the options flow for peak price period calculations.",
"usage_tips": "Increase to filter out brief price spikes, focusing on sustained expensive periods."
},
"peak_price_min_periods_override": {
"description": "Minimum number of peak price periods to aim for per day. Filters will be relaxed step-by-step to try achieving this count. Only active when 'Achieve Minimum Count' is enabled.",
"long_description": "When this entity is enabled, its value overrides the 'Minimum Periods' setting from the options flow for peak price period calculations.",
"usage_tips": "Adjust based on how many peak periods you want to identify and avoid."
},
"peak_price_relaxation_attempts_override": {
"description": "How many flex levels (attempts) to try before giving up. Each attempt runs all filter combinations at the new flex level. More attempts increase the chance of finding additional peak periods at the cost of longer processing time.",
"long_description": "When this entity is enabled, its value overrides the 'Relaxation Attempts' setting from the options flow for peak price period calculations.",
"usage_tips": "Increase when peak periods are hard to detect. Decrease for stricter peak price filtering."
},
"peak_price_gap_count_override": {
"description": "Maximum number of consecutive intervals allowed that deviate by exactly one level step from the required level. This prevents periods from being split by occasional level deviations. Gap tolerance requires periods ≥90 minutes (6 intervals) to detect outliers effectively.",
"long_description": "When this entity is enabled, its value overrides the 'Gap Tolerance' setting from the options flow for peak price period calculations.",
"usage_tips": "Increase to identify sustained expensive periods with brief dips. Keep low for stricter continuous peak detection."
}
},
"switch": {
"best_price_enable_relaxation_override": {
"description": "When enabled, filters will be gradually relaxed if not enough periods are found. This attempts to reach the desired minimum number of periods, which may include less optimal time windows as best-price periods.",
"long_description": "When this entity is enabled, its value overrides the 'Achieve Minimum Count' setting from the options flow for best price period calculations.",
"usage_tips": "Turn OFF to disable relaxation and use strict filtering only. Turn ON to allow the algorithm to relax criteria to find more periods."
},
"peak_price_enable_relaxation_override": {
"description": "When enabled, filters will be gradually relaxed if not enough periods are found. This attempts to reach the desired minimum number of periods to ensure you're warned about expensive periods even on days with unusual price patterns.",
"long_description": "When this entity is enabled, its value overrides the 'Achieve Minimum Count' setting from the options flow for peak price period calculations.",
"usage_tips": "Turn OFF to disable relaxation and use strict filtering only. Turn ON to allow the algorithm to relax criteria to find more peak periods."
}
},
"home_types": {
@@ -477,5 +569,15 @@
"HOUSE": "House",
"COTTAGE": "Cottage"
},
"time_units": {
"day": "{count} day",
"days": "{count} days",
"hour": "{count} hour",
"hours": "{count} hours",
"minute": "{count} minute",
"minutes": "{count} minutes",
"ago": "{parts} ago",
"now": "now"
},
"attribution": "Data provided by Tibber"
}
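The `time_units` block added above supplies singular/plural templates plus an `ago` wrapper with `{count}` and `{parts}` placeholders. As a rough illustration of how such keys compose into a relative-time string, here is a minimal sketch; the integration's actual formatting code is not shown in this diff, so the function name and composition logic below are hypothetical:

```python
# Sketch only: composes the "time_units" translation templates from the file
# above into a human-readable relative-time string. The real integration's
# formatter may differ; this just demonstrates the placeholder contract.

TIME_UNITS = {
    "day": "{count} day",
    "days": "{count} days",
    "hour": "{count} hour",
    "hours": "{count} hours",
    "minute": "{count} minute",
    "minutes": "{count} minutes",
    "ago": "{parts} ago",
    "now": "now",
}

def format_relative(minutes_elapsed: int) -> str:
    """Render an elapsed duration using the singular/plural keys."""
    if minutes_elapsed <= 0:
        return TIME_UNITS["now"]
    days, rem = divmod(minutes_elapsed, 1440)   # whole days
    hours, minutes = divmod(rem, 60)            # remaining hours/minutes
    parts = []
    for count, singular, plural in (
        (days, "day", "days"),
        (hours, "hour", "hours"),
        (minutes, "minute", "minutes"),
    ):
        if count:
            key = singular if count == 1 else plural
            parts.append(TIME_UNITS[key].format(count=count))
    return TIME_UNITS["ago"].format(parts=", ".join(parts))

print(format_relative(90))  # "1 hour, 30 minutes ago"
```

A translated locale (such as the Norwegian file below) only needs to provide the same keys with the same placeholders for this composition to keep working.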


@@ -1,7 +1,20 @@
{
"apexcharts": {
"title_rating_level": "Prisfaser dagsfremdrift",
"title_level": "Prisnivå",
"hourly_suffix": "(Ø per time)",
"best_price_period_name": "Beste prisperiode",
"peak_price_period_name": "Toppprisperiode",
"notification": {
"metadata_sensor_unavailable": {
"title": "Tibber Prices: ApexCharts YAML generert med begrenset funksjonalitet",
"message": "Du har nettopp generert en ApexCharts-kort-konfigurasjon via Utviklerverktøy. Diagram-metadata-sensoren er deaktivert, så den genererte YAML-en vil bare vise **grunnleggende funksjonalitet** (auto-skalering, fast gradient på 50%).\n\n**For full funksjonalitet** (optimert skalering, dynamiske gradientfarger):\n1. [Åpne Tibber Prices-integrasjonen](https://my.home-assistant.io/redirect/integration/?domain=tibber_prices)\n2. Aktiver 'Chart Metadata'-sensoren\n3. **Generer YAML-en på nytt** via Utviklerverktøy\n4. **Erstatt den gamle YAML-en** i dashbordet ditt med den nye versjonen\n\n⚠ Det er ikke nok å bare aktivere sensoren - du må regenerere og erstatte YAML-koden!"
},
"missing_cards": {
"title": "Tibber Prices: ApexCharts YAML kan ikke brukes",
"message": "Du har nettopp generert en ApexCharts-kort-konfigurasjon via Utviklerverktøy, men den genererte YAML-en **vil ikke fungere** fordi nødvendige tilpassede kort mangler.\n\n**Manglende kort:**\n{cards}\n\n**For å bruke den genererte YAML-en:**\n1. Klikk på lenkene ovenfor for å installere de manglende kortene fra HACS\n2. Start Home Assistant på nytt (noen ganger nødvendig)\n3. **Generer YAML-en på nytt** via Utviklerverktøy\n4. Legg til YAML-en i dashbordet ditt\n\n⚠ Den nåværende YAML-koden vil ikke fungere før alle kort er installert!"
}
}
},
"sensor": {
"current_interval_price": {
@@ -9,7 +22,7 @@
"long_description": "Viser nåværende pris per kWh fra ditt Tibber-abonnement",
"usage_tips": "Bruk dette til å spore priser eller lage automatiseringer som kjører når strøm er billig"
},
"current_interval_price_base": {
"description": "Nåværende elektrisitetspris i hovedvaluta (EUR/kWh, NOK/kWh, osv.) for Energi-dashboard",
"long_description": "Viser nåværende pris per kWh i hovedvalutaenheter (f.eks. EUR/kWh i stedet for ct/kWh, NOK/kWh i stedet for øre/kWh). Denne sensoren er spesielt designet for bruk med Home Assistants Energi-dashboard, som krever priser i standard valutaenheter.",
"usage_tips": "Bruk denne sensoren når du konfigurerer Energi-dashboardet under Innstillinger → Dashbord → Energi. Velg denne sensoren som 'Entitet med nåværende pris' for automatisk å beregne energikostnadene. Energi-dashboardet multipliserer energiforbruket ditt (kWh) med denne prisen for å vise totale kostnader."
@@ -45,9 +58,9 @@
"usage_tips": "Bruk dette til å unngå å kjøre apparater i toppristider"
},
"average_price_today": {
"description": "Typisk elektrisitetspris i dag per kWh (konfigurerbart visningsformat)",
"long_description": "Viser prisen per kWh for gjeldende dag fra ditt Tibber-abonnement. **Som standard viser statusen medianen** (motstandsdyktig mot ekstreme prisspiss, viser typisk prisnivå). Du kan endre dette i integrasjonsinnstillingene for å vise det aritmetiske gjennomsnittet i stedet. Den alternative verdien er tilgjengelig som attributt.",
"usage_tips": "Bruk dette som baseline for å sammenligne nåværende priser. For beregninger bruk: {{ state_attr('sensor.average_price_today', 'price_mean') }}"
},
"lowest_price_tomorrow": {
"description": "Den laveste elektrisitetsprisen i morgen per kWh",
@@ -60,9 +73,9 @@
"usage_tips": "Bruk dette til å unngå å kjøre apparater i morgendagens toppristider. Nyttig for å planlegge rundt dyre perioder."
},
"average_price_tomorrow": {
"description": "Typisk elektrisitetspris i morgen per kWh (konfigurerbart visningsformat)",
"long_description": "Viser prisen per kWh for morgendagen fra ditt Tibber-abonnement. **Som standard viser statusen medianen** (motstandsdyktig mot ekstreme prisspiss). Du kan endre dette i integrasjonsinnstillingene for å vise det aritmetiske gjennomsnittet i stedet. Den alternative verdien er tilgjengelig som attributt. Denne sensoren blir utilgjengelig inntil morgendagens data er publisert av Tibber (vanligvis rundt 13:00-14:00 CET).",
"usage_tips": "Bruk dette som baseline for å sammenligne morgendagens priser og planlegge forbruk. Sammenlign med dagens median for å se om morgendagen vil være mer eller mindre dyr totalt sett."
},
"yesterday_price_level": {
"description": "Aggregert prisnivå for i går",
@@ -95,14 +108,14 @@
"usage_tips": "Bruk dette for å planlegge morgendagens energiforbruk basert på dine personlige pristerskelverdier. Sammenlign med i dag for å bestemme om du skal flytte forbruk til i morgen eller bruke energi i dag."
},
"trailing_price_average": {
"description": "Typisk elektrisitetspris for de siste 24 timene per kWh (konfigurerbart visningsformat)",
"long_description": "Viser prisen per kWh beregnet fra de siste 24 timene. **Som standard viser statusen medianen** (motstandsdyktig mot ekstreme prisspiss, viser typisk prisnivå). Du kan endre dette i integrasjonsinnstillingene for å vise det aritmetiske gjennomsnittet i stedet. Den alternative verdien er tilgjengelig som attributt. Oppdateres hvert 15. minutt.",
"usage_tips": "Bruk statusverdien for å se det typiske nåværende prisnivået. For kostnadsberegninger bruk: {{ state_attr('sensor.trailing_price_average', 'price_mean') }}"
},
"leading_price_average": {
"description": "Typisk elektrisitetspris for de neste 24 timene per kWh (konfigurerbart visningsformat)",
"long_description": "Viser prisen per kWh beregnet fra de neste 24 timene. **Som standard viser statusen medianen** (motstandsdyktig mot ekstreme prisspiss, viser forventet prisnivå). Du kan endre dette i integrasjonsinnstillingene for å vise det aritmetiske gjennomsnittet i stedet. Den alternative verdien er tilgjengelig som attributt.",
"usage_tips": "Bruk statusverdien for å se det typiske kommende prisnivået. For kostnadsberegninger bruk: {{ state_attr('sensor.leading_price_average', 'price_mean') }}"
},
"trailing_price_min": {
"description": "Den minste elektrisitetsprisen for de siste 24 timene per kWh",
@@ -279,24 +292,24 @@
"long_description": "Viser tidsstempelet for siste tilgjengelige prisdataintervall fra ditt Tibber-abonnement"
},
"today_volatility": {
"description": "Hvor mye strømprisene endrer seg i dag",
"long_description": "Viser om dagens priser er stabile eller har store svingninger. Lav volatilitet betyr ganske jevne priser; timing betyr lite. Høy volatilitet betyr tydelige prisforskjeller gjennom dagen, en god sjanse til å flytte forbruk til billigere perioder. `price_coefficient_variation_%` viser prosentverdien, `price_spread` viser det absolutte prisspennet.",
"usage_tips": "Bruk dette for å avgjøre om optimalisering er verdt innsatsen. Ved lav volatilitet kan du kjøre enheter når som helst. Ved høy volatilitet sparer du merkbart ved å følge Best Price-perioder."
},
"tomorrow_volatility": {
"description": "Hvor mye strømprisene vil endre seg i morgen",
"long_description": "Viser om prisene i morgen blir stabile eller får store svingninger. Tilgjengelig når morgendagens data er publisert (vanligvis 13:00-14:00 CET). Lav volatilitet betyr jevne priser; timing er ikke kritisk. Høy volatilitet betyr tydelige prisforskjeller gjennom dagen, en god mulighet til å planlegge energikrevende oppgaver. `price_coefficient_variation_%` viser prosentverdien, `price_spread` viser det absolutte prisspennet.",
"usage_tips": "Bruk dette til å planlegge morgendagens forbruk. Høy volatilitet? Planlegg fleksible laster i Best Price-perioder. Lav volatilitet? Kjør enheter når det passer deg."
},
"next_24h_volatility": {
"description": "Hvor mye prisene endrer seg de neste 24 timene",
"long_description": "Viser prisvolatilitet for et rullerende 24-timers vindu fra nå (oppdateres hvert 15. minutt). Lav volatilitet betyr jevne priser. Høy volatilitet betyr merkbare prissvingninger og mulighet for optimalisering. I motsetning til i dag/i morgen-sensorer krysser denne daggrenser og gir en kontinuerlig fremoverskuende vurdering. `price_coefficient_variation_%` viser prosentverdien, `price_spread` viser det absolutte prisspennet.",
"usage_tips": "Best for beslutninger i sanntid. Bruk når du planlegger batterilading eller andre fleksible laster som kan gå over midnatt. Gir et konsistent 24t-bilde uavhengig av kalenderdag."
},
"today_tomorrow_volatility": {
"description": "Kombinert prisvolatilitet for i dag og i morgen",
"long_description": "Viser samlet volatilitet når i dag og i morgen sees sammen (når morgendata er tilgjengelig). Viser om det finnes klare prisforskjeller over dagsgrensen. Faller tilbake til kun i dag hvis morgendata mangler. Nyttig for flerdagers optimalisering. `price_coefficient_variation_%` viser prosentverdien, `price_spread` viser det absolutte prisspennet.",
"usage_tips": "Bruk for oppgaver som går over flere dager. Sjekk om prisforskjellene er store nok til å planlegge etter. De enkelte dagssensorene viser bidrag per dag om du trenger mer detalj."
},
"data_lifecycle_status": {
"description": "Gjeldende tilstand for prisdatalivssyklus og hurtigbufring",
@@ -304,39 +317,49 @@
"usage_tips": "Bruk denne diagnosesensoren for å forstå dataferskhet og API-anropsmønstre. Sjekk 'cache_age'-attributtet for å se hvor gamle de nåværende dataene er. Overvåk 'next_api_poll' for å vite når neste oppdatering er planlagt. Bruk 'data_completeness' for å se om data for i går/i dag/i morgen er tilgjengelig. 'api_calls_today'-telleren hjelper med å spore API-bruk. Perfekt for feilsøking eller forståelse av integrasjonens oppførsel."
},
"best_price_end_time": {
"description": "Total lengde på nåværende eller neste billigperiode (state i timer, attributt i minutter)",
"long_description": "Viser hvor lenge billigperioden varer. State bruker timer (desimal) for lesbar UI; attributtet `period_duration_minutes` beholder avrundede minutter for automasjoner. Aktiv → varighet for gjeldende periode, ellers neste.",
"usage_tips": "UI kan vise 1,5 t mens `period_duration_minutes` = 90 for automasjoner."
},
"best_price_period_duration": {
"description": "Lengde på gjeldende/neste billigperiode",
"long_description": "Total varighet av gjeldende eller neste billigperiode. State vises i timer (f.eks. 1,5 t) for enkel lesing i UI, mens attributtet `period_duration_minutes` gir samme verdi i minutter (f.eks. 90) for automasjoner. Denne verdien representerer den **fulle planlagte varigheten** av perioden og er konstant gjennom hele perioden, selv om gjenværende tid (remaining_minutes) reduseres.",
"usage_tips": "Kombiner med remaining_minutes for å beregne når langvarige enheter skal stoppes: Perioden startet for `period_duration_minutes - remaining_minutes` minutter siden. Dette attributtet støtter energioptimeringsstrategier ved å hjelpe til med å planlegge høyforbruksaktiviteter innenfor billige perioder."
}, },
"best_price_remaining_minutes": {
"description": "Gjenværende tid i gjeldende billigperiode",
"long_description": "Viser hvor mye tid som gjenstår i gjeldende billigperiode. State vises i timer (f.eks. 0,75 t) for enkel lesing i dashboards, mens attributtet `remaining_minutes` gir samme tid i minutter (f.eks. 45) for automasjonsbetingelser. **Nedtellingstimer**: Denne verdien reduseres hvert minutt under en aktiv periode. Returnerer 0 når ingen billigperiode er aktiv. Oppdateres hvert minutt.",
"usage_tips": "For automasjoner: Bruk attributtet `remaining_minutes` som 'Hvis remaining_minutes > 60, start oppvaskmaskinen nå (nok tid til å fullføre)' eller 'Hvis remaining_minutes < 15, fullfør gjeldende syklus snart'. UI viser brukervennlige timer (f.eks. 1,25 t). Verdi 0 indikerer ingen aktiv billigperiode."
},
"best_price_progress": {
"description": "Fremdrift gjennom gjeldende billigperiode (0% når inaktiv)",
"long_description": "Viser fremdrift gjennom gjeldende billigperiode som 0-100%. Returnerer 0% når ingen periode er aktiv. Oppdateres hvert minutt. 0% betyr perioden nettopp startet, 100% betyr den slutter snart.",
"usage_tips": "Flott for visuelle fremgangsindikatorer. Bruk i automatiseringer: 'Hvis progress > 0 OG progress > 75, send varsel om at billigperioden snart slutter'. Verdi 0 indikerer ingen aktiv periode."
},
"best_price_next_start_time": {
"description": "Total lengde på nåværende eller neste dyr-periode (state i timer, attributt i minutter)",
"long_description": "Viser hvor lenge den dyre perioden varer. State bruker timer (desimal) for UI; attributtet `period_duration_minutes` beholder avrundede minutter for automasjoner. Aktiv → varighet for gjeldende periode, ellers neste.",
"usage_tips": "UI kan vise 0,75 t mens `period_duration_minutes` = 45 for automasjoner."
},
"best_price_next_in_minutes": {
"description": "Tid til neste billigperiode",
"long_description": "Viser hvor lenge til neste billigperiode. State vises i timer (f.eks. 2,25 t) for dashboards, mens attributtet `next_in_minutes` gir minutter (f.eks. 135) for automasjonsbetingelser. Under en aktiv periode viser dette tiden til perioden ETTER den gjeldende. Returnerer 0 under korte overgangsmomenter. Oppdateres hvert minutt.",
"usage_tips": "For automasjoner: Bruk attributtet `next_in_minutes` som 'Hvis next_in_minutes > 0 OG next_in_minutes < 15, vent før start av oppvaskmaskin'. Verdi > 0 indikerer alltid at en fremtidig periode er planlagt."
}, },
"peak_price_end_time": { "peak_price_end_time": {
"description": "Når gjeldende eller neste dyrperiode slutter", "description": "Tid til neste dyr-periode (state i timer, attributt i minutter)",
"long_description": "Viser sluttidspunktet for gjeldende dyrperiode når aktiv, eller slutten av neste periode når ingen periode er aktiv. Viser alltid en nyttig tidsreferanse for planlegging. Returnerer 'Ukjent' bare når ingen perioder er konfigurert.", "long_description": "Viser hvor lenge til neste dyre periode starter. State bruker timer (desimal); attributtet `next_in_minutes` beholder avrundede minutter for automasjoner. Under aktiv periode viser dette tiden til perioden etter den nåværende. 0 i korte overgangsøyeblikk. Oppdateres hvert minutt.",
"usage_tips": "Bruk dette til å vise 'Dyrperiode slutter om 1 time' (når aktiv) eller 'Neste dyrperiode slutter kl 18:00' (når inaktiv). Kombiner med automatiseringer for å gjenoppta drift etter topp." "usage_tips": "Bruk `next_in_minutes` i automasjoner (f.eks. < 10) mens state er lett å lese i timer."
},
"peak_price_period_duration": {
"description": "Lengde på gjeldende/neste dyr periode",
"long_description": "Total varighet av gjeldende eller neste dyre periode. State vises i timer (f.eks. 1,5 t) for enkel lesing i UI, mens attributtet `period_duration_minutes` gir samme verdi i minutter (f.eks. 90) for automasjoner. Denne verdien representerer den **fulle planlagte varigheten** av perioden og er konstant gjennom hele perioden, selv om gjenværende tid (remaining_minutes) reduseres.",
"usage_tips": "Kombiner med remaining_minutes for å beregne når langvarige enheter skal stoppes: Perioden startet for `period_duration_minutes - remaining_minutes` minutter siden. Dette attributtet støtter energisparingsstrategier ved å hjelpe til med å planlegge høyforbruksaktiviteter utenfor dyre perioder."
}, },
"peak_price_remaining_minutes": { "peak_price_remaining_minutes": {
"description": "Gjenværende minutter i gjeldende dyrperiode (0 når inaktiv)", "description": "Gjenværende tid i gjeldende dyre periode",
"long_description": "Viser hvor mange minutter som er igjen i gjeldende dyrperiode. Returnerer 0 når ingen periode er aktiv. Oppdateres hvert minutt. Sjekk binary_sensor.peak_price_period for å se om en periode er aktiv.", "long_description": "Viser hvor mye tid som gjenstår i gjeldende dyre periode. State vises i timer (f.eks. 0,75 t) for enkel lesing i dashboards, mens attributtet `remaining_minutes` gir samme tid i minutter (f.eks. 45) for automasjonsbetingelser. **Nedtellingstimer**: Denne verdien reduseres hvert minutt under en aktiv periode. Returnerer 0 når ingen dyr periode er aktiv. Oppdateres hvert minutt.",
"usage_tips": "Bruk i automatiseringer: 'Hvis remaining_minutes > 60, avbryt utsatt ladeøkt'. Verdi 0 gjør det enkelt å skille mellom aktive (verdi > 0) og inaktive (verdi = 0) perioder." "usage_tips": "For automasjoner: Bruk attributtet `remaining_minutes` som 'Hvis remaining_minutes > 60, avbryt utsatt ladeøkt' eller 'Hvis remaining_minutes < 15, fortsett normal drift snart'. UI viser brukervennlige timer (f.eks. 1,0 t). Verdi 0 indikerer ingen aktiv dyr periode."
}, },
"peak_price_progress": { "peak_price_progress": {
"description": "Fremdrift gjennom gjeldende dyrperiode (0% når inaktiv)", "description": "Fremdrift gjennom gjeldende dyrperiode (0% når inaktiv)",
@@ -349,19 +372,9 @@
 "usage_tips": "Alltid nyttig for planlegging: 'Neste dyrperiode starter om 2 timer'. Automatisering: 'Når neste starttid er om 30 minutter, reduser varmetemperatur forebyggende'."
 },
 "peak_price_next_in_minutes": {
-"description": "Minutter til neste dyrperiode starter (0 ved overgang)",
-"long_description": "Viser minutter til neste dyrperiode starter. Under en aktiv periode viser dette tiden til perioden ETTER den gjeldende. Returnerer 0 under korte overgangsmomenter. Oppdateres hvert minutt.",
-"usage_tips": "Forebyggende automatisering: 'Hvis next_in_minutes > 0 OG next_in_minutes < 10, fullfør gjeldende ladesyklus nå før prisene øker'."
-},
-"best_price_period_duration": {
-"description": "Total varighet av gjeldende eller neste billigperiode i minutter",
-"long_description": "Viser den totale varigheten av billigperioden i minutter. Under en aktiv periode viser dette hele varigheten av gjeldende periode. Når ingen periode er aktiv, viser dette varigheten av neste kommende periode. Eksempel: '90 minutter' for en 1,5-timers periode.",
-"usage_tips": "Kombiner med remaining_minutes for å planlegge oppgaver: 'Hvis duration = 120 OG remaining_minutes > 90, start vaskemaskin (nok tid til å fullføre)'. Nyttig for å forstå om perioder er lange nok for strømkrevende oppgaver."
-},
-"peak_price_period_duration": {
-"description": "Total varighet av gjeldende eller neste dyrperiode i minutter",
-"long_description": "Viser den totale varigheten av dyrperioden i minutter. Under en aktiv periode viser dette hele varigheten av gjeldende periode. Når ingen periode er aktiv, viser dette varigheten av neste kommende periode. Eksempel: '60 minutter' for en 1-times periode.",
-"usage_tips": "Bruk til å planlegge energibesparelsestiltak: 'Hvis duration > 120, reduser varmetemperatur mer aggressivt (lang dyr periode)'. Hjelper med å vurdere hvor mye energiforbruk må reduseres."
+"description": "Tid til neste dyre periode",
+"long_description": "Viser hvor lenge til neste dyre periode starter. State vises i timer (f.eks. 0,5 t) for dashboards, mens attributtet `next_in_minutes` gir minutter (f.eks. 30) for automasjonsbetingelser. Under en aktiv periode viser dette tiden til perioden ETTER den gjeldende. Returnerer 0 under korte overgangsmomenter. Oppdateres hvert minutt.",
+"usage_tips": "For automasjoner: Bruk attributtet `next_in_minutes` som 'Hvis next_in_minutes > 0 OG next_in_minutes < 10, fullfør gjeldende ladesyklus nå før prisene øker'. Verdi > 0 indikerer alltid at en fremtidig dyr periode er planlagt."
 },
 "home_type": {
 "description": "Type bolig (leilighet, hus osv.)",
@@ -437,6 +450,11 @@
 "description": "Dataeksport for dashboardintegrasjoner",
 "long_description": "Denne sensoren kaller get_chartdata-tjenesten med din konfigurerte YAML-konfigurasjon og eksponerer resultatet som entitetsattributter. Status viser 'ready' når data er tilgjengelig, 'error' ved feil, eller 'pending' før første kall. Perfekt for dashboardintegrasjoner som ApexCharts som trenger å lese prisdata fra entitetsattributter.",
 "usage_tips": "Konfigurer YAML-parametrene i integrasjonsinnstillingene for å matche get_chartdata-tjenestekallet ditt. Sensoren vil automatisk oppdatere når prisdata oppdateres (typisk etter midnatt og når morgendagens data ankommer). Få tilgang til tjenesteresponsdataene direkte fra entitetens attributter - strukturen matcher nøyaktig det get_chartdata returnerer."
+},
+"chart_metadata": {
+"description": "Lettvekts metadata for diagramkonfigurasjon",
+"long_description": "Gir essensielle diagramkonfigurasjonsverdier som sensorattributter. Nyttig for ethvert diagramkort som trenger Y-aksegrenser. Sensoren kaller get_chartdata med kun-metadata-modus (ingen databehandling) og trekker ut: yaxis_min, yaxis_max (foreslått Y-akseområde for optimal skalering). Status reflekterer tjenestekallresultatet: 'ready' ved suksess, 'error' ved feil, 'pending' under initialisering.",
+"usage_tips": "Konfigurer via configuration.yaml under tibber_prices.chart_metadata_config (valgfritt: day, subunit_currency, resolution). Sensoren oppdateres automatisk når prisdata endres. Få tilgang til metadata fra attributter: yaxis_min, yaxis_max. Bruk med config-template-card eller ethvert verktøy som leser entitetsattributter - perfekt for dynamisk diagramkonfigurasjon uten manuelle beregninger."
 }
 },
 "binary_sensor": {
@@ -469,11 +487,80 @@
 "description": "Om sanntidsforbruksovervåking er aktiv",
 "long_description": "Indikerer om sanntidsovervåking av strømforbruk er aktivert og aktiv for ditt Tibber-hjem. Dette krever kompatibel målehardware (f.eks. Tibber Pulse) og et aktivt abonnement.",
 "usage_tips": "Bruk dette for å bekrefte at sanntidsforbruksdata er tilgjengelig. Aktiver varsler hvis dette endres til 'av' uventet, noe som indikerer potensielle maskinvare- eller tilkoblingsproblemer."
+}
+},
+"number": {
+"best_price_flex_override": {
+"description": "Maksimal prosent over daglig minimumspris som intervaller kan ha og fortsatt kvalifisere som 'beste pris'. Anbefalt: 15-20 med lemping aktivert (standard), eller 25-35 uten lemping. Maksimum: 50 (tak for pålitelig periodedeteksjon).",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Fleksibilitet'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Aktiver denne entiteten for å dynamisk justere beste pris-deteksjon via automatiseringer, f.eks. høyere fleksibilitet for kritiske laster eller strengere krav for fleksible apparater."
 },
-"chart_data_export": {
-"description": "Dataeksport for dashboardintegrasjoner",
-"long_description": "Denne binærsensoren kaller get_chartdata-tjenesten for å eksportere prisdata i formater som er kompatible med ApexCharts og andre dashboardverktøy. Dataeksporten inkluderer historiske og fremtidsrettede prisdata strukturert for visualisering.",
-"usage_tips": "Konfigurer YAML-parametrene i integrasjonsalternativene. Bruk denne sensoren til å trigge dataeksporthendelser for dashboards. Når den slås på, eksporteres data til en fil eller tjeneste som er konfigurert for integrering med ApexCharts eller tilsvarende visualiseringsverktøy."
+"best_price_min_distance_override": {
+"description": "Minimum prosentavstand under daglig gjennomsnitt. Intervaller må være så langt under gjennomsnittet for å kvalifisere som 'beste pris'. Hjelper med å skille ekte lavprisperioder fra gjennomsnittspriser.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimumsavstand'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Øk verdien for strengere beste pris-kriterier. Reduser hvis for få perioder blir oppdaget."
+},
+"best_price_min_period_length_override": {
+"description": "Minimum periodelengde i 15-minutters intervaller. Perioder kortere enn dette blir ikke rapportert. Eksempel: 2 = minimum 30 minutter.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimum periodelengde'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Juster til typisk apparatkjøretid: 2 (30 min) for hurtigprogrammer, 4-8 (1-2 timer) for normale sykluser, 8+ for lange ECO-programmer."
+},
+"best_price_min_periods_override": {
+"description": "Minimum antall beste pris-perioder å finne daglig. Når lemping er aktivert, vil systemet automatisk justere kriterier for å oppnå dette antallet.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimum perioder'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Sett dette til antall tidskritiske oppgaver du har daglig. Eksempel: 2 for to vaskemaskinkjøringer."
+},
+"best_price_relaxation_attempts_override": {
+"description": "Antall forsøk på å gradvis lempe kriteriene for å oppnå minimum periodeantall. Hvert forsøk øker fleksibiliteten med 3 prosent. Ved 0 brukes kun basiskriterier.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Lemping forsøk'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Høyere verdier gjør periodedeteksjon mer adaptiv for dager med stabile priser. Sett til 0 for å tvinge strenge kriterier uten lemping."
+},
+"best_price_gap_count_override": {
+"description": "Maksimalt antall dyrere intervaller som kan tillates mellom billige intervaller mens de fortsatt regnes som en sammenhengende periode. Ved 0 må billige intervaller være påfølgende.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Gaptoleranse'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Øk dette for apparater med variabel last (f.eks. varmepumper) som kan tåle korte dyrere intervaller. Sett til 0 for kontinuerlige billige perioder."
+},
+"peak_price_flex_override": {
+"description": "Maksimal prosent under daglig maksimumspris som intervaller kan ha og fortsatt kvalifisere som 'topppris'. Samme anbefalinger som for beste pris-fleksibilitet.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Fleksibilitet'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Bruk dette for å justere topppris-terskelen ved kjøretid for automatiseringer som unngår forbruk under dyre timer."
+},
+"peak_price_min_distance_override": {
+"description": "Minimum prosentavstand over daglig gjennomsnitt. Intervaller må være så langt over gjennomsnittet for å kvalifisere som 'topppris'.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimumsavstand'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Øk verdien for kun å fange ekstreme pristopper. Reduser for å inkludere flere høypristider."
+},
+"peak_price_min_period_length_override": {
+"description": "Minimum periodelengde i 15-minutters intervaller for topppriser. Kortere pristopper rapporteres ikke som perioder.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimum periodelengde'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Kortere verdier fanger korte pristopper. Lengre verdier fokuserer på vedvarende høyprisperioder."
+},
+"peak_price_min_periods_override": {
+"description": "Minimum antall topppris-perioder å finne daglig.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Minimum perioder'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Sett dette basert på hvor mange høyprisperioder du vil fange per dag for automatiseringer."
+},
+"peak_price_relaxation_attempts_override": {
+"description": "Antall forsøk på å lempe kriteriene for å oppnå minimum antall topppris-perioder.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Lemping forsøk'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Øk dette hvis ingen perioder blir funnet på dager med stabile priser. Sett til 0 for å tvinge strenge kriterier."
+},
+"peak_price_gap_count_override": {
+"description": "Maksimalt antall billigere intervaller som kan tillates mellom dyre intervaller mens de fortsatt regnes som en topppris-periode.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Gaptoleranse'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Høyere verdier fanger lengre høyprisperioder selv med korte prisdykk. Sett til 0 for strengt sammenhengende topppriser."
+}
+},
+"switch": {
+"best_price_enable_relaxation_override": {
+"description": "Når aktivert, lempes kriteriene automatisk for å oppnå minimum periodeantall. Når deaktivert, rapporteres kun perioder som oppfyller strenge kriterier (muligens null perioder på dager med stabile priser).",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Oppnå minimumsantall'-innstillingen fra alternativer-dialogen for beste pris-periodeberegninger.",
+"usage_tips": "Aktiver dette for garanterte daglige automatiseringsmuligheter. Deaktiver hvis du kun vil ha virkelig billige perioder, selv om det betyr ingen perioder på noen dager."
+},
+"peak_price_enable_relaxation_override": {
+"description": "Når aktivert, lempes kriteriene automatisk for å oppnå minimum periodeantall. Når deaktivert, rapporteres kun ekte pristopper.",
+"long_description": "Når denne entiteten er aktivert, overstyrer verdien 'Oppnå minimumsantall'-innstillingen fra alternativer-dialogen for topppris-periodeberegninger.",
+"usage_tips": "Aktiver dette for konsistente topppris-varsler. Deaktiver for kun å fange ekstreme pristopper."
 }
 },
 "home_types": {
@@ -482,5 +569,15 @@
 "HOUSE": "Hus",
 "COTTAGE": "Hytte"
 },
+"time_units": {
+"day": "{count} dag",
+"days": "{count} dager",
+"hour": "{count} time",
+"hours": "{count} timer",
+"minute": "{count} minutt",
+"minutes": "{count} minutter",
+"ago": "{parts} siden",
+"now": "nå"
+},
 "attribution": "Data levert av Tibber"
 }

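The recurring pattern in the Norwegian strings above — sensor state in decimal hours for dashboards, with a `period_duration_minutes` / `remaining_minutes` / `next_in_minutes` attribute in whole minutes for automation conditions — can be sketched roughly as follows. This is an illustrative sketch only; the helper name and rounding are assumptions, not the integration's actual code:

```python
# Illustrative sketch of the "state in hours, attribute in minutes" split
# described by the translation strings; names are hypothetical.
def split_duration(minutes: int) -> tuple[float, int]:
    """Return (state_hours, attribute_minutes) for a period length.

    State: decimal hours for easy reading in the UI (45 min -> 0.75 h).
    Attribute: whole minutes for automation conditions (e.g. < 15).
    """
    return round(minutes / 60, 2), int(minutes)

state, attr = split_duration(45)
print(state, attr)  # 0.75 45
```

An automation would then compare the minutes attribute (`next_in_minutes < 15`) while a dashboard card shows the friendlier hours value.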

@@ -1,27 +1,40 @@
 {
 "apexcharts": {
-"title_rating_level": "Prijsfasen dagelijkse voortgang",
-"title_level": "Prijsniveau"
+"title_rating_level": "Prijsfasen dagverloop",
+"title_level": "Prijsniveau",
+"hourly_suffix": "(Ø per uur)",
+"best_price_period_name": "Beste prijsperiode",
+"peak_price_period_name": "Piekprijsperiode",
+"notification": {
+"metadata_sensor_unavailable": {
+"title": "Tibber Prices: ApexCharts YAML gegenereerd met beperkte functionaliteit",
+"message": "Je hebt zojuist een ApexCharts-kaartconfiguratie gegenereerd via Ontwikkelaarstools. De grafiek-metadata-sensor is momenteel uitgeschakeld, dus de gegenereerde YAML toont alleen **basisfunctionaliteit** (auto-schaal as, vaste verloop op 50%).\n\n**Voor volledige functionaliteit** (geoptimaliseerde schaling, dynamische verloopkleuren):\n1. [Open Tibber Prices-integratie](https://my.home-assistant.io/redirect/integration/?domain=tibber_prices)\n2. Schakel de 'Chart Metadata'-sensor in\n3. **Genereer de YAML opnieuw** via Ontwikkelaarstools\n4. **Vervang de oude YAML** in je dashboard door de nieuwe versie\n\n⚠ Alleen de sensor inschakelen is niet genoeg - je moet de YAML opnieuw genereren en vervangen!"
+},
+"missing_cards": {
+"title": "Tibber Prices: ApexCharts YAML kan niet worden gebruikt",
+"message": "Je hebt zojuist een ApexCharts-kaartconfiguratie gegenereerd via Ontwikkelaarstools, maar de gegenereerde YAML **zal niet werken** omdat vereiste aangepaste kaarten ontbreken.\n\n**Ontbrekende kaarten:**\n{cards}\n\n**Om de gegenereerde YAML te gebruiken:**\n1. Klik op de bovenstaande links om de ontbrekende kaarten te installeren vanuit HACS\n2. Herstart Home Assistant (soms nodig)\n3. **Genereer de YAML opnieuw** via Ontwikkelaarstools\n4. Voeg de YAML toe aan je dashboard\n\n⚠ De huidige YAML-code werkt niet totdat alle kaarten zijn geïnstalleerd!"
+}
+}
 },
 "sensor": {
 "current_interval_price": {
 "description": "De huidige elektriciteitsprijs per kWh",
-"long_description": "Toont de huidige prijs per kWh van uw Tibber-abonnement",
+"long_description": "Toont de huidige prijs per kWh van je Tibber-abonnement",
 "usage_tips": "Gebruik dit om prijzen bij te houden of om automatiseringen te maken die worden uitgevoerd wanneer elektriciteit goedkoop is"
 },
-"current_interval_price_major": {
+"current_interval_price_base": {
 "description": "Huidige elektriciteitsprijs in hoofdvaluta (EUR/kWh, NOK/kWh, enz.) voor Energie-dashboard",
 "long_description": "Toont de huidige prijs per kWh in hoofdvaluta-eenheden (bijv. EUR/kWh in plaats van ct/kWh, NOK/kWh in plaats van øre/kWh). Deze sensor is speciaal ontworpen voor gebruik met het Energie-dashboard van Home Assistant, dat prijzen in standaard valuta-eenheden vereist.",
 "usage_tips": "Gebruik deze sensor bij het configureren van het Energie-dashboard onder Instellingen → Dashboards → Energie. Selecteer deze sensor als 'Entiteit met huidige prijs' om automatisch je energiekosten te berekenen. Het Energie-dashboard vermenigvuldigt je energieverbruik (kWh) met deze prijs om totale kosten weer te geven."
 },
 "next_interval_price": {
 "description": "De volgende interval elektriciteitsprijs per kWh",
-"long_description": "Toont de prijs voor het volgende 15-minuten interval van uw Tibber-abonnement",
-"usage_tips": "Gebruik dit om u voor te bereiden op aanstaande prijswijzigingen of om apparaten te plannen om tijdens goedkopere intervallen te draaien"
+"long_description": "Toont de prijs voor het volgende 15-minuten interval van je Tibber-abonnement",
+"usage_tips": "Gebruik dit om je voor te bereiden op aanstaande prijswijzigingen of om apparaten te plannen om tijdens goedkopere intervallen te draaien"
 },
 "previous_interval_price": {
 "description": "De vorige interval elektriciteitsprijs per kWh",
-"long_description": "Toont de prijs voor het vorige 15-minuten interval van uw Tibber-abonnement",
+"long_description": "Toont de prijs voor het vorige 15-minuten interval van je Tibber-abonnement",
 "usage_tips": "Gebruik dit om eerdere prijswijzigingen te bekijken of prijsgeschiedenis bij te houden"
 },
 "current_hour_average_price": {
@@ -36,33 +49,33 @@
 },
 "lowest_price_today": {
 "description": "De laagste elektriciteitsprijs voor vandaag per kWh",
-"long_description": "Toont de laagste prijs per kWh voor de huidige dag van uw Tibber-abonnement",
+"long_description": "Toont de laagste prijs per kWh voor de huidige dag van je Tibber-abonnement",
 "usage_tips": "Gebruik dit om huidige prijzen te vergelijken met de goedkoopste tijd van de dag"
 },
 "highest_price_today": {
 "description": "De hoogste elektriciteitsprijs voor vandaag per kWh",
-"long_description": "Toont de hoogste prijs per kWh voor de huidige dag van uw Tibber-abonnement",
-"usage_tips": "Gebruik dit om te voorkomen dat u apparaten draait tijdens piekprijstijden"
+"long_description": "Toont de hoogste prijs per kWh voor de huidige dag van je Tibber-abonnement",
+"usage_tips": "Gebruik dit om te voorkomen dat je apparaten draait tijdens piekprijstijden"
 },
 "average_price_today": {
-"description": "De gemiddelde elektriciteitsprijs voor vandaag per kWh",
-"long_description": "Toont de gemiddelde prijs per kWh voor de huidige dag van uw Tibber-abonnement",
-"usage_tips": "Gebruik dit als basislijn voor het vergelijken van huidige prijzen"
+"description": "Typische elektriciteitsprijs voor vandaag per kWh (configureerbare weergave)",
+"long_description": "Toont de prijs per kWh voor de huidige dag van je Tibber-abonnement. **Standaard toont de status de mediaan** (resistent tegen extreme prijspieken, toont typisch prijsniveau). Je kunt dit wijzigen in de integratie-instellingen om het rekenkundig gemiddelde te tonen. De alternatieve waarde is beschikbaar als attribuut.",
+"usage_tips": "Gebruik dit als basislijn voor het vergelijken van huidige prijzen. Voor berekeningen gebruik: {{ state_attr('sensor.average_price_today', 'price_mean') }}"
 },
 "lowest_price_tomorrow": {
 "description": "De laagste elektriciteitsprijs voor morgen per kWh",
-"long_description": "Toont de laagste prijs per kWh voor morgen van uw Tibber-abonnement. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
+"long_description": "Toont de laagste prijs per kWh voor morgen van je Tibber-abonnement. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
 "usage_tips": "Gebruik dit om energie-intensieve activiteiten te plannen voor de goedkoopste tijd van morgen. Perfect voor vooraf plannen van verwarming, EV-laden of apparaten."
 },
 "highest_price_tomorrow": {
 "description": "De hoogste elektriciteitsprijs voor morgen per kWh",
-"long_description": "Toont de hoogste prijs per kWh voor morgen van uw Tibber-abonnement. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
-"usage_tips": "Gebruik dit om te voorkomen dat u apparaten draait tijdens de piekprijstijden van morgen. Handig voor het plannen rond dure perioden."
+"long_description": "Toont de hoogste prijs per kWh voor morgen van je Tibber-abonnement. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
+"usage_tips": "Gebruik dit om te voorkomen dat je apparaten draait tijdens de piekprijstijden van morgen. Handig voor het plannen rond dure perioden."
 },
 "average_price_tomorrow": {
-"description": "De gemiddelde elektriciteitsprijs voor morgen per kWh",
-"long_description": "Toont de gemiddelde prijs per kWh voor morgen van uw Tibber-abonnement. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
-"usage_tips": "Gebruik dit als basislijn voor het vergelijken van prijzen van morgen en het plannen van verbruik. Vergelijk met het gemiddelde van vandaag om te zien of morgen over het algemeen duurder of goedkoper wordt."
+"description": "Typische elektriciteitsprijs voor morgen per kWh (configureerbare weergave)",
+"long_description": "Toont de prijs per kWh voor morgen van je Tibber-abonnement. **Standaard toont de status de mediaan** (resistent tegen extreme prijspieken). Je kunt dit wijzigen in de integratie-instellingen om het rekenkundig gemiddelde te tonen. De alternatieve waarde is beschikbaar als attribuut. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
+"usage_tips": "Gebruik dit als basislijn voor het vergelijken van prijzen van morgen en het plannen van verbruik. Vergelijk met de mediaan van vandaag om te zien of morgen over het algemeen duurder of goedkoper wordt."
 },
 "yesterday_price_level": {
 "description": "Geaggregeerd prijsniveau voor gisteren",
@@ -81,48 +94,48 @@
 },
 "yesterday_price_rating": {
 "description": "Geaggregeerde prijsbeoordeling voor gisteren",
-"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van gisteren, gebaseerd op uw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen.",
-"usage_tips": "Gebruik dit om de prijssituatie van gisteren te begrijpen ten opzichte van uw persoonlijke drempelwaarden. Vergelijk met vandaag voor trendanalyse."
+"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van gisteren, gebaseerd op jouw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen.",
+"usage_tips": "Gebruik dit om de prijssituatie van gisteren te begrijpen ten opzichte van jouw persoonlijke drempelwaarden. Vergelijk met vandaag voor trendanalyse."
 },
 "today_price_rating": {
 "description": "Geaggregeerde prijsbeoordeling voor vandaag",
-"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van vandaag, gebaseerd op uw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen.",
-"usage_tips": "Gebruik dit om snel de prijssituatie van vandaag te beoordelen ten opzichte van uw persoonlijke drempelwaarden. Helpt bij het nemen van verbruiksbeslissingen voor de huidige dag."
+"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van vandaag, gebaseerd op jouw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen.",
+"usage_tips": "Gebruik dit om snel de prijssituatie van vandaag te beoordelen ten opzichte van jouw persoonlijke drempelwaarden. Helpt bij het nemen van verbruiksbeslissingen voor de huidige dag."
 },
 "tomorrow_price_rating": {
 "description": "Geaggregeerde prijsbeoordeling voor morgen",
-"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van morgen, gebaseerd op uw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
-"usage_tips": "Gebruik dit om het energieverbruik van morgen te plannen op basis van uw persoonlijke prijsdrempelwaarden. Vergelijk met vandaag om te beslissen of u verbruik naar morgen moet verschuiven of vandaag energie moet gebruiken."
+"long_description": "Toont de geaggregeerde prijsbeoordeling (laag/normaal/hoog) voor alle intervallen van morgen, gebaseerd op jouw geconfigureerde drempelwaarden. Gebruikt dezelfde logica als de uursensoren om de totale beoordeling voor de hele dag te bepalen. Deze sensor wordt niet beschikbaar totdat de gegevens van morgen door Tibber worden gepubliceerd (meestal rond 13:00-14:00 CET).",
+"usage_tips": "Gebruik dit om het energieverbruik van morgen te plannen op basis van jouw persoonlijke prijsdrempelwaarden. Vergelijk met vandaag om te beslissen of je verbruik naar morgen moet verschuiven of vandaag energie moet gebruiken."
 },
 "trailing_price_average": {
-"description": "De gemiddelde elektriciteitsprijs voor de afgelopen 24 uur per kWh",
-"long_description": "Toont de gemiddelde prijs per kWh berekend uit de afgelopen 24 uur (voortschrijdend gemiddelde) van uw Tibber-abonnement. Dit biedt een voortschrijdend gemiddelde dat elke 15 minuten wordt bijgewerkt op basis van historische gegevens.",
-"usage_tips": "Gebruik dit om huidige prijzen te vergelijken met recente trends. Een huidige prijs die aanzienlijk boven dit gemiddelde ligt, kan aangeven dat het een goed moment is om het verbruik te verminderen."
+"description": "Typische elektriciteitsprijs voor de afgelopen 24 uur per kWh (configureerbare weergave)",
+"long_description": "Toont de prijs per kWh berekend uit de afgelopen 24 uur. **Standaard toont de status de mediaan** (resistent tegen extreme prijspieken, toont typisch prijsniveau). Je kunt dit wijzigen in de integratie-instellingen om het rekenkundig gemiddelde te tonen. De alternatieve waarde is beschikbaar als attribuut. Wordt elke 15 minuten bijgewerkt.",
+"usage_tips": "Gebruik de statuswaarde om het typische huidige prijsniveau te zien. Voor kostenberekeningen gebruik: {{ state_attr('sensor.trailing_price_average', 'price_mean') }}"
 },
 "leading_price_average": {
-"description": "De gemiddelde elektriciteitsprijs voor de komende 24 uur per kWh",
+"description": "Typische elektriciteitsprijs voor de komende 24 uur per kWh (configureerbare weergave)",
"long_description": "Toont de gemiddelde prijs per kWh berekend uit de komende 24 uur (vooruitlopend gemiddelde) van uw Tibber-abonnement. Dit biedt een vooruitkijkend gemiddelde op basis van beschikbare prognosegegevens.", "long_description": "Toont de prijs per kWh berekend uit de komende 24 uur. **Standaard toont de status de mediaan** (resistent tegen extreme prijspieken, toont verwacht prijsniveau). Je kunt dit wijzigen in de integratie-instellingen om het rekenkundig gemiddelde te tonen. De alternatieve waarde is beschikbaar als attribuut.",
"usage_tips": "Gebruik dit om energieverbruik te plannen. Als de huidige prijs onder het vooruitlopende gemiddelde ligt, kan het een goed moment zijn om energie-intensieve apparaten te laten draaien." "usage_tips": "Gebruik de statuswaarde om het typische toekomstige prijsniveau te zien. Voor kostenberekeningen gebruik: {{ state_attr('sensor.leading_price_average', 'price_mean') }}"
}, },
"trailing_price_min": { "trailing_price_min": {
"description": "De minimale elektriciteitsprijs voor de afgelopen 24 uur per kWh", "description": "De minimale elektriciteitsprijs voor de afgelopen 24 uur per kWh",
"long_description": "Toont de minimumprijs per kWh van de afgelopen 24 uur (voortschrijdend minimum) van uw Tibber-abonnement. Dit geeft de laagste prijs die in de afgelopen 24 uur is gezien.", "long_description": "Toont de minimumprijs per kWh van de afgelopen 24 uur (voortschrijdend minimum) van je Tibber-abonnement. Dit geeft de laagste prijs die in de afgelopen 24 uur is gezien.",
"usage_tips": "Gebruik dit om de beste prijsmogelijkheid te zien die u in de afgelopen 24 uur had en vergelijk deze met huidige prijzen." "usage_tips": "Gebruik dit om de beste prijsmogelijkheid te zien die je in de afgelopen 24 uur had en vergelijk deze met huidige prijzen."
}, },
"trailing_price_max": { "trailing_price_max": {
"description": "De maximale elektriciteitsprijs voor de afgelopen 24 uur per kWh", "description": "De maximale elektriciteitsprijs voor de afgelopen 24 uur per kWh",
"long_description": "Toont de maximumprijs per kWh van de afgelopen 24 uur (voortschrijdend maximum) van uw Tibber-abonnement. Dit geeft de hoogste prijs die in de afgelopen 24 uur is gezien.", "long_description": "Toont de maximumprijs per kWh van de afgelopen 24 uur (voortschrijdend maximum) van je Tibber-abonnement. Dit geeft de hoogste prijs die in de afgelopen 24 uur is gezien.",
"usage_tips": "Gebruik dit om de piekprijs in de afgelopen 24 uur te zien en prijsvolatiliteit te beoordelen." "usage_tips": "Gebruik dit om de piekprijs in de afgelopen 24 uur te zien en prijsvolatiliteit te beoordelen."
}, },
"leading_price_min": { "leading_price_min": {
"description": "De minimale elektriciteitsprijs voor de komende 24 uur per kWh", "description": "De minimale elektriciteitsprijs voor de komende 24 uur per kWh",
"long_description": "Toont de minimumprijs per kWh van de komende 24 uur (vooruitlopend minimum) van uw Tibber-abonnement. Dit geeft de laagste prijs die wordt verwacht in de komende 24 uur op basis van prognosegegevens.", "long_description": "Toont de minimumprijs per kWh van de komende 24 uur (vooruitlopend minimum) van je Tibber-abonnement. Dit geeft de laagste prijs die wordt verwacht in de komende 24 uur op basis van prognosegegevens.",
"usage_tips": "Gebruik dit om de beste prijsmogelijkheid te identificeren die eraan komt en plan energie-intensieve taken dienovereenkomstig." "usage_tips": "Gebruik dit om de beste prijsmogelijkheid te identificeren die eraan komt en plan energie-intensieve taken dienovereenkomstig."
}, },
"leading_price_max": { "leading_price_max": {
"description": "De maximale elektriciteitsprijs voor de komende 24 uur per kWh", "description": "De maximale elektriciteitsprijs voor de komende 24 uur per kWh",
"long_description": "Toont de maximumprijs per kWh van de komende 24 uur (vooruitlopend maximum) van uw Tibber-abonnement. Dit geeft de hoogste prijs die wordt verwacht in de komende 24 uur op basis van prognosegegevens.", "long_description": "Toont de maximumprijs per kWh van de komende 24 uur (vooruitlopend maximum) van je Tibber-abonnement. Dit geeft de hoogste prijs die wordt verwacht in de komende 24 uur op basis van prognosegegevens.",
"usage_tips": "Gebruik dit om te voorkomen dat u apparaten draait tijdens aanstaande piekprijsperioden." "usage_tips": "Gebruik dit om te voorkomen dat je apparaten draait tijdens aanstaande piekprijsperioden."
}, },
"current_interval_price_level": { "current_interval_price_level": {
"description": "De huidige prijsniveauclassificatie", "description": "De huidige prijsniveauclassificatie",
@@ -142,7 +155,7 @@
"current_hour_price_level": { "current_hour_price_level": {
"description": "Geaggregeerd prijsniveau voor huidig voortschrijdend uur (5 intervallen)", "description": "Geaggregeerd prijsniveau voor huidig voortschrijdend uur (5 intervallen)",
"long_description": "Toont het mediane prijsniveau over 5 intervallen (2 ervoor, huidig, 2 erna) dat ongeveer 75 minuten beslaat. Biedt een stabielere prijsniveauindicator die kortetermijnschommelingen afvlakt.", "long_description": "Toont het mediane prijsniveau over 5 intervallen (2 ervoor, huidig, 2 erna) dat ongeveer 75 minuten beslaat. Biedt een stabielere prijsniveauindicator die kortetermijnschommelingen afvlakt.",
"usage_tips": "Gebruik voor planningsbeslissingen op middellange termijn waarbij u niet wilt reageren op korte prijspieken of -dalingen." "usage_tips": "Gebruik voor planningsbeslissingen op middellange termijn waarbij je niet wilt reageren op korte prijspieken of -dalingen."
}, },
"next_hour_price_level": { "next_hour_price_level": {
"description": "Geaggregeerd prijsniveau voor volgend voortschrijdend uur (5 intervallen vooruit)", "description": "Geaggregeerd prijsniveau voor volgend voortschrijdend uur (5 intervallen vooruit)",
@@ -172,22 +185,22 @@
"next_hour_price_rating": { "next_hour_price_rating": {
"description": "Geaggregeerde prijsbeoordeling voor volgend voortschrijdend uur (5 intervallen vooruit)", "description": "Geaggregeerde prijsbeoordeling voor volgend voortschrijdend uur (5 intervallen vooruit)",
"long_description": "Toont de gemiddelde beoordeling voor 5 intervallen gecentreerd één uur vooruit. Helpt te begrijpen of het volgende uur over het algemeen boven of onder gemiddelde prijzen zal liggen.", "long_description": "Toont de gemiddelde beoordeling voor 5 intervallen gecentreerd één uur vooruit. Helpt te begrijpen of het volgende uur over het algemeen boven of onder gemiddelde prijzen zal liggen.",
"usage_tips": "Gebruik om te beslissen of u een uur moet wachten voordat u activiteiten met hoog verbruik start." "usage_tips": "Gebruik om te beslissen of je een uur moet wachten voordat je activiteiten met hoog verbruik start."
}, },
"next_avg_1h": { "next_avg_1h": {
"description": "Gemiddelde prijs voor het volgende 1 uur (alleen vooruit vanaf volgend interval)", "description": "Gemiddelde prijs voor het volgende 1 uur (alleen vooruit vanaf volgend interval)",
"long_description": "Vooruitkijkend gemiddelde: Toont gemiddelde van volgende 4 intervallen (1 uur) vanaf het VOLGENDE 15-minuten interval (niet inclusief huidig). Verschilt van current_hour_average_price die vorige intervallen omvat. Gebruik voor absolute prijsdrempelplanning.", "long_description": "Vooruitkijkend gemiddelde: Toont gemiddelde van volgende 4 intervallen (1 uur) vanaf het VOLGENDE 15-minuten interval (niet inclusief huidig). Verschilt van current_hour_average_price die vorige intervallen omvat. Gebruik voor absolute prijsdrempelplanning.",
"usage_tips": "Absolute prijsdrempel: Start apparaten alleen wanneer het gemiddelde onder uw maximaal acceptabele prijs blijft (bijv. onder 0,25 EUR/kWh). Combineer met trendsensor voor optimale timing. Let op: Dit is GEEN vervanging voor uurprijzen - gebruik current_hour_average_price daarvoor." "usage_tips": "Absolute prijsdrempel: Start apparaten alleen wanneer het gemiddelde onder je maximaal acceptabele prijs blijft (bijv. onder 0,25 EUR/kWh). Combineer met trendsensor voor optimale timing. Let op: Dit is GEEN vervanging voor uurprijzen - gebruik current_hour_average_price daarvoor."
}, },
"next_avg_2h": { "next_avg_2h": {
"description": "Gemiddelde prijs voor de volgende 2 uur", "description": "Gemiddelde prijs voor de volgende 2 uur",
"long_description": "Toont de gemiddelde prijs voor de volgende 8 intervallen (2 uur) vanaf het volgende 15-minuten interval.", "long_description": "Toont de gemiddelde prijs voor de volgende 8 intervallen (2 uur) vanaf het volgende 15-minuten interval.",
"usage_tips": "Absolute prijsdrempel: Stel een maximaal acceptabele gemiddelde prijs in voor standaard apparaten zoals wasmachines. Zorgt ervoor dat u nooit meer betaalt dan uw limiet." "usage_tips": "Absolute prijsdrempel: Stel een maximaal acceptabele gemiddelde prijs in voor standaard apparaten zoals wasmachines. Zorgt ervoor dat je nooit meer betaalt dan je limiet."
}, },
"next_avg_3h": { "next_avg_3h": {
"description": "Gemiddelde prijs voor de volgende 3 uur", "description": "Gemiddelde prijs voor de volgende 3 uur",
"long_description": "Toont de gemiddelde prijs voor de volgende 12 intervallen (3 uur) vanaf het volgende 15-minuten interval.", "long_description": "Toont de gemiddelde prijs voor de volgende 12 intervallen (3 uur) vanaf het volgende 15-minuten interval.",
"usage_tips": "Absolute prijsdrempel: Voor EU Eco-programma's (vaatwassers, 3-4u looptijd). Start alleen wanneer 3u gemiddelde onder uw prijslimiet is. Gebruik met trendsensor om beste moment binnen acceptabel prijsbereik te vinden." "usage_tips": "Absolute prijsdrempel: Voor EU Eco-programma's (vaatwassers, 3-4u looptijd). Start alleen wanneer 3u gemiddelde onder je prijslimiet is. Gebruik met trendsensor om beste moment binnen acceptabel prijsbereik te vinden."
}, },
"next_avg_4h": { "next_avg_4h": {
"description": "Gemiddelde prijs voor de volgende 4 uur", "description": "Gemiddelde prijs voor de volgende 4 uur",
@@ -202,32 +215,32 @@
"next_avg_6h": { "next_avg_6h": {
"description": "Gemiddelde prijs voor de volgende 6 uur", "description": "Gemiddelde prijs voor de volgende 6 uur",
"long_description": "Toont de gemiddelde prijs voor de volgende 24 intervallen (6 uur) vanaf het volgende 15-minuten interval.", "long_description": "Toont de gemiddelde prijs voor de volgende 24 intervallen (6 uur) vanaf het volgende 15-minuten interval.",
"usage_tips": "Absolute prijsdrempel: Avondplanning met prijslimieten. Plan taken alleen als 6u gemiddelde onder uw maximaal acceptabele kosten blijft." "usage_tips": "Absolute prijsdrempel: Avondplanning met prijslimieten. Plan taken alleen als 6u gemiddelde onder je maximaal acceptabele kosten blijft."
}, },
"next_avg_8h": { "next_avg_8h": {
"description": "Gemiddelde prijs voor de volgende 8 uur", "description": "Gemiddelde prijs voor de volgende 8 uur",
"long_description": "Toont de gemiddelde prijs voor de volgende 32 intervallen (8 uur) vanaf het volgende 15-minuten interval.", "long_description": "Toont de gemiddelde prijs voor de volgende 32 intervallen (8 uur) vanaf het volgende 15-minuten interval.",
"usage_tips": "Absolute prijsdrempel: Nachtelijke bedieningsbeslissingen. Stel harde prijslimieten in voor nachtelijke belastingen (batterij opladen, thermische opslag). Overschrijd nooit uw budget." "usage_tips": "Absolute prijsdrempel: Nachtelijke bedieningsbeslissingen. Stel harde prijslimieten in voor nachtelijke belastingen (batterij opladen, thermische opslag). Overschrijd nooit je budget."
}, },
"next_avg_12h": { "next_avg_12h": {
"description": "Gemiddelde prijs voor de volgende 12 uur", "description": "Gemiddelde prijs voor de volgende 12 uur",
"long_description": "Toont de gemiddelde prijs voor de volgende 48 intervallen (12 uur) vanaf het volgende 15-minuten interval.", "long_description": "Toont de gemiddelde prijs voor de volgende 48 intervallen (12 uur) vanaf het volgende 15-minuten interval.",
"usage_tips": "Absolute prijsdrempel: Strategische beslissingen met prijslimieten. Ga alleen door als 12u gemiddelde onder uw maximaal acceptabele prijs is. Goed voor uitgestelde grote belastingen." "usage_tips": "Absolute prijsdrempel: Strategische beslissingen met prijslimieten. Ga alleen door als 12u gemiddelde onder je maximaal acceptabele prijs is. Goed voor uitgestelde grote belastingen."
}, },
"price_trend_1h": { "price_trend_1h": {
"description": "Prijstrend voor het volgende uur", "description": "Prijstrend voor het volgende uur",
"long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgend 1 uur (4 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.", "long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgend 1 uur (4 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.",
"usage_tips": "Relatieve optimalisatie: 'dalend' = wacht, prijzen dalen. 'stijgend' = handel nu of u betaalt meer. 'stabiel' = prijs maakt nu niet veel uit. Werkt onafhankelijk van absoluut prijsniveau." "usage_tips": "Relatieve optimalisatie: 'dalend' = wacht, prijzen dalen. 'stijgend' = handel nu of je betaalt meer. 'stabiel' = prijs maakt nu niet veel uit. Werkt onafhankelijk van absoluut prijsniveau."
}, },
"price_trend_2h": { "price_trend_2h": {
"description": "Prijstrend voor de volgende 2 uur", "description": "Prijstrend voor de volgende 2 uur",
"long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 2 uur (8 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.", "long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 2 uur (8 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.",
"usage_tips": "Relatieve optimalisatie: Ideaal voor apparaten. 'dalend' betekent betere prijzen komen over 2u - stel uit indien mogelijk. Vindt beste timing binnen uw beschikbare venster, ongeacht seizoen." "usage_tips": "Relatieve optimalisatie: Ideaal voor apparaten. 'dalend' betekent betere prijzen komen over 2u - stel uit indien mogelijk. Vindt beste timing binnen je beschikbare venster, ongeacht seizoen."
}, },
"price_trend_3h": { "price_trend_3h": {
"description": "Prijstrend voor de volgende 3 uur", "description": "Prijstrend voor de volgende 3 uur",
"long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 3 uur (12 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.", "long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 3 uur (12 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.",
"usage_tips": "Relatieve optimalisatie: Voor Eco-programma's. 'dalend' betekent prijzen dalen >5% - het wachten waard. Werkt in elk seizoen. Combineer met avg-sensor voor prijslimiet: alleen wanneer avg < uw limiet EN trend niet 'dalend'." "usage_tips": "Relatieve optimalisatie: Voor Eco-programma's. 'dalend' betekent prijzen dalen >5% - het wachten waard. Werkt in elk seizoen. Combineer met avg-sensor voor prijslimiet: alleen wanneer avg < je limiet EN trend niet 'dalend'."
}, },
"price_trend_4h": { "price_trend_4h": {
"description": "Prijstrend voor de volgende 4 uur", "description": "Prijstrend voor de volgende 4 uur",
@@ -237,12 +250,12 @@
"price_trend_5h": { "price_trend_5h": {
"description": "Prijstrend voor de volgende 5 uur", "description": "Prijstrend voor de volgende 5 uur",
"long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 5 uur (20 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.", "long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 5 uur (20 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.",
"usage_tips": "Relatieve optimalisatie: Uitgebreide operaties. Past zich aan de markt aan - vindt beste relatieve timing in elke prijsomgeving. 'stabiel/stijgend' = goed moment om te starten binnen uw planningsvenster." "usage_tips": "Relatieve optimalisatie: Uitgebreide operaties. Past zich aan de markt aan - vindt beste relatieve timing in elke prijsomgeving. 'stabiel/stijgend' = goed moment om te starten binnen je planningsvenster."
}, },
"price_trend_6h": { "price_trend_6h": {
"description": "Prijstrend voor de volgende 6 uur", "description": "Prijstrend voor de volgende 6 uur",
"long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 6 uur (24 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.", "long_description": "Vergelijkt huidige intervalprijs met gemiddelde van volgende 6 uur (24 intervallen). Stijgend als toekomst >5% hoger is, dalend als >5% lager, anders stabiel.",
"usage_tips": "Relatieve optimalisatie: Avandbeslissingen. 'dalend' = prijzen verbeteren aanzienlijk als u wacht. Geen vaste drempels nodig - past automatisch aan winter/zomer prijsniveaus." "usage_tips": "Relatieve optimalisatie: Avandbeslissingen. 'dalend' = prijzen verbeteren aanzienlijk als je wacht. Geen vaste drempels nodig - past automatisch aan winter/zomer prijsniveaus."
}, },
"price_trend_8h": { "price_trend_8h": {
"description": "Prijstrend voor de volgende 8 uur", "description": "Prijstrend voor de volgende 8 uur",
@@ -276,27 +289,27 @@
}, },
"data_timestamp": { "data_timestamp": {
"description": "Tijdstempel van het laatst beschikbare prijsgegevensinterval", "description": "Tijdstempel van het laatst beschikbare prijsgegevensinterval",
"long_description": "Toont het tijdstempel van het laatst beschikbare prijsgegevensinterval van uw Tibber-abonnement" "long_description": "Toont het tijdstempel van het laatst beschikbare prijsgegevensinterval van je Tibber-abonnement"
}, },
"today_volatility": { "today_volatility": {
"description": "Prijsvolatiliteitsclassificatie voor vandaag", "description": "Hoeveel de stroomprijzen vandaag schommelen",
"long_description": "Toont hoeveel elektriciteitsprijzen variëren gedurende vandaag op basis van de spreiding (verschil tussen hoogste en laagste prijs). Classificatie: LOW = spreiding < 5ct, MODERATE = 5-15ct, HIGH = 15-30ct, VERY HIGH = >30ct.", "long_description": "Geeft aan of de prijzen vandaag stabiel blijven of grote schommelingen hebben. Lage volatiliteit betekent vrij constante prijzen timing maakt weinig uit. Hoge volatiliteit betekent duidelijke prijsverschillen gedurende de dag goede kans om verbruik naar goedkopere periodes te verschuiven. `price_coefficient_variation_%` toont het percentage, `price_spread` de absolute prijsspanne.",
"usage_tips": "Gebruik dit om te bepalen of prijsgebaseerde optimalisatie de moeite waard is. Bijvoorbeeld, met een balkonbatterij met 15% efficiëntieverlies is optimalisatie alleen zinvol wanneer volatiliteit ten minste MODERATE is. Maak automatiseringen die volatiliteit controleren voordat u laad-/ontlaadcycli plant." "usage_tips": "Gebruik dit om te beslissen of optimaliseren de moeite waard is. Bij lage volatiliteit kun je apparaten op elk moment laten draaien. Bij hoge volatiliteit bespaar je merkbaar door Best Price-periodes te volgen."
}, },
"tomorrow_volatility": { "tomorrow_volatility": {
"description": "Prijsvolatiliteitsclassificatie voor morgen", "description": "Hoeveel de stroomprijzen morgen zullen schommelen",
"long_description": "Toont hoeveel elektriciteitsprijzen zullen variëren gedurende morgen op basis van de spreiding (verschil tussen hoogste en laagste prijs). Wordt onbeschikbaar totdat de gegevens van morgen zijn gepubliceerd (meestal 13:00-14:00 CET).", "long_description": "Geeft aan of de prijzen morgen stabiel blijven of grote schommelingen hebben. Beschikbaar zodra de gegevens voor morgen zijn gepubliceerd (meestal 13:0014:00 CET). Lage volatiliteit betekent vrij constante prijzen timing is niet kritisch. Hoge volatiliteit betekent duidelijke prijsverschillen gedurende de dag goede kans om energie-intensieve taken te plannen. `price_coefficient_variation_%` toont het percentage, `price_spread` de absolute prijsspanne.",
"usage_tips": "Gebruik dit voor vooruitplanning van het energieverbruik van morgen. Als morgen HIGH of VERY HIGH volatiliteit heeft, is het de moeite waard om de timing van energieverbruik te optimaliseren. Bij LOW kunt u apparaten op elk moment gebruiken zonder significante kostenverschillen." "usage_tips": "Gebruik dit om het verbruik van morgen te plannen. Hoge volatiliteit? Plan flexibele lasten in Best Price-periodes. Lage volatiliteit? Laat apparaten draaien wanneer het jou uitkomt."
}, },
"next_24h_volatility": { "next_24h_volatility": {
"description": "Prijsvolatiliteitsclassificatie voor de rollende volgende 24 uur", "description": "Hoeveel de prijzen de komende 24 uur zullen schommelen",
"long_description": "Toont hoeveel elektriciteitsprijzen variëren in de volgende 24 uur vanaf nu (rollend venster). Dit overschrijdt daggrenzen en wordt elke 15 minuten bijgewerkt, wat een vooruitkijkende volatiliteitsbeoordeling biedt onafhankelijk van kalenderdagen.", "long_description": "Geeft de prijsvolatiliteit aan voor een rollend 24-uursvenster vanaf nu (wordt elke 15 minuten bijgewerkt). Lage volatiliteit betekent vrij constante prijzen. Hoge volatiliteit betekent merkbare prijsschommelingen en dus optimalisatiemogelijkheden. In tegenstelling tot vandaag/morgen-sensoren overschrijdt deze daggrenzen en geeft een doorlopende vooruitblik. `price_coefficient_variation_%` toont het percentage, `price_spread` de absolute prijsspanne.",
"usage_tips": "Beste sensor voor realtime optimalisatiebeslissingen. In tegenstelling tot vandaag/morgen-sensoren die om middernacht wisselen, biedt deze een continue 24-uurs volatiliteitsbeoordeling. Gebruik voor batterijlaadstrategieën die over daggrenzen heen gaan." "usage_tips": "Het beste voor beslissingen in real-time. Gebruik bij het plannen van batterijladen of andere flexibele lasten die over middernacht kunnen lopen. Biedt een consistent 24-uurs beeld, los van de kalenderdag."
}, },
"today_tomorrow_volatility": { "today_tomorrow_volatility": {
"description": "Gecombineerde prijsvolatiliteitsclassificatie voor vandaag en morgen", "description": "Gecombineerde prijsvolatiliteit voor vandaag en morgen",
"long_description": "Toont volatiliteit over zowel vandaag als morgen gecombineerd (wanneer de gegevens van morgen beschikbaar zijn). Biedt een uitgebreid overzicht van prijsvariatie over maximaal 48 uur. Valt terug op alleen vandaag wanneer de gegevens van morgen nog niet beschikbaar zijn.", "long_description": "Geeft de totale volatiliteit weer wanneer vandaag en morgen samen worden bekeken (zodra morgengegevens beschikbaar zijn). Toont of er duidelijke prijsverschillen over de daggrens heen zijn. Valt terug naar alleen vandaag als morgengegevens ontbreken. Handig voor meerdaagse optimalisatie. `price_coefficient_variation_%` toont het percentage, `price_spread` de absolute prijsspanne.",
"usage_tips": "Gebruik dit voor meerdaagse planning en om te begrijpen of prijskansen bestaan over de daggrenzen heen. De attributen 'today_volatility' en 'tomorrow_volatility' tonen individuele dagbijdragen. Handig voor het plannen van laadsessies die middernacht kunnen overschrijden." "usage_tips": "Gebruik voor taken die meerdere dagen beslaan. Kijk of de prijsverschillen groot genoeg zijn om plannen op te baseren. De afzonderlijke dag-sensoren tonen per-dag bijdragen als je meer detail wilt."
}, },
"data_lifecycle_status": { "data_lifecycle_status": {
"description": "Huidige status van prijsgegevenslevenscyclus en caching", "description": "Huidige status van prijsgegevenslevenscyclus en caching",
@@ -304,39 +317,49 @@
"usage_tips": "Gebruik deze diagnostische sensor om gegevensfrisheid en API-aanroeppatronen te begrijpen. Controleer het 'cache_age'-attribuut om te zien hoe oud de huidige gegevens zijn. Monitor 'next_api_poll' om te weten wanneer de volgende update is gepland. Gebruik 'data_completeness' om te zien of gisteren/vandaag/morgen gegevens beschikbaar zijn. De 'api_calls_today'-teller helpt API-gebruik bij te houden. Perfect voor probleemoplossing of begrip van integratiegedrag." "usage_tips": "Gebruik deze diagnostische sensor om gegevensfrisheid en API-aanroeppatronen te begrijpen. Controleer het 'cache_age'-attribuut om te zien hoe oud de huidige gegevens zijn. Monitor 'next_api_poll' om te weten wanneer de volgende update is gepland. Gebruik 'data_completeness' om te zien of gisteren/vandaag/morgen gegevens beschikbaar zijn. De 'api_calls_today'-teller helpt API-gebruik bij te houden. Perfect voor probleemoplossing of begrip van integratiegedrag."
}, },
"best_price_end_time": { "best_price_end_time": {
"description": "Wanneer de huidige of volgende goedkope periode eindigt", "description": "Totale lengte van huidige of volgende voordelige periode (state in uren, attribuut in minuten)",
"long_description": "Toont het eindtijdstempel van de huidige goedkope periode wanneer actief, of het einde van de volgende periode wanneer geen periode actief is. Toont altijd een nuttige tijdreferentie voor planning. Geeft alleen 'Onbekend' terug wanneer geen periodes zijn geconfigureerd.", "long_description": "Toont hoe lang de voordelige periode duurt. State gebruikt uren (float) voor een leesbare UI; attribuut `period_duration_minutes` behoudt afgeronde minuten voor automatiseringen. Actief → duur van de huidige periode, anders de volgende.",
"usage_tips": "Gebruik dit om een aftelling weer te geven zoals 'Goedkope periode eindigt over 2 uur' (wanneer actief) of 'Volgende goedkope periode eindigt om 14:00' (wanneer inactief). Home Assistant toont automatisch relatieve tijd voor tijdstempelsensoren." "usage_tips": "UI kan 1,5 u tonen terwijl `period_duration_minutes` = 90 voor automatiseringen blijft."
},
"best_price_period_duration": {
"description": "Lengte van huidige/volgende goedkope periode",
"long_description": "Totale duur van huidige of volgende goedkope periode. De state wordt weergegeven in uren (bijv. 1,5 u) voor gemakkelijk aflezen in de UI, terwijl het attribuut `period_duration_minutes` dezelfde waarde in minuten levert (bijv. 90) voor automatiseringen. Deze waarde vertegenwoordigt de **volledige geplande duur** van de periode en is constant gedurende de gehele periode, zelfs als de resterende tijd (remaining_minutes) afneemt.",
"usage_tips": "Combineer met remaining_minutes om te berekenen wanneer langlopende apparaten moeten worden gestopt: Periode is `period_duration_minutes - remaining_minutes` minuten geleden gestart. Dit attribuut ondersteunt energie-optimalisatiestrategieën door te helpen bij het plannen van hoog-verbruiksactiviteiten binnen goedkope periodes."
}, },
"best_price_remaining_minutes": { "best_price_remaining_minutes": {
"description": "Resterende minuten in huidige goedkope periode (0 wanneer inactief)", "description": "Resterende tijd in huidige goedkope periode",
"long_description": "Toont hoeveel minuten er nog over zijn in de huidige goedkope periode. Geeft 0 terug wanneer geen periode actief is. Werkt elke minuut bij. Controleer binary_sensor.best_price_period om te zien of een periode momenteel actief is.", "long_description": "Toont hoeveel tijd er nog overblijft in de huidige goedkope periode. De state wordt weergegeven in uren (bijv. 0,75 u) voor gemakkelijk aflezen in dashboards, terwijl het attribuut `remaining_minutes` dezelfde tijd in minuten levert (bijv. 45) voor automatiseringsvoorwaarden. **Afteltimer**: Deze waarde neemt elke minuut af tijdens een actieve periode. Geeft 0 terug wanneer geen goedkope periode actief is. Werkt elke minuut bij.",
"usage_tips": "Perfect voor automatiseringen: 'Als remaining_minutes > 0 EN remaining_minutes < 30, start wasmachine nu'. De waarde 0 maakt het gemakkelijk om te controleren of een periode actief is (waarde > 0) of niet (waarde = 0)." "usage_tips": "Voor automatiseringen: Gebruik attribuut `remaining_minutes` zoals 'Als remaining_minutes > 60, start vaatwasser nu (genoeg tijd om te voltooien)' of 'Als remaining_minutes < 15, rond huidige cyclus binnenkort af'. UI toont gebruiksvriendelijke uren (bijv. 1,25 u). Waarde 0 geeft aan dat geen goedkope periode actief is."
}, },
"best_price_progress": { "best_price_progress": {
"description": "Voortgang door huidige goedkope periode (0% wanneer inactief)", "description": "Voortgang door huidige goedkope periode (0% wanneer inactief)",
"long_description": "Toont de voortgang door de huidige goedkope periode als 0-100%. Geeft 0% terug wanneer geen periode actief is. Werkt elke minuut bij. 0% betekent periode net gestart, 100% betekent het eindigt bijna.", "long_description": "Toont voortgang door de huidige goedkope periode als 0-100%. Geeft 0% terug wanneer geen periode actief is. Werkt elke minuut bij. 0% betekent periode net gestart, 100% betekent dat deze bijna eindigt.",
"usage_tips": "Geweldig voor visuele voortgangsbalken. Gebruik in automatiseringen: 'Als progress > 0 EN progress > 75, stuur melding dat goedkope periode bijna eindigt'. Waarde 0 geeft aan dat er geen actieve periode is." "usage_tips": "Geweldig voor visuele voortgangsbalken. Gebruik in automatiseringen: 'Als progress > 0 EN progress > 75, stuur melding dat goedkope periode bijna eindigt'. Waarde 0 geeft aan dat geen periode actief is."
}, },
"best_price_next_start_time": { "best_price_next_start_time": {
"description": "Wanneer de volgende goedkope periode begint", "description": "Totale lengte van huidige of volgende dure periode (state in uren, attribuut in minuten)",
"long_description": "Toont wanneer de volgende komende goedkope periode begint. Tijdens een actieve periode toont dit de start van de VOLGENDE periode na de huidige. Geeft alleen 'Onbekend' terug wanneer geen toekomstige periodes zijn geconfigureerd.", "long_description": "Toont hoe lang de dure periode duurt. State gebruikt uren (float) voor de UI; attribuut `period_duration_minutes` behoudt afgeronde minuten voor automatiseringen. Actief → duur van de huidige periode, anders de volgende.",
"usage_tips": "Altijd nuttig voor vooruitplanning: 'Volgende goedkope periode begint over 3 uur' (of je nu in een periode zit of niet). Combineer met automatiseringen: 'Wanneer volgende starttijd over 10 minuten is, stuur melding om wasmachine voor te bereiden'." "usage_tips": "UI kan 0,75 u tonen terwijl `period_duration_minutes` = 45 voor automatiseringen blijft."
}, },
"best_price_next_in_minutes": { "best_price_next_in_minutes": {
"description": "Minuten tot volgende goedkope periode begint (0 bij overgang)", "description": "Resterende tijd in huidige dure periode (state in uren, attribuut in minuten)",
"long_description": "Toont minuten tot de volgende goedkope periode begint. Tijdens een actieve periode toont dit de tijd tot de periode NA de huidige. Geeft 0 terug tijdens korte overgangsmomenten. Werkt elke minuut bij.", "long_description": "Toont hoeveel tijd er nog over is. State gebruikt uren (float); attribuut `remaining_minutes` behoudt afgeronde minuten voor automatiseringen. Geeft 0 terug wanneer er geen periode actief is. Werkt elke minuut bij.",
"usage_tips": "Perfect voor 'wacht tot goedkope periode' automatiseringen: 'Als next_in_minutes > 0 EN next_in_minutes < 15, wacht voordat vaatwasser wordt gestart'. Waarde > 0 geeft altijd aan dat een toekomstige periode is gepland." "usage_tips": "Gebruik `remaining_minutes` voor drempels (bijv. > 60) terwijl de state in uren goed leesbaar blijft."
}, },
"peak_price_end_time": { "peak_price_end_time": {
"description": "Wanneer de huidige of volgende dure periode eindigt", "description": "Tijd tot volgende dure periode (state in uren, attribuut in minuten)",
"long_description": "Toont het eindtijdstempel van de huidige dure periode wanneer actief, of het einde van de volgende periode wanneer geen periode actief is. Toont altijd een nuttige tijdreferentie voor planning. Geeft alleen 'Onbekend' terug wanneer geen periodes zijn geconfigureerd.", "long_description": "Toont hoe lang het duurt tot de volgende dure periode start. State gebruikt uren (float); attribuut `next_in_minutes` behoudt afgeronde minuten voor automatiseringen. Tijdens een actieve periode is dit de tijd tot de periode na de huidige. 0 tijdens korte overgangen. Werkt elke minuut bij.",
"usage_tips": "Gebruik dit om 'Dure periode eindigt over 1 uur' weer te geven (wanneer actief) of 'Volgende dure periode eindigt om 18:00' (wanneer inactief). Combineer met automatiseringen om activiteiten te hervatten na piek." "usage_tips": "Gebruik `next_in_minutes` in automatiseringen (bijv. < 10) terwijl de state in uren leesbaar blijft."
},
"peak_price_period_duration": {
"description": "Totale duur van huidige of volgende dure periode in minuten",
"long_description": "Toont de totale duur van de dure periode in minuten. Tijdens een actieve periode toont dit de volledige lengte van de huidige periode. Wanneer geen periode actief is, toont dit de duur van de volgende komende periode. Voorbeeld: '60 minuten' voor een 1-uur periode.",
"usage_tips": "Gebruik om energiebesparende maatregelen te plannen: 'Als duration > 120, verlaag verwarmingstemperatuur agressiever (lange dure periode)'. Helpt bij het inschatten hoeveel energieverbruik moet worden verminderd."
}, },
"peak_price_remaining_minutes": { "peak_price_remaining_minutes": {
"description": "Resterende minuten in huidige dure periode (0 wanneer inactief)", "description": "Resterende tijd in huidige dure periode",
"long_description": "Toont hoeveel minuten er nog over zijn in de huidige dure periode. Geeft 0 terug wanneer geen periode actief is. Werkt elke minuut bij. Controleer binary_sensor.peak_price_period om te zien of een periode momenteel actief is.", "long_description": "Toont hoeveel tijd er nog overblijft in de huidige dure periode. De state wordt weergegeven in uren (bijv. 0,75 u) voor gemakkelijk aflezen in dashboards, terwijl het attribuut `remaining_minutes` dezelfde tijd in minuten levert (bijv. 45) voor automatiseringsvoorwaarden. **Afteltimer**: Deze waarde neemt elke minuut af tijdens een actieve periode. Geeft 0 terug wanneer geen dure periode actief is. Werkt elke minuut bij.",
"usage_tips": "Gebruik in automatiseringen: 'Als remaining_minutes > 60, annuleer uitgestelde laadronde'. Waarde 0 maakt het gemakkelijk om onderscheid te maken tussen actieve (waarde > 0) en inactieve (waarde = 0) periodes." "usage_tips": "Voor automatiseringen: Gebruik attribuut `remaining_minutes` zoals 'Als remaining_minutes > 60, annuleer uitgestelde laadronde' of 'Als remaining_minutes < 15, hervat normaal gebruik binnenkort'. UI toont gebruiksvriendelijke uren (bijv. 1,0 u). Waarde 0 geeft aan dat geen dure periode actief is."
}, },
"peak_price_progress": { "peak_price_progress": {
"description": "Voortgang door huidige dure periode (0% wanneer inactief)", "description": "Voortgang door huidige dure periode (0% wanneer inactief)",
@@ -349,19 +372,9 @@
"usage_tips": "Altijd nuttig voor planning: 'Volgende dure periode begint over 2 uur'. Automatisering: 'Wanneer volgende starttijd over 30 minuten is, verlaag verwarmingstemperatuur preventief'." "usage_tips": "Altijd nuttig voor planning: 'Volgende dure periode begint over 2 uur'. Automatisering: 'Wanneer volgende starttijd over 30 minuten is, verlaag verwarmingstemperatuur preventief'."
}, },
"peak_price_next_in_minutes": { "peak_price_next_in_minutes": {
"description": "Minuten tot volgende dure periode begint (0 bij overgang)", "description": "Tijd tot volgende dure periode",
"long_description": "Toont minuten tot de volgende dure periode begint. Tijdens een actieve periode toont dit de tijd tot de periode NA de huidige. Geeft 0 terug tijdens korte overgangsmomenten. Werkt elke minuut bij.", "long_description": "Toont hoe lang het duurt tot de volgende dure periode. De state wordt weergegeven in uren (bijv. 0,5 u) voor dashboards, terwijl het attribuut `next_in_minutes` minuten levert (bijv. 30) voor automatiseringsvoorwaarden. Tijdens een actieve periode toont dit de tijd tot de periode NA de huidige. Geeft 0 terug tijdens korte overgangsmomenten. Werkt elke minuut bij.",
"usage_tips": "Preventieve automatisering: 'Als next_in_minutes > 0 EN next_in_minutes < 10, voltooi huidige laadcyclus nu voordat prijzen stijgen'." "usage_tips": "Voor automatiseringen: Gebruik attribuut `next_in_minutes` zoals 'Als next_in_minutes > 0 EN next_in_minutes < 10, voltooi huidige laadcyclus nu voordat prijzen stijgen'. Waarde > 0 geeft altijd aan dat een toekomstige dure periode is gepland."
},
"best_price_period_duration": {
"description": "Totale duur van huidige of volgende goedkope periode in minuten",
"long_description": "Toont de totale duur van de goedkope periode in minuten. Tijdens een actieve periode toont dit de volledige lengte van de huidige periode. Wanneer geen periode actief is, toont dit de duur van de volgende komende periode. Voorbeeld: '90 minuten' voor een 1,5-uur periode.",
"usage_tips": "Combineer met remaining_minutes voor taakplanning: 'Als duration = 120 EN remaining_minutes > 90, start wasmachine (genoeg tijd om te voltooien)'. Nuttig om te begrijpen of periodes lang genoeg zijn voor energie-intensieve taken."
},
"peak_price_period_duration": {
"description": "Totale duur van huidige of volgende dure periode in minuten",
"long_description": "Toont de totale duur van de dure periode in minuten. Tijdens een actieve periode toont dit de volledige lengte van de huidige periode. Wanneer geen periode actief is, toont dit de duur van de volgende komende periode. Voorbeeld: '60 minuten' voor een 1-uur periode.",
"usage_tips": "Gebruik om energiebesparende maatregelen te plannen: 'Als duration > 120, verlaag verwarmingstemperatuur agressiever (lange dure periode)'. Helpt bij het inschatten hoeveel energieverbruik moet worden verminderd."
}, },
"home_type": { "home_type": {
"description": "Type woning (appartement, huis enz.)", "description": "Type woning (appartement, huis enz.)",
@@ -437,6 +450,11 @@
"description": "Data-export voor dashboard-integraties", "description": "Data-export voor dashboard-integraties",
"long_description": "Deze sensor roept de get_chartdata-service aan met jouw geconfigureerde YAML-configuratie en stelt het resultaat beschikbaar als entiteitsattributen. De status toont 'ready' wanneer data beschikbaar is, 'error' bij fouten, of 'pending' voor de eerste aanroep. Perfect voor dashboard-integraties zoals ApexCharts die prijsgegevens uit entiteitsattributen moeten lezen.", "long_description": "Deze sensor roept de get_chartdata-service aan met jouw geconfigureerde YAML-configuratie en stelt het resultaat beschikbaar als entiteitsattributen. De status toont 'ready' wanneer data beschikbaar is, 'error' bij fouten, of 'pending' voor de eerste aanroep. Perfect voor dashboard-integraties zoals ApexCharts die prijsgegevens uit entiteitsattributen moeten lezen.",
"usage_tips": "Configureer de YAML-parameters in de integratie-opties om overeen te komen met jouw get_chartdata-service-aanroep. De sensor wordt automatisch bijgewerkt wanneer prijsgegevens worden bijgewerkt (typisch na middernacht en wanneer gegevens van morgen binnenkomen). Krijg toegang tot de service-responsgegevens direct vanuit de entiteitsattributen - de structuur komt exact overeen met wat get_chartdata retourneert." "usage_tips": "Configureer de YAML-parameters in de integratie-opties om overeen te komen met jouw get_chartdata-service-aanroep. De sensor wordt automatisch bijgewerkt wanneer prijsgegevens worden bijgewerkt (typisch na middernacht en wanneer gegevens van morgen binnenkomen). Krijg toegang tot de service-responsgegevens direct vanuit de entiteitsattributen - de structuur komt exact overeen met wat get_chartdata retourneert."
},
"chart_metadata": {
"description": "Lichtgewicht metadata voor diagramconfiguratie",
"long_description": "Biedt essentiële diagramconfiguratiewaarden als sensorattributen. Nuttig voor elke grafiekkaart die Y-as-grenzen nodig heeft. De sensor roept get_chartdata aan in alleen-metadata-modus (geen dataverwerking) en extraheert: yaxis_min, yaxis_max (gesuggereerd Y-asbereik voor optimale schaling). De status weerspiegelt het service-aanroepresultaat: 'ready' bij succes, 'error' bij fouten, 'pending' tijdens initialisatie.",
"usage_tips": "Configureer via configuration.yaml onder tibber_prices.chart_metadata_config (optioneel: day, subunit_currency, resolution). De sensor wordt automatisch bijgewerkt bij prijsgegevenswijzigingen. Krijg toegang tot metadata vanuit attributen: yaxis_min, yaxis_max. Gebruik met config-template-card of elk hulpmiddel dat entiteitsattributen leest - perfect voor dynamische diagramconfiguratie zonder handmatige berekeningen."
} }
}, },
"binary_sensor": { "binary_sensor": {
@@ -448,7 +466,7 @@
"peak_price_period": { "peak_price_period": {
"description": "Of het huidige interval tot de duurste van de dag behoort", "description": "Of het huidige interval tot de duurste van de dag behoort",
"long_description": "Wordt geactiveerd wanneer de huidige prijs in de top 20% van de prijzen van vandaag ligt", "long_description": "Wordt geactiveerd wanneer de huidige prijs in de top 20% van de prijzen van vandaag ligt",
"usage_tips": "Gebruik dit om te voorkomen dat u apparaten met hoog verbruik draait tijdens dure intervallen" "usage_tips": "Gebruik dit om te voorkomen dat je apparaten met hoog verbruik draait tijdens dure intervallen"
}, },
"best_price_period": { "best_price_period": {
"description": "Of het huidige interval tot de goedkoopste van de dag behoort", "description": "Of het huidige interval tot de goedkoopste van de dag behoort",
@@ -469,11 +487,80 @@
"description": "Of realtime verbruiksmonitoring actief is", "description": "Of realtime verbruiksmonitoring actief is",
"long_description": "Geeft aan of realtime elektriciteitsverbruikmonitoring is ingeschakeld en actief voor je Tibber-woning. Dit vereist compatibele meethardware (bijv. Tibber Pulse) en een actief abonnement.", "long_description": "Geeft aan of realtime elektriciteitsverbruikmonitoring is ingeschakeld en actief voor je Tibber-woning. Dit vereist compatibele meethardware (bijv. Tibber Pulse) en een actief abonnement.",
"usage_tips": "Gebruik dit om te verifiëren dat realtimeverbruiksgegevens beschikbaar zijn. Schakel meldingen in als dit onverwacht verandert naar 'uit', wat wijst op mogelijke hardware- of verbindingsproblemen." "usage_tips": "Gebruik dit om te verifiëren dat realtimeverbruiksgegevens beschikbaar zijn. Schakel meldingen in als dit onverwacht verandert naar 'uit', wat wijst op mogelijke hardware- of verbindingsproblemen."
}
},
"number": {
"best_price_flex_override": {
"description": "Maximaal percentage boven de dagelijkse minimumprijs dat intervallen kunnen hebben en nog steeds als 'beste prijs' kwalificeren. Aanbevolen: 15-20 met versoepeling ingeschakeld (standaard), of 25-35 zonder versoepeling. Maximum: 50 (harde limiet voor betrouwbare periodedetectie).",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Flexibiliteit'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Schakel deze entiteit in om beste prijs-detectie dynamisch aan te passen via automatiseringen, bijv. hogere flexibiliteit voor kritieke lasten of strengere eisen voor flexibele apparaten."
}, },
"chart_data_export": { "best_price_min_distance_override": {
"description": "Gegevensexport voor dashboardintegraties", "description": "Minimale procentuele afstand onder het daggemiddelde. Intervallen moeten zo ver onder het gemiddelde liggen om als 'beste prijs' te kwalificeren. Helpt echte lage prijsperioden te onderscheiden van gemiddelde prijzen.",
"long_description": "Deze binaire sensor roept de get_chartdata-service aan om gegevens voor dashboard-widgets te exporteren. Ondersteunt ApexCharts en andere dashboardoplossingen die prijsgegevens willen visualiseren.", "long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimale afstand'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Configureer de YAML-parameters in de integratieopties onder 'Geavanceerd'. Deze sensor biedt meestal geen praktische waarde in automatiseringen - hij dient hoofdzakelijk als servicecontainer voor dashboardgebruik. Raadpleeg de documentatie voor specifieke parameterformat." "usage_tips": "Verhoog de waarde voor strengere beste prijs-criteria. Verlaag als te weinig perioden worden gedetecteerd."
},
"best_price_min_period_length_override": {
"description": "Minimale periodelengte in 15-minuten intervallen. Perioden korter dan dit worden niet gerapporteerd. Voorbeeld: 2 = minimaal 30 minuten.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimale periodelengte'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Pas aan op typische apparaatlooptijd: 2 (30 min) voor snelle programma's, 4-8 (1-2 uur) voor normale cycli, 8+ voor lange ECO-programma's."
},
"best_price_min_periods_override": {
"description": "Minimum aantal beste prijs-perioden om dagelijks te vinden. Wanneer versoepeling is ingeschakeld, past het systeem automatisch de criteria aan om dit aantal te bereiken.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimum periodes'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Stel dit in op het aantal tijdkritieke taken dat je dagelijks hebt. Voorbeeld: 2 voor twee wasladingen."
},
"best_price_relaxation_attempts_override": {
"description": "Aantal pogingen om de criteria geleidelijk te versoepelen om het minimum aantal perioden te bereiken. Elke poging verhoogt de flexibiliteit met 3 procent. Bij 0 worden alleen basiscriteria gebruikt.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Versoepeling pogingen'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Hogere waarden maken periodedetectie adaptiever voor dagen met stabiele prijzen. Stel in op 0 om strikte criteria af te dwingen zonder versoepeling."
},
"best_price_gap_count_override": {
"description": "Maximum aantal duurdere intervallen dat mag worden toegestaan tussen goedkope intervallen terwijl ze nog steeds als één aaneengesloten periode tellen. Bij 0 moeten goedkope intervallen opeenvolgend zijn.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Gap tolerantie'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Verhoog dit voor apparaten met variabele belasting (bijv. warmtepompen) die korte duurdere intervallen kunnen tolereren. Stel in op 0 voor continu goedkope perioden."
},
"peak_price_flex_override": {
"description": "Maximaal percentage onder de dagelijkse maximumprijs dat intervallen kunnen hebben en nog steeds als 'piekprijs' kwalificeren. Dezelfde aanbevelingen als voor beste prijs-flexibiliteit.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Flexibiliteit'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Gebruik dit om de piekprijs-drempel tijdens runtime aan te passen voor automatiseringen die verbruik tijdens dure uren vermijden."
},
"peak_price_min_distance_override": {
"description": "Minimale procentuele afstand boven het daggemiddelde. Intervallen moeten zo ver boven het gemiddelde liggen om als 'piekprijs' te kwalificeren.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimale afstand'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Verhoog de waarde om alleen extreme prijspieken te vangen. Verlaag om meer dure tijden mee te nemen."
},
"peak_price_min_period_length_override": {
"description": "Minimale periodelengte in 15-minuten intervallen voor piekprijzen. Kortere prijspieken worden niet als perioden gerapporteerd.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimale periodelengte'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Kortere waarden vangen korte prijspieken. Langere waarden focussen op aanhoudende dure perioden."
},
"peak_price_min_periods_override": {
"description": "Minimum aantal piekprijs-perioden om dagelijks te vinden.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimum periodes'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Stel dit in op basis van hoeveel dure perioden je per dag wilt vangen voor automatiseringen."
},
"peak_price_relaxation_attempts_override": {
"description": "Aantal pogingen om de criteria te versoepelen om het minimum aantal piekprijs-perioden te bereiken.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Versoepeling pogingen'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Verhoog dit als geen perioden worden gevonden op dagen met stabiele prijzen. Stel in op 0 om strikte criteria af te dwingen."
},
"peak_price_gap_count_override": {
"description": "Maximum aantal goedkopere intervallen dat mag worden toegestaan tussen dure intervallen terwijl ze nog steeds als één piekprijs-periode tellen.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Gap tolerantie'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Hogere waarden vangen langere dure perioden zelfs met korte prijsdips. Stel in op 0 voor strikt aaneengesloten piekprijzen."
}
},
"switch": {
"best_price_enable_relaxation_override": {
"description": "Indien ingeschakeld, worden criteria automatisch versoepeld om het minimum aantal perioden te bereiken. Indien uitgeschakeld, worden alleen perioden gerapporteerd die aan strikte criteria voldoen (mogelijk nul perioden op dagen met stabiele prijzen).",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimum aantal bereiken'-instelling uit de opties-dialoog voor beste prijs-periodeberekeningen.",
"usage_tips": "Schakel dit in voor gegarandeerde dagelijkse automatiseringsmogelijkheden. Schakel uit als je alleen echt goedkope perioden wilt, ook als dat betekent dat er op sommige dagen geen perioden zijn."
},
"peak_price_enable_relaxation_override": {
"description": "Indien ingeschakeld, worden criteria automatisch versoepeld om het minimum aantal perioden te bereiken. Indien uitgeschakeld, worden alleen echte prijspieken gerapporteerd.",
"long_description": "Wanneer deze entiteit is ingeschakeld, overschrijft de waarde de 'Minimum aantal bereiken'-instelling uit de opties-dialoog voor piekprijs-periodeberekeningen.",
"usage_tips": "Schakel dit in voor consistente piekprijs-waarschuwingen. Schakel uit om alleen extreme prijspieken te vangen."
} }
}, },
"home_types": { "home_types": {
@@ -482,5 +569,15 @@
"HOUSE": "Huis", "HOUSE": "Huis",
"COTTAGE": "Huisje" "COTTAGE": "Huisje"
}, },
"time_units": {
"day": "{count} dag",
"days": "{count} dagen",
"hour": "{count} uur",
"hours": "{count} uur",
"minute": "{count} minuut",
"minutes": "{count} minuten",
"ago": "{parts} geleden",
"now": "nu"
},
"attribution": "Gegevens geleverd door Tibber" "attribution": "Gegevens geleverd door Tibber"
} }


@@ -1,7 +1,20 @@
{ {
"apexcharts": { "apexcharts": {
"title_rating_level": "Prisfaser daglig framsteg", "title_rating_level": "Prisfaser dagsprogress",
"title_level": "Prisnivå" "title_level": "Prisnivå",
"hourly_suffix": "(Ø per timme)",
"best_price_period_name": "Bästa prisperiod",
"peak_price_period_name": "Topprisperiod",
"notification": {
"metadata_sensor_unavailable": {
"title": "Tibber Prices: ApexCharts YAML genererad med begränsad funktionalitet",
"message": "Du har precis genererat en ApexCharts-kortkonfiguration via Utvecklarverktyg. Diagram-metadata-sensorn är inaktiverad, så den genererade YAML:en visar bara **grundläggande funktionalitet** (auto-skalning, fast gradient vid 50%).\n\n**För full funktionalitet** (optimerad skalning, dynamiska gradientfärger):\n1. [Öppna Tibber Prices-integrationen](https://my.home-assistant.io/redirect/integration/?domain=tibber_prices)\n2. Aktivera 'Chart Metadata'-sensorn\n3. **Generera YAML:en igen** via Utvecklarverktyg\n4. **Ersätt den gamla YAML:en** i din instrumentpanel med den nya versionen\n\n⚠ Det räcker inte att bara aktivera sensorn - du måste regenerera och ersätta YAML-koden!"
},
"missing_cards": {
"title": "Tibber Prices: ApexCharts YAML kan inte användas",
"message": "Du har precis genererat en ApexCharts-kortkonfiguration via Utvecklarverktyg, men den genererade YAML:en **kommer inte att fungera** eftersom nödvändiga anpassade kort saknas.\n\n**Saknade kort:**\n{cards}\n\n**För att använda den genererade YAML:en:**\n1. Klicka på länkarna ovan för att installera de saknade korten från HACS\n2. Starta om Home Assistant (ibland nödvändigt)\n3. **Generera YAML:en igen** via Utvecklarverktyg\n4. Lägg till YAML:en i din instrumentpanel\n\n⚠ Den nuvarande YAML-koden fungerar inte förrän alla kort är installerade!"
}
}
}, },
"sensor": { "sensor": {
"current_interval_price": { "current_interval_price": {
@@ -9,7 +22,7 @@
"long_description": "Visar nuvarande pris per kWh från ditt Tibber-abonnemang", "long_description": "Visar nuvarande pris per kWh från ditt Tibber-abonnemang",
"usage_tips": "Använd detta för att spåra priser eller skapa automationer som körs när el är billig" "usage_tips": "Använd detta för att spåra priser eller skapa automationer som körs när el är billig"
}, },
"current_interval_price_major": { "current_interval_price_base": {
"description": "Nuvarande elpris i huvudvaluta (EUR/kWh, NOK/kWh, osv.) för Energipanelen", "description": "Nuvarande elpris i huvudvaluta (EUR/kWh, NOK/kWh, osv.) för Energipanelen",
"long_description": "Visar nuvarande pris per kWh i huvudvaluta-enheter (t.ex. EUR/kWh istället för ct/kWh, NOK/kWh istället för øre/kWh). Denna sensor är speciellt utformad för användning med Home Assistants Energipanel, som kräver priser i standardvalutaenheter.", "long_description": "Visar nuvarande pris per kWh i huvudvaluta-enheter (t.ex. EUR/kWh istället för ct/kWh, NOK/kWh istället för øre/kWh). Denna sensor är speciellt utformad för användning med Home Assistants Energipanel, som kräver priser i standardvalutaenheter.",
"usage_tips": "Använd denna sensor när du konfigurerar Energipanelen under Inställningar → Instrumentpaneler → Energi. Välj denna sensor som 'Entitet med nuvarande pris' för att automatiskt beräkna dina energikostnader. Energipanelen multiplicerar din energiförbrukning (kWh) med detta pris för att visa totala kostnader." "usage_tips": "Använd denna sensor när du konfigurerar Energipanelen under Inställningar → Instrumentpaneler → Energi. Välj denna sensor som 'Entitet med nuvarande pris' för att automatiskt beräkna dina energikostnader. Energipanelen multiplicerar din energiförbrukning (kWh) med detta pris för att visa totala kostnader."
@@ -45,9 +58,9 @@
"usage_tips": "Använd detta för att undvika att köra apparater under toppristider" "usage_tips": "Använd detta för att undvika att köra apparater under toppristider"
}, },
"average_price_today": { "average_price_today": {
"description": "Det genomsnittliga elpriset för idag per kWh", "description": "Typiskt elpris för idag per kWh (konfigurerbart visningsformat)",
"long_description": "Visar genomsnittspriset per kWh för nuvarande dag från ditt Tibber-abonnemang", "long_description": "Visar priset per kWh för nuvarande dag från ditt Tibber-abonnemang. **Som standard visar statusen medianen** (motståndskraftig mot extrema prispikar, visar typisk prisnivå). Du kan ändra detta i integrationsinställningarna för att visa det aritmetiska medelvärdet istället. Det alternativa värdet är tillgängligt som attribut.",
"usage_tips": "Använd detta som baslinje för att jämföra nuvarande priser" "usage_tips": "Använd detta som baslinje för att jämföra nuvarande priser. För beräkningar använd: {{ state_attr('sensor.average_price_today', 'price_mean') }}"
}, },
"lowest_price_tomorrow": { "lowest_price_tomorrow": {
"description": "Det lägsta elpriset för imorgon per kWh", "description": "Det lägsta elpriset för imorgon per kWh",
@@ -60,9 +73,9 @@
"usage_tips": "Använd detta för att undvika att köra apparater under morgondagens toppristider. Användbart för att planera runt dyra perioder." "usage_tips": "Använd detta för att undvika att köra apparater under morgondagens toppristider. Användbart för att planera runt dyra perioder."
}, },
"average_price_tomorrow": { "average_price_tomorrow": {
"description": "Det genomsnittliga elpriset för imorgon per kWh", "description": "Typiskt elpris för imorgon per kWh (konfigurerbart visningsformat)",
"long_description": "Visar genomsnittspriset per kWh för morgondagen från ditt Tibber-abonnemang. Denna sensor blir otillgänglig tills morgondagens data publiceras av Tibber (vanligtvis runt 13:00-14:00 CET).", "long_description": "Visar priset per kWh för morgondagen från ditt Tibber-abonnemang. **Som standard visar statusen medianen** (motståndskraftig mot extrema prispikar). Du kan ändra detta i integrationsinställningarna för att visa det aritmetiska medelvärdet istället. Det alternativa värdet är tillgängligt som attribut. Denna sensor blir otillgänglig tills morgondagens data publiceras av Tibber (vanligtvis runt 13:00-14:00 CET).",
"usage_tips": "Använd detta som baslinje för att jämföra morgondagens priser och planera konsumtion. Jämför med dagens genomsnitt för att se om morgondagen kommer att bli dyrare eller billigare totalt sett." "usage_tips": "Använd detta som baslinje för att jämföra morgondagens priser och planera konsumtion. Jämför med dagens median för att se om morgondagen kommer att bli dyrare eller billigare totalt sett."
}, },
"yesterday_price_level": { "yesterday_price_level": {
"description": "Aggregerad prisnivå för igår", "description": "Aggregerad prisnivå för igår",
@@ -95,14 +108,14 @@
"usage_tips": "Använd detta för att planera imorgonens energiförbrukning baserat på dina personliga priströskelvärden. Jämför med idag för att avgöra om du ska skjuta upp förbrukning till imorgon eller använda energi idag." "usage_tips": "Använd detta för att planera imorgonens energiförbrukning baserat på dina personliga priströskelvärden. Jämför med idag för att avgöra om du ska skjuta upp förbrukning till imorgon eller använda energi idag."
}, },
"trailing_price_average": { "trailing_price_average": {
"description": "Det genomsnittliga elpriset för de senaste 24 timmarna per kWh", "description": "Typiskt elpris för de senaste 24 timmarna per kWh (konfigurerbart visningsformat)",
"long_description": "Visar genomsnittspriset per kWh beräknat från de senaste 24 timmarna (rullande genomsnitt) från ditt Tibber-abonnemang. Detta ger ett rullande genomsnitt som uppdateras var 15:e minut baserat på historiska data.", "long_description": "Visar priset per kWh beräknat från de senaste 24 timmarna. **Som standard visar statusen medianen** (motståndskraftig mot extrema prispikar, visar typisk prisnivå). Du kan ändra detta i integrationsinställningarna för att visa det aritmetiska medelvärdet istället. Det alternativa värdet är tillgängligt som attribut. Uppdateras var 15:e minut.",
"usage_tips": "Använd detta för att jämföra nuvarande priser mot senaste trender. Ett nuvarande pris som ligger väsentligt över detta genomsnitt kan indikera ett bra tillfälle att minska konsumtionen." "usage_tips": "Använd statusvärdet för att se den typiska nuvarande prisnivån. För kostnadsberäkningar använd: {{ state_attr('sensor.trailing_price_average', 'price_mean') }}"
}, },
"leading_price_average": { "leading_price_average": {
"description": "Det genomsnittliga elpriset för nästa 24 timmar per kWh", "description": "Typiskt elpris för nästa 24 timmar per kWh (konfigurerbart visningsformat)",
"long_description": "Visar genomsnittspriset per kWh beräknat från nästa 24 timmar (framåtblickande genomsnitt) från ditt Tibber-abonnemang. Detta ger ett framåtblickande genomsnitt baserat på tillgängliga prognosdata.", "long_description": "Visar priset per kWh beräknat från nästa 24 timmar. **Som standard visar statusen medianen** (motståndskraftig mot extrema prispikar, visar förväntad prisnivå). Du kan ändra detta i integrationsinställningarna för att visa det aritmetiska medelvärdet istället. Det alternativa värdet är tillgängligt som attribut.",
"usage_tips": "Använd detta för att planera energianvändning. Om nuvarande pris är under det framåtblickande genomsnittet kan det vara ett bra tillfälle att köra energikrävande apparater." "usage_tips": "Använd statusvärdet för att se den typiska kommande prisnivån. För kostnadsberäkningar använd: {{ state_attr('sensor.leading_price_average', 'price_mean') }}"
}, },
"trailing_price_min": { "trailing_price_min": {
"description": "Det minsta elpriset för de senaste 24 timmarna per kWh", "description": "Det minsta elpriset för de senaste 24 timmarna per kWh",
@@ -279,64 +292,74 @@
"long_description": "Visar tidsstämpeln för det senaste tillgängliga prisdataintervallet från ditt Tibber-abonnemang" "long_description": "Visar tidsstämpeln för det senaste tillgängliga prisdataintervallet från ditt Tibber-abonnemang"
}, },
"today_volatility": { "today_volatility": {
"description": "Prisvolatilitetsklassificering för idag", "description": "Hur mycket elpriserna varierar idag",
"long_description": "Visar hur mycket elpriserna varierar under dagen baserat på spridningen (skillnaden mellan högsta och lägsta pris). Klassificering: LÅG = spridning < 5 öre, MÅTTLIG = 5-15 öre, HÖG = 15-30 öre, MYCKET HÖG = >30 öre.", "long_description": "Visar om dagens priser är stabila eller har stora svängningar. Låg volatilitet innebär ganska jämna priser; timing spelar liten roll. Hög volatilitet innebär tydliga prisskillnader under dagen; bra tillfälle att flytta förbrukning till billigare perioder. `price_coefficient_variation_%` visar procentvärdet, `price_spread` visar det absoluta prisspannet.",
"usage_tips": "Använd detta för att avgöra om prisbaserad optimering är värt besväret. Till exempel, med ett balkongbatteri som har 15% effektivitetsförlust är optimering endast meningsfull när volatiliteten är åtminstone MÅTTLIG. Skapa automationer som kontrollerar volatiliteten innan laddnings-/urladdningscykler planeras." "usage_tips": "Använd detta för att avgöra om optimering är värt besväret. Vid låg volatilitet kan du köra enheter när som helst. Vid hög volatilitet sparar du märkbart genom att följa Best Price-perioder."
}, },
"tomorrow_volatility": { "tomorrow_volatility": {
"description": "Prisvolatilitetsklassificering för imorgon", "description": "Hur mycket elpriserna kommer att variera i morgon",
"long_description": "Visar hur mycket elpriserna kommer att variera under morgondagen baserat på spridningen (skillnaden mellan högsta och lägsta pris). Blir otillgänglig tills morgondagens data publiceras (vanligtvis 13:00-14:00 CET).", "long_description": "Visar om priserna i morgon blir stabila eller får stora svängningar. Tillgänglig när morgondagens data är publicerad (vanligen 13:0014:00 CET). Låg volatilitet innebär ganska jämna priser timing är inte kritisk. Hög volatilitet innebär tydliga prisskillnader under dagen bra läge att planera energikrävande uppgifter. `price_coefficient_variation_%` visar procentvärdet, `price_spread` visar den absoluta prisspannet.",
"usage_tips": "Använd detta för förhandsplanering av morgondagens energianvändning. Om morgondagen har HÖG eller MYCKET HÖG volatilitet är det värt att optimera energiförbrukningstiming. Vid LÅG volatilitet kan du köra enheter när som helst utan betydande kostnadsskillnader." "usage_tips": "Använd för att planera morgondagens förbrukning. Hög volatilitet? Planera flexibla laster i Best Price-perioder. Låg volatilitet? Kör enheter när det passar dig."
}, },
"next_24h_volatility": { "next_24h_volatility": {
"description": "Prisvolatilitetsklassificering för rullande nästa 24 timmar", "description": "Hur mycket priserna varierar de kommande 24 timmarna",
"long_description": "Visar hur mycket elpriserna varierar under de nästa 24 timmarna från nu (rullande fönster). Detta korsar daggränser och uppdateras var 15:e minut, vilket ger en framåtblickande volatilitetsbedömning oberoende av kalenderdagar.", "long_description": "Visar prisvolatilitet för ett rullande 24-timmarsfönster från nu (uppdateras var 15:e minut). Låg volatilitet innebär ganska jämna priser. Hög volatilitet innebär märkbara prissvängningar och därmed optimeringsmöjligheter. Till skillnad från idag/i morgon-sensorer korsar den här dagsgränser och ger en kontinuerlig framåtblickande bedömning. `price_coefficient_variation_%` visar procentvärdet, `price_spread` visar den absoluta prisspannet.",
"usage_tips": "Bästa sensorn för realtidsoptimeringsbeslut. Till skillnad från idag/imorgon-sensorer som växlar vid midnatt ger detta en kontinuerlig 24t volatilitetsbedömning. Använd för batteriladningsstrategier som sträcker sig över daggränser." "usage_tips": "Bäst för beslut i realtid. Använd vid planering av batteriladdning eller andra flexibla laster som kan gå över midnatt. Ger en konsekvent 24h-bild oberoende av kalenderdag."
}, },
"today_tomorrow_volatility": { "today_tomorrow_volatility": {
"description": "Kombinerad prisvolatilitetsklassificering för idag och imorgon", "description": "Kombinerad prisvolatilitet för idag och imorgon",
"long_description": "Visar volatilitet över både idag och imorgon kombinerat (när morgondagens data är tillgänglig). Ger en utökad vy av prisvariation över upp till 48 timmar. Faller tillbaka till endast idag när morgondagens data inte är tillgänglig ännu.", "long_description": "Visar den samlade volatiliteten när idag och imorgon ses tillsammans (när morgondatan finns). Visar om det finns tydliga prisskillnader över dagsgränsen. Faller tillbaka till endast idag om morgondatan saknas. Nyttig för flerdagarsoptimering. `price_coefficient_variation_%` visar procentvärdet, `price_spread` visar den absoluta prisspannet.",
"usage_tips": "Använd detta för flerdagarsplanering och för att förstå om prismöjligheter existerar över dagsgränsen. Attributen 'today_volatility' och 'tomorrow_volatility' visar individuella dagsbidrag. Användbart för planering av laddningssessioner som kan sträcka sig över midnatt." "usage_tips": "Använd för uppgifter som sträcker sig över flera dagar. Kontrollera om prisskillnaderna är stora nog för att planera efter. De enskilda dag-sensorerna visar bidrag per dag om du behöver mer detaljer."
}, },
"data_lifecycle_status": { "data_lifecycle_status": {
"description": "Aktuell status för prisdatalivscykel och cachning", "description": "Gjeldende tilstand for prisdatalivssyklus og hurtigbufring",
"long_description": "Visar om integrationen använder cachad data eller färsk data från API:et. Visar aktuell livscykelstatus: 'cached' (använder lagrad data), 'fresh' (nyss hämtad från API), 'refreshing' (hämtar för närvarande), 'searching_tomorrow' (söker aktivt efter morgondagens data efter 13:00), 'turnover_pending' (inom 15 minuter före midnatt, 23:45-00:00), eller 'error' (hämtning misslyckades). Inkluderar omfattande attribut som cache-ålder, nästa API-polling, datafullständighet och API-anropsstatistik.", "long_description": "Viser om integrasjonen bruker hurtigbufrede data eller ferske data fra API-et. Viser gjeldende livssyklustilstand: 'cached' (bruker lagrede data), 'fresh' (nettopp hentet fra API), 'refreshing' (henter for øyeblikket), 'searching_tomorrow' (søker aktivt etter morgendagens data etter 13:00), 'turnover_pending' (innen 15 minutter før midnatt, 23:45-00:00), eller 'error' (henting mislyktes). Inkluderer omfattende attributter som cache-alder, neste API-spørring, datafullstendighet og API-anropsstatistikk.",
"usage_tips": "Använd denna diagnostiksensor för att förstå datafärskhet och API-anropsmönster. Kontrollera 'cache_age'-attributet för att se hur gammal den aktuella datan är. Övervaka 'next_api_poll' för att veta när nästa uppdatering är schemalagd. Använd 'data_completeness' för att se om data för igår/idag/imorgon är tillgänglig. Räknaren 'api_calls_today' hjälper till att spåra API-användning. Perfekt för felsökning eller förståelse av integrationens beteende." "usage_tips": "Bruk denne diagnosesensoren for å forstå dataferskhet og API-anropsmønstre. Sjekk 'cache_age'-attributtet for å se hvor gamle de nåværende dataene er. Overvåk 'next_api_poll' for å vite når neste oppdatering er planlagt. Bruk 'data_completeness' for å se om data for i går/i dag/i morgen er tilgjengelig. 'api_calls_today'-telleren hjelper med å spore API-bruk. Perfekt for feilsøking eller forståelse av integrasjonens oppførsel."
}, },
"best_price_end_time": { "best_price_end_time": {
"description": "När nuvarande eller nästa billigperiod slutar", "description": "Total längd för nuvarande eller nästa billigperiod (state i timmar, attribut i minuter)",
"long_description": "Visar sluttidsstämpeln för nuvarande billigperiod när aktiv, eller slutet av nästa period när ingen period är aktiv. Visar alltid en användbar tidsreferens för planering. Returnerar 'Okänt' endast när inga perioder är konfigurerade.", "long_description": "Visar hur länge billigperioden varar. State använder timmar (decimal) för en läsbar UI; attributet `period_duration_minutes` behåller avrundade minuter för automationer. Aktiv → varaktighet för aktuell period, annars nästa.",
"usage_tips": "Använd detta för att visa en nedräkning som 'Billigperiod slutar om 2 timmar' (när aktiv) eller 'Nästa billigperiod slutar kl 14:00' (när inaktiv). Home Assistant visar automatiskt relativ tid för tidsstämpelsensorer." "usage_tips": "UI kan visa 1,5 h medan `period_duration_minutes` = 90 för automationer."
},
"best_price_period_duration": {
"description": "Längd på nuvarande/nästa billigperiod",
"long_description": "Total längd av nuvarande eller nästa billigperiod. State visas i timmar (t.ex. 1,5 h) för enkel avläsning i UI, medan attributet `period_duration_minutes` ger samma värde i minuter (t.ex. 90) för automationer. Detta värde representerar den **fullständigt planerade längden** av perioden och är konstant under hela perioden, även när återstående tid (remaining_minutes) minskar.",
"usage_tips": "Kombinera med remaining_minutes för att beräkna när långvariga enheter ska stoppas: Perioden startade för `period_duration_minutes - remaining_minutes` minuter sedan. Detta attribut stöder energioptimeringsstrategier genom att hjälpa till med att planera högförbruksaktiviteter inom billiga perioder."
}, },
"best_price_remaining_minutes": { "best_price_remaining_minutes": {
"description": "Återstående minuter i nuvarande billigperiod (0 när inaktiv)", "description": "Tid kvar i nuvarande billigperiod",
"long_description": "Visar hur många minuter som återstår i nuvarande billigperiod. Returnerar 0 när ingen period är aktiv. Uppdateras varje minut. Kontrollera binary_sensor.best_price_period för att se om en period är aktiv.", "long_description": "Visar hur mycket tid som återstår i nuvarande billigperiod. State visas i timmar (t.ex. 0,75 h) för enkel avläsning i instrumentpaneler, medan attributet `remaining_minutes` ger samma tid i minuter (t.ex. 45) för automationsvillkor. **Nedräkningstimer**: Detta värde minskar varje minut under en aktiv period. Returnerar 0 när ingen billigperiod är aktiv. Uppdateras varje minut.",
"usage_tips": "Perfekt för automationer: 'Om remaining_minutes > 0 OCH remaining_minutes < 30, starta tvättmaskin nu'. Värdet 0 gör det enkelt att kontrollera om en period är aktiv (värde > 0) eller inte (värde = 0)." "usage_tips": "För automationer: Använd attribut `remaining_minutes` som 'Om remaining_minutes > 60, starta diskmaskin nu (tillräckligt med tid för att slutföra)' eller 'Om remaining_minutes < 15, avsluta nuvarande cykel snart'. UI visar användarvänliga timmar (t.ex. 1,25 h). Värde 0 indikerar ingen aktiv billigperiod."
}, },
"best_price_progress": { "best_price_progress": {
"description": "Framsteg genom nuvarande billigperiod (0% när inaktiv)", "description": "Framsteg genom nuvarande billigperiod (0% när inaktiv)",
"long_description": "Visar framsteg genom nuvarande billigperiod som 0-100%. Returnerar 0% när ingen period är aktiv. Uppdateras varje minut. 0% betyder period just startad, 100% betyder den snart slutar.", "long_description": "Visar framsteg genom nuvarande billigperiod som 0-100%. Returnerar 0% när ingen period är aktiv. Uppdateras varje minut. 0% betyder att perioden just startade, 100% betyder att den snart slutar.",
"usage_tips": "Bra för visuella framstegsstaplar. Använd i automationer: 'Om progress > 0 OCH progress > 75, skicka meddelande att billigperiod snart slutar'. Värde 0 indikerar ingen aktiv period." "usage_tips": "Perfekt för visuella framstegsindikatorer. Använd i automationer: 'Om progress > 0 OCH progress > 75, skicka avisering om att billigperioden snart slutar'. Värde 0 indikerar ingen aktiv period."
}, },
"best_price_next_start_time": { "best_price_next_start_time": {
"description": "När nästa billigperiod startar", "description": "Total längd för nuvarande eller nästa dyrperiod (state i timmar, attribut i minuter)",
"long_description": "Visar när nästa kommande billigperiod startar. Under en aktiv period visar detta starten av NÄSTA period efter den nuvarande. Returnerar 'Okänt' endast när inga framtida perioder är konfigurerade.", "long_description": "Visar hur länge den dyra perioden varar. State använder timmar (decimal) för UI; attributet `period_duration_minutes` behåller avrundade minuter för automationer. Aktiv → varaktighet för aktuell period, annars nästa.",
"usage_tips": "Alltid användbart för framåtplanering: 'Nästa billigperiod startar om 3 timmar' (oavsett om du är i en period nu eller inte). Kombinera med automationer: 'När nästa starttid är om 10 minuter, skicka meddelande för att förbereda tvättmaskin'." "usage_tips": "UI kan visa 0,75 h medan `period_duration_minutes` = 45 för automationer."
}, },
"best_price_next_in_minutes": { "best_price_next_in_minutes": {
"description": "Minuter tills nästa billigperiod startar (0 vid övergång)", "description": "Tid kvar i nuvarande dyrperiod (state i timmar, attribut i minuter)",
"long_description": "Visar minuter tills nästa billigperiod startar. Under en aktiv period visar detta tiden till perioden EFTER den nuvarande. Returnerar 0 under korta övergångsmoment. Uppdateras varje minut.", "long_description": "Visar hur mycket tid som återstår. State använder timmar (decimal); attributet `remaining_minutes` behåller avrundade minuter för automationer. Returnerar 0 när ingen period är aktiv. Uppdateras varje minut.",
"usage_tips": "Perfekt för 'vänta tills billigperiod' automationer: 'Om next_in_minutes > 0 OCH next_in_minutes < 15, vänta innan diskmaskin startas'. Värde > 0 indikerar alltid att en framtida period är planerad." "usage_tips": "Använd `remaining_minutes` för trösklar (t.ex. > 60) medan state är lätt att läsa i timmar."
}, },
"peak_price_end_time": { "peak_price_end_time": {
"description": "När nuvarande eller nästa dyrperiod slutar", "description": "Tid tills nästa dyrperiod startar (state i timmar, attribut i minuter)",
"long_description": "Visar sluttidsstämpeln för nuvarande dyrperiod när aktiv, eller slutet av nästa period när ingen period är aktiv. Visar alltid en användbar tidsreferens för planering. Returnerar 'Okänt' endast när inga perioder är konfigurerade.", "long_description": "Visar hur länge tills nästa dyrperiod startar. State använder timmar (decimal); attributet `next_in_minutes` behåller avrundade minuter för automationer. Under en aktiv period visar detta tiden till perioden efter den aktuella. 0 under korta övergångar. Uppdateras varje minut.",
"usage_tips": "Använd detta för att visa 'Dyrperiod slutar om 1 timme' (när aktiv) eller 'Nästa dyrperiod slutar kl 18:00' (när inaktiv). Kombinera med automationer för att återuppta drift efter topp." "usage_tips": "Använd `next_in_minutes` i automationer (t.ex. < 10) medan state är lätt att läsa i timmar."
},
"peak_price_period_duration": {
"description": "Längd på nuvarande/nästa dyrperiod",
"long_description": "Total längd av nuvarande eller nästa dyrperiod. State visas i timmar (t.ex. 1,5 h) för enkel avläsning i UI, medan attributet `period_duration_minutes` ger samma värde i minuter (t.ex. 90) för automationer. Detta värde representerar den **fullständigt planerade längden** av perioden och är konstant under hela perioden, även när återstående tid (remaining_minutes) minskar.",
"usage_tips": "Kombinera med remaining_minutes för att beräkna när långvariga enheter ska stoppas: Perioden startade för `period_duration_minutes - remaining_minutes` minuter sedan. Detta attribut stöder energibesparingsstrategier genom att hjälpa till med att planera högförbruksaktiviteter utanför dyra perioder."
}, },
"peak_price_remaining_minutes": { "peak_price_remaining_minutes": {
"description": "Återstående minuter i nuvarande dyrperiod (0 när inaktiv)", "description": "Tid kvar i nuvarande dyrperiod",
"long_description": "Visar hur många minuter som återstår i nuvarande dyrperiod. Returnerar 0 när ingen period är aktiv. Uppdateras varje minut. Kontrollera binary_sensor.peak_price_period för att se om en period är aktiv.", "long_description": "Visar hur mycket tid som återstår i nuvarande dyrperiod. State visas i timmar (t.ex. 0,75 h) för enkel avläsning i instrumentpaneler, medan attributet `remaining_minutes` ger samma tid i minuter (t.ex. 45) för automationsvillkor. **Nedräkningstimer**: Detta värde minskar varje minut under en aktiv period. Returnerar 0 när ingen dyrperiod är aktiv. Uppdateras varje minut.",
"usage_tips": "Använd i automationer: 'Om remaining_minutes > 60, avbryt uppskjuten laddningssession'. Värde 0 gör det enkelt att skilja mellan aktiva (värde > 0) och inaktiva (värde = 0) perioder." "usage_tips": "För automationer: Använd attribut `remaining_minutes` som 'Om remaining_minutes > 60, avbryt uppskjuten laddningssession' eller 'Om remaining_minutes < 15, återuppta normal drift snart'. UI visar användarvänliga timmar (t.ex. 1,0 h). Värde 0 indikerar ingen aktiv dyrperiod."
}, },
"peak_price_progress": { "peak_price_progress": {
"description": "Framsteg genom nuvarande dyrperiod (0% när inaktiv)", "description": "Framsteg genom nuvarande dyrperiod (0% när inaktiv)",
@@ -349,19 +372,9 @@
"usage_tips": "Alltid användbart för planering: 'Nästa dyrperiod startar om 2 timmar'. Automation: 'När nästa starttid är om 30 minuter, minska värmetemperatur förebyggande'." "usage_tips": "Alltid användbart för planering: 'Nästa dyrperiod startar om 2 timmar'. Automation: 'När nästa starttid är om 30 minuter, minska värmetemperatur förebyggande'."
}, },
"peak_price_next_in_minutes": { "peak_price_next_in_minutes": {
"description": "Minuter tills nästa dyrperiod startar (0 vid övergång)", "description": "Tid till nästa dyrperiod",
"long_description": "Visar minuter tills nästa dyrperiod startar. Under en aktiv period visar detta tiden till perioden EFTER den nuvarande. Returnerar 0 under korta övergångsmoment. Uppdateras varje minut.", "long_description": "Visar hur länge till nästa dyrperiod. State visas i timmar (t.ex. 0,5 h) för instrumentpaneler, medan attributet `next_in_minutes` ger minuter (t.ex. 30) för automationsvillkor. Under en aktiv period visar detta tiden till perioden EFTER den nuvarande. Returnerar 0 under korta övergångsmoment. Uppdateras varje minut.",
"usage_tips": "Förebyggande automation: 'Om next_in_minutes > 0 OCH next_in_minutes < 10, slutför nuvarande laddcykel nu innan priserna ökar'." "usage_tips": "För automationer: Använd attribut `next_in_minutes` som 'Om next_in_minutes > 0 OCH next_in_minutes < 10, slutför nuvarande laddcykel nu innan priserna ökar'. Värde > 0 indikerar alltid att en framtida dyrperiod är planerad."
},
"best_price_period_duration": {
"description": "Total längd på nuvarande eller nästa billigperiod i minuter",
"long_description": "Visar den totala längden på billigperioden i minuter. Under en aktiv period visar detta hela längden av nuvarande period. När ingen period är aktiv visar detta längden på nästa kommande period. Exempel: '90 minuter' för en 1,5-timmars period.",
"usage_tips": "Kombinera med remaining_minutes för att planera uppgifter: 'Om duration = 120 OCH remaining_minutes > 90, starta tvättmaskin (tillräckligt med tid för att slutföra)'. Användbart för att förstå om perioder är tillräckligt långa för energikrävande uppgifter."
},
"peak_price_period_duration": {
"description": "Total längd på nuvarande eller nästa dyrperiod i minuter",
"long_description": "Visar den totala längden på dyrperioden i minuter. Under en aktiv period visar detta hela längden av nuvarande period. När ingen period är aktiv visar detta längden på nästa kommande period. Exempel: '60 minuter' för en 1-timmars period.",
"usage_tips": "Använd för att planera energisparåtgärder: 'Om duration > 120, minska värmetemperatur mer aggressivt (lång dyr period)'. Hjälper till att bedöma hur mycket energiförbrukning måste minskas."
}, },
"home_type": { "home_type": {
"description": "Bostadstyp (lägenhet, hus osv.)", "description": "Bostadstyp (lägenhet, hus osv.)",
@@ -434,9 +447,14 @@
"usage_tips": "Använd detta för att övervaka din abonnemangsstatus. Ställ in varningar om statusen ändras från 'Aktiv' för att säkerställa oavbruten service." "usage_tips": "Använd detta för att övervaka din abonnemangsstatus. Ställ in varningar om statusen ändras från 'Aktiv' för att säkerställa oavbruten service."
}, },
"chart_data_export": { "chart_data_export": {
"description": "Dataexport för instrumentpanelsintegrationer", "description": "Dataexport för dashboard-integrationer",
"long_description": "Denna sensor anropar get_chartdata-tjänsten med din konfigurerade YAML-konfiguration och exponerar resultatet som entitetsattribut. Statusen visar 'ready' när data är tillgänglig, 'error' vid fel, eller 'pending' före första anropet. Perfekt för instrumentpanelsintegrationer som ApexCharts som behöver läsa prisdata från entitetsattribut.", "long_description": "Denna sensor anropar get_chartdata-tjänsten med din konfigurerade YAML-konfiguration och exponerar resultatet som entitetsattribut. Statusen visar 'ready' när data är tillgänglig, 'error' vid fel, eller 'pending' före första anropet. Perfekt för dashboard-integrationer som ApexCharts som behöver läsa prisdata från entitetsattribut.",
"usage_tips": "Konfigurera YAML-parametrarna i integrationsinställningarna för att matcha ditt get_chartdata-tjänsteanrop. Sensorn uppdateras automatiskt när prisdata uppdateras (vanligtvis efter midnatt och när morgondagens data anländer). Få åtkomst till tjänstesvarsdata direkt från entitetens attribut - strukturen matchar exakt vad get_chartdata returnerar." "usage_tips": "Konfigurera YAML-parametrarna i integrationsalternativen för att matcha ditt get_chartdata-tjänstanrop. Sensorn uppdateras automatiskt när prisdata uppdateras (vanligtvis efter midnatt och när morgondagens data anländer). Få tillgång till tjänstesvarsdata direkt från entitetens attribut - strukturen matchar exakt vad get_chartdata returnerar."
},
"chart_metadata": {
"description": "Lättviktig metadata för diagramkonfiguration",
"long_description": "Tillhandahåller väsentliga diagramkonfigurationsvärden som sensorattribut. Användbart för vilket diagramkort som helst som behöver Y-axelgränser. Sensorn anropar get_chartdata med endast-metadata-läge (ingen databehandling) och extraherar: yaxis_min, yaxis_max (föreslagen Y-axelomfång för optimal skalning). Statusen återspeglar tjänstanropsresultatet: 'ready' vid framgång, 'error' vid fel, 'pending' under initialisering.",
"usage_tips": "Konfigurera via configuration.yaml under tibber_prices.chart_metadata_config (valfritt: day, subunit_currency, resolution). Sensorn uppdateras automatiskt vid pris dataändringar. Få tillgång till metadata från attribut: yaxis_min, yaxis_max. Använd med config-template-card eller vilket verktyg som helst som läser entitetsattribut - perfekt för dynamisk diagramkonfiguration utan manuella beräkningar."
} }
}, },
"binary_sensor": { "binary_sensor": {
@@ -469,11 +487,80 @@
"description": "Om realtidsförbrukningsövervakning är aktiv", "description": "Om realtidsförbrukningsövervakning är aktiv",
"long_description": "Indikerar om realtidsövervakning av elförbrukning är aktiverad och aktiv för ditt Tibber-hem. Detta kräver kompatibel mätutrustning (t.ex. Tibber Pulse) och en aktiv prenumeration.", "long_description": "Indikerar om realtidsövervakning av elförbrukning är aktiverad och aktiv för ditt Tibber-hem. Detta kräver kompatibel mätutrustning (t.ex. Tibber Pulse) och en aktiv prenumeration.",
"usage_tips": "Använd detta för att verifiera att realtidsförbrukningen är tillgänglig. Aktivera meddelanden om detta oväntat ändras till 'av', vilket indikerar potentiella hårdvaru- eller anslutningsproblem." "usage_tips": "Använd detta för att verifiera att realtidsförbrukningen är tillgänglig. Aktivera meddelanden om detta oväntat ändras till 'av', vilket indikerar potentiella hårdvaru- eller anslutningsproblem."
}
},
"number": {
"best_price_flex_override": {
"description": "Maximal procent över daglig minimumpris som intervaller kan ha och fortfarande kvalificera som 'bästa pris'. Rekommenderas: 15-20 med lättnad aktiverad (standard), eller 25-35 utan lättnad. Maximum: 50 (hårt tak för tillförlitlig perioddetektering).",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Flexibilitet'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Aktivera denna entitet för att dynamiskt justera bästa pris-detektering via automatiseringar, t.ex. högre flexibilitet för kritiska laster eller striktare krav för flexibla apparater."
}, },
"chart_data_export": { "best_price_min_distance_override": {
"description": "Dataexport för instrumentpanelsintegrationer", "description": "Minsta procentuella avstånd under dagligt genomsnitt. Intervaller måste vara så långt under genomsnittet för att kvalificera som 'bästa pris'. Hjälper att skilja äkta lågprisperioder från genomsnittspriser.",
"long_description": "Denna binär sensor anropar tjänsten get_chartdata för att exportera prissensordata i format som är kompatibelt med ApexCharts och andra instrumentpanelskomponenter. Använd denna tillsammans med custom:apexcharts-card för att visa prissensorer på din instrumentpanel.", "long_description": "När denna entitet är aktiverad överskriver värdet 'Minimiavstånd'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Konfigurera YAML-parametrarna i integrationens alternativ under 'ApexCharts-datakonfiguration'. Tjänsten kräver en giltig sensorenhet och returnerar formaterad data för kartrendring. Se dokumentationen för tillgängliga parametrar och anpassningsalternativ." "usage_tips": "Öka värdet för striktare bästa pris-kriterier. Minska om för få perioder detekteras."
},
"best_price_min_period_length_override": {
"description": "Minsta periodlängd i 15-minuters intervaller. Perioder kortare än detta rapporteras inte. Exempel: 2 = minst 30 minuter.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Minsta periodlängd'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Anpassa till typisk apparatkörtid: 2 (30 min) för snabbprogram, 4-8 (1-2 timmar) för normala cykler, 8+ för långa ECO-program."
},
"best_price_min_periods_override": {
"description": "Minsta antal bästa pris-perioder att hitta dagligen. När lättnad är aktiverad kommer systemet automatiskt att justera kriterierna för att uppnå detta antal.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Minsta antal perioder'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Ställ in detta på antalet tidskritiska uppgifter du har dagligen. Exempel: 2 för två tvattmaskinskörningar."
},
"best_price_relaxation_attempts_override": {
"description": "Antal försök att gradvis lätta på kriterierna för att uppnå minsta periodantal. Varje försök ökar flexibiliteten med 3 procent. Vid 0 används endast baskriterier.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Lättnadsförsök'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Högre värden gör perioddetektering mer adaptiv för dagar med stabila priser. Ställ in på 0 för att tvinga strikta kriterier utan lättnad."
},
"best_price_gap_count_override": {
"description": "Maximalt antal dyrare intervaller som kan tillåtas mellan billiga intervaller medan de fortfarande räknas som en sammanhängande period. Vid 0 måste billiga intervaller vara påföljande.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Glaptolerans'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Öka detta för apparater med variabel last (t.ex. värmepumpar) som kan tolerera korta dyrare intervaller. Ställ in på 0 för kontinuerligt billiga perioder."
},
"peak_price_flex_override": {
"description": "Maximal procent under daglig maximumpris som intervaller kan ha och fortfarande kvalificera som 'topppris'. Samma rekommendationer som för bästa pris-flexibilitet.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Flexibilitet'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Använd detta för att justera topppris-tröskeln vid körtid för automatiseringar som undviker förbrukning under dyra timmar."
},
"peak_price_min_distance_override": {
"description": "Minsta procentuella avstånd över dagligt genomsnitt. Intervaller måste vara så långt över genomsnittet för att kvalificera som 'topppris'.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Minimiavstånd'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Öka värdet för att endast fånga extrema pristoppar. Minska för att inkludera fler högpristider."
},
"peak_price_min_period_length_override": {
"description": "Minsta periodlängd i 15-minuters intervaller för topppriser. Kortare pristoppar rapporteras inte som perioder.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Minsta periodlängd'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Kortare värden fångar korta pristoppar. Längre värden fokuserar på ihållande högprisperioder."
},
"peak_price_min_periods_override": {
"description": "Minsta antal topppris-perioder att hitta dagligen.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Minsta antal perioder'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Ställ in detta baserat på hur många högprisperioder du vill fånga per dag för automatiseringar."
},
"peak_price_relaxation_attempts_override": {
"description": "Antal försök att lätta på kriterierna för att uppnå minsta antal topppris-perioder.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Lättnadsförsök'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Öka detta om inga perioder hittas på dagar med stabila priser. Ställ in på 0 för att tvinga strikta kriterier."
},
"peak_price_gap_count_override": {
"description": "Maximalt antal billigare intervaller som kan tillåtas mellan dyra intervaller medan de fortfarande räknas som en topppris-period.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Glaptolerans'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Högre värden fångar längre högprisperioder även med korta prisdipp. Ställ in på 0 för strikt sammanhängande topppriser."
}
},
"switch": {
"best_price_enable_relaxation_override": {
"description": "När aktiverad lättas kriterierna automatiskt för att uppnå minsta periodantal. När inaktiverad rapporteras endast perioder som uppfyller strikta kriterier (möjligen noll perioder på dagar med stabila priser).",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Uppnå minimiantal'-inställningen från alternativ-dialogen för bästa pris-periodberäkningar.",
"usage_tips": "Aktivera detta för garanterade dagliga automatiseringsmöjligheter. Inaktivera om du endast vill ha riktigt billiga perioder, även om det innebär inga perioder vissa dagar."
},
"peak_price_enable_relaxation_override": {
"description": "När aktiverad lättas kriterierna automatiskt för att uppnå minsta periodantal. När inaktiverad rapporteras endast äkta pristoppar.",
"long_description": "När denna entitet är aktiverad överskriver värdet 'Uppnå minimiantal'-inställningen från alternativ-dialogen för topppris-periodberäkningar.",
"usage_tips": "Aktivera detta för konsekventa topppris-varningar. Inaktivera för att endast fånga extrema pristoppar."
} }
}, },
"home_types": { "home_types": {
@@ -482,5 +569,15 @@
"HOUSE": "Hus", "HOUSE": "Hus",
"COTTAGE": "Stuga" "COTTAGE": "Stuga"
}, },
"time_units": {
"day": "{count} dag",
"days": "{count} dagar",
"hour": "{count} timme",
"hours": "{count} timmar",
"minute": "{count} minut",
"minutes": "{count} minuter",
"ago": "{parts} sedan",
"now": "nu"
},
"attribution": "Data tillhandahålls av Tibber" "attribution": "Data tillhandahålls av Tibber"
} }
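The period-sensor usage tips above describe threshold automations on the `remaining_minutes` attribute together with `binary_sensor.best_price_period`. A minimal Home Assistant automation sketch of that pattern; the entity IDs below are illustrative assumptions, not names guaranteed by the integration:

```yaml
# Sketch, assuming these entity IDs exist in your installation:
# - binary_sensor.best_price_period
# - sensor.best_price_remaining_minutes (attribute: remaining_minutes)
# - switch.dishwasher (any controllable appliance)
automation:
  - alias: "Start dishwasher when enough cheap time remains"
    trigger:
      - platform: state
        entity_id: binary_sensor.best_price_period
        to: "on"
    condition:
      # Only start if at least 60 minutes of the cheap period remain,
      # mirroring the "Om remaining_minutes > 60" tip above.
      - condition: template
        value_template: >
          {{ state_attr('sensor.best_price_remaining_minutes',
                        'remaining_minutes') | int(0) > 60 }}
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.dishwasher
```

The trigger fires at period start, while the template condition reads the minute-resolution attribute, matching the split the descriptions above recommend (hours in the state for dashboards, minutes in the attribute for automations).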

View file
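The `chart_metadata` entry above says configuration lives in `configuration.yaml` under `tibber_prices.chart_metadata_config` with the optional keys `day`, `subunit_currency`, and `resolution`. A hedged sketch of such a fragment; only the key names come from the description, every value is an assumed example:

```yaml
# Sketch: all values below are assumed examples, not documented defaults.
tibber_prices:
  chart_metadata_config:
    day: today              # which day's prices to derive yaxis_min/yaxis_max from
    subunit_currency: true  # e.g. report öre instead of kronor (assumption)
    resolution: hourly      # interval resolution (assumption)
```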

@@ -11,6 +11,7 @@ if TYPE_CHECKING:
from .api import TibberPricesApiClient from .api import TibberPricesApiClient
from .coordinator import TibberPricesDataUpdateCoordinator from .coordinator import TibberPricesDataUpdateCoordinator
from .interval_pool import TibberPricesIntervalPool
@dataclass @dataclass
@@ -20,6 +21,7 @@ class TibberPricesData:
client: TibberPricesApiClient client: TibberPricesApiClient
coordinator: TibberPricesDataUpdateCoordinator coordinator: TibberPricesDataUpdateCoordinator
integration: Integration integration: Integration
interval_pool: TibberPricesIntervalPool # Shared interval pool per config entry
if TYPE_CHECKING: if TYPE_CHECKING:


@@ -22,6 +22,9 @@ async def async_get_config_entry_diagnostics(
"""Return diagnostics for a config entry."""
coordinator = entry.runtime_data.coordinator
# Get period metadata from coordinator data
price_periods = coordinator.data.get("pricePeriods", {}) if coordinator.data else {}
return {
"entry": {
"entry_id": entry.entry_id,
@@ -30,16 +33,46 @@ async def async_get_config_entry_diagnostics(
"domain": entry.domain,
"title": entry.title,
"state": str(entry.state),
"home_id": entry.data.get("home_id", ""),
},
"coordinator": {
"last_update_success": coordinator.last_update_success,
"update_interval": str(coordinator.update_interval),
"is_main_entry": coordinator.is_main_entry(),
"data": coordinator.data,
"update_timestamps": {
"price": coordinator._last_price_update.isoformat() if coordinator._last_price_update else None,  # noqa: SLF001
"user": coordinator._last_user_update.isoformat() if coordinator._last_user_update else None,  # noqa: SLF001
"last_coordinator_update": coordinator._last_coordinator_update.isoformat()  # noqa: SLF001
if coordinator._last_coordinator_update  # noqa: SLF001
else None,
},
"lifecycle": {
"state": coordinator._lifecycle_state,  # noqa: SLF001
"is_fetching": coordinator._is_fetching,  # noqa: SLF001
"api_calls_today": coordinator._api_calls_today,  # noqa: SLF001
"last_api_call_date": coordinator._last_api_call_date.isoformat()  # noqa: SLF001
if coordinator._last_api_call_date  # noqa: SLF001
else None,
},
},
"periods": {
"best_price": {
"count": len(price_periods.get("best_price", {}).get("periods", [])),
"metadata": price_periods.get("best_price", {}).get("metadata", {}),
},
"peak_price": {
"count": len(price_periods.get("peak_price", {}).get("periods", [])),
"metadata": price_periods.get("peak_price", {}).get("metadata", {}),
},
},
"config": {
"options": dict(entry.options),
},
"cache_status": {
"user_data_cached": coordinator._cached_user_data is not None,  # noqa: SLF001
"has_price_data": coordinator.data is not None and "priceInfo" in (coordinator.data or {}),
"transformer_cache_valid": coordinator._data_transformer._cached_transformed_data is not None,  # noqa: SLF001
"period_calculator_cache_valid": coordinator._period_calculator._cached_periods is not None,  # noqa: SLF001
},
"error": {
"last_exception": str(coordinator.last_exception) if coordinator.last_exception else None,


@@ -44,6 +44,22 @@ class TibberPricesEntity(CoordinatorEntity[TibberPricesDataUpdateCoordinator]):
configuration_url="https://developer.tibber.com/explorer",
)
@property
def available(self) -> bool:
"""
Return if entity is available.
Entity is unavailable when:
- Coordinator has not completed first update (no data yet)
- Coordinator has encountered an error (last_update_success = False)
Note: Auth failures are handled by coordinator's update method,
which raises ConfigEntryAuthFailed and triggers reauth flow.
"""
# Return False if coordinator not ready or has errors
# Return True if coordinator has data (bool conversion handles None/empty)
return self.coordinator.last_update_success and bool(self.coordinator.data)
def _get_device_info(self) -> tuple[str, str | None, str | None]:
"""Get device name, ID and type."""
user_profile = self.coordinator.get_user_profile()
@@ -102,8 +118,10 @@ class TibberPricesEntity(CoordinatorEntity[TibberPricesDataUpdateCoordinator]):
return "Tibber Home", None
try:
# Use 'or {}' to handle None values (API may return None during maintenance)
address = self.coordinator.data.get("address") or {}
address1 = str(address.get("address1", ""))
city = str(address.get("city", ""))
app_nickname = str(self.coordinator.data.get("appNickname", ""))
home_type = str(self.coordinator.data.get("type", ""))
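The `or {}` guard can be shown in isolation (plain dicts here, no Home Assistant objects; the payload is illustrative):

```python
# API payload where "address" is present but explicitly None (e.g. during maintenance)
data = {"address": None, "appNickname": ""}

# data.get("address", {}) would return None here, because the key exists;
# "or {}" collapses both a missing key and an explicit None to an empty dict
address = data.get("address") or {}

assert address.get("address1", "") == ""
assert address.get("city", "") == ""
```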


@@ -49,7 +49,7 @@ def build_period_attributes(period_data: dict) -> dict:
}
def add_description_attributes(  # noqa: PLR0913, PLR0912
attributes: dict,
platform: str,
translation_key: str | None,
@@ -61,8 +61,13 @@ def add_description_attributes(  # noqa: PLR0913
"""
Add description attributes from custom translations to an existing attributes dict.
The 'description' attribute is always present, but its content changes based on
CONF_EXTENDED_DESCRIPTIONS setting:
- When disabled: Uses short 'description' from translations
- When enabled: Uses 'long_description' from translations (falls back to short if not available)
Additionally, when CONF_EXTENDED_DESCRIPTIONS is enabled, 'usage_tips' is added as
a separate attribute.
This function modifies the attributes dict in-place. By default, descriptions are
added at the END of the dict (after all other attributes). For special cases like
@@ -95,20 +100,27 @@ def add_description_attributes(  # noqa: PLR0913
# Build description dict
desc_attrs: dict[str, str] = {}
extended_descriptions = config_entry.options.get(
CONF_EXTENDED_DESCRIPTIONS,
config_entry.data.get(CONF_EXTENDED_DESCRIPTIONS, DEFAULT_EXTENDED_DESCRIPTIONS),
)
# Choose description based on extended_descriptions setting
if extended_descriptions:
# Use long_description as description content (if available)
description = get_entity_description(platform, translation_key, language, "long_description")
if not description:
# Fallback to short description if long_description not available
description = get_entity_description(platform, translation_key, language, "description")
else:
# Use short description
description = get_entity_description(platform, translation_key, language, "description")
if description:
desc_attrs["description"] = description
# Add usage_tips as separate attribute if extended_descriptions enabled
if extended_descriptions:
usage_tips = get_entity_description(platform, translation_key, language, "usage_tips")
if usage_tips:
desc_attrs["usage_tips"] = usage_tips
@@ -140,7 +152,7 @@ def add_description_attributes(  # noqa: PLR0913
attributes[key] = value
async def async_add_description_attributes(  # noqa: PLR0913, PLR0912
attributes: dict,
platform: str,
translation_key: str | None,
@@ -179,32 +191,45 @@ async def async_add_description_attributes(  # noqa: PLR0913
# Build description dict
desc_attrs: dict[str, str] = {}
extended_descriptions = config_entry.options.get(
CONF_EXTENDED_DESCRIPTIONS,
config_entry.data.get(CONF_EXTENDED_DESCRIPTIONS, DEFAULT_EXTENDED_DESCRIPTIONS),
)
# Choose description based on extended_descriptions setting
if extended_descriptions:
# Use long_description as description content (if available)
description = await async_get_entity_description(
hass,
platform,
translation_key,
language,
"long_description",
)
if not description:
# Fallback to short description if long_description not available
description = await async_get_entity_description(
hass,
platform,
translation_key,
language,
"description",
)
else:
# Use short description
description = await async_get_entity_description(
hass,
platform,
translation_key,
language,
"description",
)
if description:
desc_attrs["description"] = description
# Add usage_tips as separate attribute if extended_descriptions enabled
if extended_descriptions:
usage_tips = await async_get_entity_description(
hass,
platform,


@@ -2,7 +2,7 @@
Common helper functions for entities across platforms.
This module provides utility functions used by both sensor and binary_sensor platforms:
- Price value conversion (major/subunit currency units)
- Translation helpers (price levels, ratings)
- Time-based calculations (rolling hour center index)
@@ -14,28 +14,52 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from custom_components.tibber_prices.const import get_display_unit_factor, get_price_level_translation
if TYPE_CHECKING:
from datetime import datetime
from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
from custom_components.tibber_prices.data import TibberPricesConfigEntry
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
def get_price_value(
price: float,
*,
in_euro: bool | None = None,
config_entry: ConfigEntry | TibberPricesConfigEntry | None = None,
) -> float:
"""
Convert price based on unit.
NOTE: This function supports two modes for backward compatibility:
1. Legacy mode: in_euro=True/False (hardcoded conversion)
2. New mode: config_entry (config-driven conversion)
New code should use get_display_unit_factor(config_entry) directly.
Args:
price: Price value to convert.
in_euro: (Legacy) If True, return in base currency; if False, in subunit currency.
config_entry: (New) Config entry to get display unit configuration.
Returns:
Price in requested unit (major or subunit currency units).
"""
# Legacy mode: use in_euro parameter
if in_euro is not None:
return price if in_euro else round(price * 100, 2)
# New mode: use config_entry
if config_entry is not None:
factor = get_display_unit_factor(config_entry)
return round(price * factor, 2)
# Fallback: default to subunit currency (backward compatibility)
return round(price * 100, 2)
def translate_level(hass: HomeAssistant, level: str) -> str:


@@ -17,6 +17,7 @@ from custom_components.tibber_prices.const import (
PRICE_RATING_ICON_MAPPING,
VOLATILITY_ICON_MAPPING,
)
from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
from custom_components.tibber_prices.entity_utils.helpers import find_rolling_hour_center_index
from custom_components.tibber_prices.sensor.helpers import aggregate_level_data
from custom_components.tibber_prices.utils.price import find_price_data_for_interval
@@ -84,19 +85,25 @@ def get_dynamic_icon(
def get_trend_icon(key: str, value: Any) -> str | None:
"""Get icon for trend sensors using 5-level trend scale."""
# Handle next_price_trend_change TIMESTAMP sensor differently
# (icon based on attributes, not value which is a timestamp)
if key == "next_price_trend_change":
return None  # Will be handled by sensor's icon property using attributes
if not key.startswith("price_trend_") and key != "current_price_trend":
return None
if not isinstance(value, str):
return None
# 5-level trend icons: strongly uses double arrows, normal uses single
trend_icons = {
"strongly_rising": "mdi:chevron-double-up",  # Strong upward movement
"rising": "mdi:trending-up",  # Normal upward trend
"stable": "mdi:trending-neutral",  # No significant change
"falling": "mdi:trending-down",  # Normal downward trend
"strongly_falling": "mdi:chevron-double-down",  # Strong downward movement
}
return trend_icons.get(value)
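The mapping behaves like a plain dict lookup, so unmapped trend values fall through to None and the sensor keeps its default icon:

```python
trend_icons = {
    "strongly_rising": "mdi:chevron-double-up",
    "rising": "mdi:trending-up",
    "stable": "mdi:trending-neutral",
    "falling": "mdi:trending-down",
    "strongly_falling": "mdi:chevron-double-down",
}

assert trend_icons.get("strongly_falling") == "mdi:chevron-double-down"
# Unknown values yield None -> caller falls back to the default icon
assert trend_icons.get("unknown") is None
```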
@@ -196,7 +203,7 @@
return None
# Only current price sensors get dynamic icons
if key in ("current_interval_price", "current_interval_price_base"):
level = get_price_level_for_icon(coordinator_data, interval_offset=0, time=time)
if level:
return PRICE_LEVEL_CASH_ICON_MAPPING.get(level.upper())
@@ -319,12 +326,11 @@ def get_price_level_for_icon(
if not coordinator_data or interval_offset is None:
return None
now = time.now()
# Interval-based lookup
target_time = now + timedelta(minutes=_INTERVAL_MINUTES * interval_offset)
interval_data = find_price_data_for_interval(coordinator_data, target_time, time=time)
if not interval_data or "level" not in interval_data:
return None
@@ -358,8 +364,8 @@ def get_rolling_hour_price_level_for_icon(
if not coordinator_data:
return None
# Get all intervals (yesterday, today, tomorrow) via helper
all_prices = get_intervals_for_day_offsets(coordinator_data, [-1, 0, 1])
if not all_prices:
return None


@@ -0,0 +1,33 @@
{
"services": {
"get_price": {
"service": "mdi:table-search"
},
"get_chartdata": {
"service": "mdi:chart-bar",
"sections": {
"general": "mdi:identifier",
"selection": "mdi:calendar-range",
"filters": "mdi:filter-variant",
"transformation": "mdi:tune",
"format": "mdi:file-table",
"arrays_of_objects": "mdi:code-json",
"arrays_of_arrays": "mdi:code-brackets"
}
},
"get_apexcharts_yaml": {
"service": "mdi:chart-line",
"sections": {
"entry_id": "mdi:identifier",
"day": "mdi:calendar-range",
"level_type": "mdi:format-list-bulleted-type",
"resolution": "mdi:timer-sand",
"highlight_best_price": "mdi:battery-charging-low",
"highlight_peak_price": "mdi:battery-alert"
}
},
"refresh_user_data": {
"service": "mdi:refresh"
}
}
}


@@ -0,0 +1,21 @@
"""Interval Pool - Intelligent interval caching and routing."""
from .manager import TibberPricesIntervalPool
from .routing import get_price_intervals_for_range
from .storage import (
INTERVAL_POOL_STORAGE_VERSION,
async_load_pool_state,
async_remove_pool_storage,
async_save_pool_state,
get_storage_key,
)
__all__ = [
"INTERVAL_POOL_STORAGE_VERSION",
"TibberPricesIntervalPool",
"async_load_pool_state",
"async_remove_pool_storage",
"async_save_pool_state",
"get_price_intervals_for_range",
"get_storage_key",
]


@@ -0,0 +1,206 @@
"""Fetch group cache for price intervals."""
from __future__ import annotations
import logging
from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
from homeassistant.util import dt as dt_utils
if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator.time_service import (
TibberPricesTimeService,
)
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Protected date range: day-before-yesterday to tomorrow (4 days total)
PROTECTED_DAYS_BEFORE = 2 # day-before-yesterday + yesterday
PROTECTED_DAYS_AFTER = 1 # tomorrow
class TibberPricesIntervalPoolFetchGroupCache:
"""
Storage for fetch groups with protected range management.
A fetch group is a collection of intervals fetched at the same time,
stored together with their fetch timestamp for GC purposes.
Structure:
{
"fetched_at": datetime, # When this group was fetched
"intervals": [dict, ...] # List of interval dicts
}
Protected Range:
Intervals within day-before-yesterday to tomorrow are protected
and never evicted from cache. This range shifts daily automatically.
Example (today = 2025-11-25):
Protected: 2025-11-23 00:00 to 2025-11-27 00:00
"""
def __init__(self, *, time_service: TibberPricesTimeService | None = None) -> None:
"""Initialize empty fetch group cache with optional TimeService."""
self._fetch_groups: list[dict[str, Any]] = []
self._time_service = time_service
# Protected range cache (invalidated daily)
self._protected_range_cache: tuple[str, str] | None = None
self._protected_range_cache_date: str | None = None
def add_fetch_group(
self,
intervals: list[dict[str, Any]],
fetched_at: datetime,
) -> int:
"""
Add new fetch group to cache.
Args:
intervals: List of interval dicts (sorted by startsAt).
fetched_at: Timestamp when intervals were fetched.
Returns:
Index of the newly added fetch group.
"""
fetch_group = {
"fetched_at": fetched_at,
"intervals": intervals,
}
fetch_group_index = len(self._fetch_groups)
self._fetch_groups.append(fetch_group)
_LOGGER_DETAILS.debug(
"Added fetch group %d: %d intervals (fetched at %s)",
fetch_group_index,
len(intervals),
fetched_at.isoformat(),
)
return fetch_group_index
def get_fetch_groups(self) -> list[dict[str, Any]]:
"""Get all fetch groups (read-only access)."""
return self._fetch_groups
def set_fetch_groups(self, fetch_groups: list[dict[str, Any]]) -> None:
"""Replace all fetch groups (used during GC)."""
self._fetch_groups = fetch_groups
def get_protected_range(self) -> tuple[str, str]:
"""
Get protected date range as ISO strings.
Protected range: day-before-yesterday 00:00 to day-after-tomorrow 00:00.
This range shifts daily automatically.
Time Machine Support:
If time_service was provided at init, uses time_service.now() for
"today" calculation. This protects the correct date range when
simulating a different date.
Returns:
Tuple of (start_iso, end_iso) for protected range.
Start is inclusive, end is exclusive.
Example (today = 2025-11-25):
Returns: ("2025-11-23T00:00:00+01:00", "2025-11-27T00:00:00+01:00")
Protected days: 2025-11-23, 2025-11-24, 2025-11-25, 2025-11-26
"""
# Use TimeService if available (Time Machine support), else real time
now = self._time_service.now() if self._time_service else dt_utils.now()
today_date_str = now.date().isoformat()
# Check cache validity (invalidate daily)
if self._protected_range_cache_date == today_date_str and self._protected_range_cache:
return self._protected_range_cache
# Calculate new protected range
today_midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
# Start: day-before-yesterday at 00:00
start_dt = today_midnight - timedelta(days=PROTECTED_DAYS_BEFORE)
# End: day after tomorrow at 00:00 (exclusive, so tomorrow is included)
end_dt = today_midnight + timedelta(days=PROTECTED_DAYS_AFTER + 1)
# Convert to ISO strings and cache
start_iso = start_dt.isoformat()
end_iso = end_dt.isoformat()
self._protected_range_cache = (start_iso, end_iso)
self._protected_range_cache_date = today_date_str
return start_iso, end_iso
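The date arithmetic matches the docstring example and can be checked standalone (naive datetimes here; the real code is timezone-aware):

```python
from datetime import datetime, timedelta

PROTECTED_DAYS_BEFORE = 2  # day-before-yesterday + yesterday
PROTECTED_DAYS_AFTER = 1   # tomorrow

def protected_range(now: datetime) -> tuple[str, str]:
    """Compute [start, end) of the protected range from 'now'."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start = midnight - timedelta(days=PROTECTED_DAYS_BEFORE)
    end = midnight + timedelta(days=PROTECTED_DAYS_AFTER + 1)  # exclusive end
    return start.isoformat(), end.isoformat()

start, end = protected_range(datetime(2025, 11, 25, 14, 30))
assert start == "2025-11-23T00:00:00"
assert end == "2025-11-27T00:00:00"
```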
def is_interval_protected(self, interval: dict[str, Any]) -> bool:
"""
Check if interval is within protected date range.
Protected intervals are never evicted from cache.
Args:
interval: Interval dict with "startsAt" ISO timestamp.
Returns:
True if interval is protected (within protected range).
"""
starts_at_iso = interval["startsAt"]
start_protected_iso, end_protected_iso = self.get_protected_range()
# Fast string comparison (ISO timestamps are lexicographically sortable)
return start_protected_iso <= starts_at_iso < end_protected_iso
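The string comparison works because ISO-8601 timestamps sort lexicographically, provided all values share the same format and UTC offset (mixed offsets would require datetime parsing instead):

```python
start = "2025-11-23T00:00:00+01:00"
end = "2025-11-27T00:00:00+01:00"

# Inside the protected range
assert start <= "2025-11-25T13:15:00+01:00" < end
# Start is inclusive, end is exclusive
assert start <= start < end
assert not (start <= end < end)
```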
def count_total_intervals(self) -> int:
"""Count total intervals across all fetch groups."""
return sum(len(group["intervals"]) for group in self._fetch_groups)
def to_dict(self) -> dict[str, Any]:
"""
Serialize fetch groups for storage.
Returns:
Dict with serializable fetch groups.
"""
return {
"fetch_groups": [
{
"fetched_at": group["fetched_at"].isoformat(),
"intervals": group["intervals"],
}
for group in self._fetch_groups
],
}
@classmethod
def from_dict(cls, data: dict[str, Any]) -> TibberPricesIntervalPoolFetchGroupCache:
"""
Restore fetch groups from storage.
Args:
data: Dict with "fetch_groups" list.
Returns:
TibberPricesIntervalPoolFetchGroupCache instance with restored data.
"""
cache = cls()
fetch_groups_data = data.get("fetch_groups", [])
cache._fetch_groups = [
{
"fetched_at": datetime.fromisoformat(group["fetched_at"]),
"intervals": group["intervals"],
}
for group in fetch_groups_data
]
return cache


@@ -0,0 +1,321 @@
"""Interval fetcher - coverage check and API coordination for interval pool."""
from __future__ import annotations
import logging
from datetime import UTC, datetime, timedelta
from typing import TYPE_CHECKING, Any
from homeassistant.util import dt as dt_utils
if TYPE_CHECKING:
from collections.abc import Callable
from custom_components.tibber_prices.api import TibberPricesApiClient
from .cache import TibberPricesIntervalPoolFetchGroupCache
from .index import TibberPricesIntervalPoolTimestampIndex
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Resolution change date (hourly before, quarter-hourly after)
# Use UTC for constant - timezone adjusted at runtime when comparing
RESOLUTION_CHANGE_DATETIME = datetime(2025, 10, 1, tzinfo=UTC)
RESOLUTION_CHANGE_ISO = "2025-10-01T00:00:00"
# Interval lengths in minutes
INTERVAL_HOURLY = 60
INTERVAL_QUARTER_HOURLY = 15
# Minimum gap sizes in seconds
MIN_GAP_HOURLY = 3600 # 1 hour
MIN_GAP_QUARTER_HOURLY = 900 # 15 minutes
# Tolerance for time comparisons (±1 second for floating point/timezone issues)
TIME_TOLERANCE_SECONDS = 1
TIME_TOLERANCE_MINUTES = 1
class TibberPricesIntervalPoolFetcher:
"""Fetch missing intervals from API based on coverage check."""
def __init__(
self,
api: TibberPricesApiClient,
cache: TibberPricesIntervalPoolFetchGroupCache,
index: TibberPricesIntervalPoolTimestampIndex,
home_id: str,
) -> None:
"""
Initialize fetcher.
Args:
api: API client for Tibber GraphQL queries.
cache: Fetch group cache for storage operations.
index: Timestamp index for gap detection.
home_id: Tibber home ID for API calls.
"""
self._api = api
self._cache = cache
self._index = index
self._home_id = home_id
def check_coverage(
self,
cached_intervals: list[dict[str, Any]],
start_time_iso: str,
end_time_iso: str,
) -> list[tuple[str, str]]:
"""
Check cache coverage and find missing time ranges.
This method minimizes API calls by:
1. Finding all gaps in cached intervals
2. Treating each cached interval as a discrete timestamp
3. Gaps are time ranges between consecutive cached timestamps
Handles both resolutions:
- Pre-2025-10-01: Hourly intervals (:00:00 only)
- Post-2025-10-01: Quarter-hourly intervals (:00:00, :15:00, :30:00, :45:00)
- DST transitions (23h/25h days)
The API requires an interval count (first: X parameter).
For historical data (pre-2025-10-01), Tibber only stored hourly prices.
The API returns whatever intervals exist for the requested period.
Args:
cached_intervals: List of cached intervals (may be empty).
start_time_iso: ISO timestamp string (inclusive).
end_time_iso: ISO timestamp string (exclusive).
Returns:
List of (start_iso, end_iso) tuples representing missing ranges.
Each tuple represents a continuous time span that needs fetching.
Ranges are automatically split at resolution change boundary.
Example:
Requested: 2025-11-13T00:00:00 to 2025-11-13T02:00:00
Cached: [00:00, 00:15, 01:30, 01:45]
Gaps: [(00:15, 01:30)] - missing intervals between groups
"""
if not cached_intervals:
# No cache → fetch entire range
return [(start_time_iso, end_time_iso)]
# Filter and sort cached intervals within requested range
in_range_intervals = [
interval for interval in cached_intervals if start_time_iso <= interval["startsAt"] < end_time_iso
]
sorted_intervals = sorted(in_range_intervals, key=lambda x: x["startsAt"])
if not sorted_intervals:
# All cached intervals are outside requested range
return [(start_time_iso, end_time_iso)]
missing_ranges = []
# Parse start/end times once
start_time_dt = datetime.fromisoformat(start_time_iso)
end_time_dt = datetime.fromisoformat(end_time_iso)
# Get first cached interval datetime for resolution logic
first_cached_dt = datetime.fromisoformat(sorted_intervals[0]["startsAt"])
resolution_change_dt = RESOLUTION_CHANGE_DATETIME.replace(tzinfo=first_cached_dt.tzinfo)
# Check gap before first cached interval
time_diff_before_first = (first_cached_dt - start_time_dt).total_seconds()
if time_diff_before_first > TIME_TOLERANCE_SECONDS:
missing_ranges.append((start_time_iso, sorted_intervals[0]["startsAt"]))
_LOGGER_DETAILS.debug(
"Missing range before first cached interval: %s to %s (%.1f seconds)",
start_time_iso,
sorted_intervals[0]["startsAt"],
time_diff_before_first,
)
# Check gaps between consecutive cached intervals
for i in range(len(sorted_intervals) - 1):
current_interval = sorted_intervals[i]
next_interval = sorted_intervals[i + 1]
current_start = current_interval["startsAt"]
next_start = next_interval["startsAt"]
# Parse to datetime for accurate time difference
current_dt = datetime.fromisoformat(current_start)
next_dt = datetime.fromisoformat(next_start)
# Calculate time difference in minutes
time_diff_minutes = (next_dt - current_dt).total_seconds() / 60
# Determine expected interval length based on date
expected_interval_minutes = (
INTERVAL_HOURLY if current_dt < resolution_change_dt else INTERVAL_QUARTER_HOURLY
)
# Only create gap if intervals are NOT consecutive
if time_diff_minutes > expected_interval_minutes + TIME_TOLERANCE_MINUTES:
# Gap exists - missing intervals between them
# Missing range starts AFTER current interval ends
current_interval_end = current_dt + timedelta(minutes=expected_interval_minutes)
missing_ranges.append((current_interval_end.isoformat(), next_start))
_LOGGER_DETAILS.debug(
"Missing range between cached intervals: %s (ends at %s) to %s (%.1f min, expected %d min)",
current_start,
current_interval_end.isoformat(),
next_start,
time_diff_minutes,
expected_interval_minutes,
)
# Check gap after last cached interval
# An interval's startsAt time represents the START of that interval.
# The interval covers [startsAt, startsAt + interval_length).
# So the last interval ENDS at (startsAt + interval_length), not at startsAt!
last_cached_dt = datetime.fromisoformat(sorted_intervals[-1]["startsAt"])
# Calculate when the last interval ENDS
interval_minutes = INTERVAL_QUARTER_HOURLY if last_cached_dt >= resolution_change_dt else INTERVAL_HOURLY
last_interval_end_dt = last_cached_dt + timedelta(minutes=interval_minutes)
# Only create gap if there's uncovered time AFTER the last interval ends
time_diff_after_last = (end_time_dt - last_interval_end_dt).total_seconds()
# Need at least one full interval of gap
min_gap_seconds = MIN_GAP_QUARTER_HOURLY if last_cached_dt >= resolution_change_dt else MIN_GAP_HOURLY
if time_diff_after_last >= min_gap_seconds:
# Missing range starts AFTER the last cached interval ends
missing_ranges.append((last_interval_end_dt.isoformat(), end_time_iso))
_LOGGER_DETAILS.debug(
"Missing range after last cached interval: %s (ends at %s) to %s (%.1f seconds, need >= %d)",
sorted_intervals[-1]["startsAt"],
last_interval_end_dt.isoformat(),
end_time_iso,
time_diff_after_last,
min_gap_seconds,
)
if not missing_ranges:
_LOGGER.debug(
"Full coverage - all intervals cached for range %s to %s",
start_time_iso,
end_time_iso,
)
return missing_ranges
# Split ranges at resolution change boundary (2025-10-01 00:00:00)
# This simplifies interval count calculation in API calls:
# - Pre-2025-10-01: Always hourly (1 interval/hour)
# - Post-2025-10-01: Always quarter-hourly (4 intervals/hour)
return self._split_at_resolution_boundary(missing_ranges)
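The gap detection reduces to walking the sorted cached timestamps, reproduced here as a simplified quarter-hourly sketch (no tolerance handling or resolution-change split; `find_gaps` is illustrative):

```python
from datetime import datetime, timedelta

def find_gaps(cached_starts: list[str], start_iso: str, end_iso: str, step_min: int = 15) -> list[tuple[str, str]]:
    """Simplified gap finder over sorted ISO 'startsAt' strings."""
    gaps = []
    starts = sorted(s for s in cached_starts if start_iso <= s < end_iso)
    if not starts:
        return [(start_iso, end_iso)]  # nothing cached -> fetch everything
    if starts[0] > start_iso:  # gap before first cached interval
        gaps.append((start_iso, starts[0]))
    for cur, nxt in zip(starts, starts[1:]):  # gaps between cached intervals
        cur_end = datetime.fromisoformat(cur) + timedelta(minutes=step_min)
        if cur_end.isoformat() < nxt:
            gaps.append((cur_end.isoformat(), nxt))
    # An interval covers [startsAt, startsAt + step), so measure from its END
    last_end = datetime.fromisoformat(starts[-1]) + timedelta(minutes=step_min)
    if (datetime.fromisoformat(end_iso) - last_end).total_seconds() >= step_min * 60:
        gaps.append((last_end.isoformat(), end_iso))
    return gaps

# Docstring example: cached [00:00, 00:15, 01:30, 01:45] over 00:00-02:00
cached = ["2025-11-13T00:00:00", "2025-11-13T00:15:00",
          "2025-11-13T01:30:00", "2025-11-13T01:45:00"]
assert find_gaps(cached, "2025-11-13T00:00:00", "2025-11-13T02:00:00") == [
    ("2025-11-13T00:30:00", "2025-11-13T01:30:00")
]
```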
def _split_at_resolution_boundary(self, ranges: list[tuple[str, str]]) -> list[tuple[str, str]]:
"""
Split time ranges at resolution change boundary.
Args:
ranges: List of (start_iso, end_iso) tuples.
Returns:
List of ranges split at 2025-10-01T00:00:00 boundary.
"""
split_ranges = []
for start_iso, end_iso in ranges:
# Check if range crosses the boundary
if start_iso < RESOLUTION_CHANGE_ISO < end_iso:
# Split into two ranges: before and after boundary
split_ranges.append((start_iso, RESOLUTION_CHANGE_ISO))
split_ranges.append((RESOLUTION_CHANGE_ISO, end_iso))
_LOGGER_DETAILS.debug(
"Split range at resolution boundary: (%s, %s) → (%s, %s) + (%s, %s)",
start_iso,
end_iso,
start_iso,
RESOLUTION_CHANGE_ISO,
RESOLUTION_CHANGE_ISO,
end_iso,
)
else:
# Range doesn't cross boundary - keep as is
split_ranges.append((start_iso, end_iso))
return split_ranges
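The boundary split relies on lexicographic comparison of same-format ISO strings. A minimal standalone sketch (BOUNDARY_ISO is an assumed stand-in for RESOLUTION_CHANGE_ISO):

```python
# Hypothetical stand-in for the integration's RESOLUTION_CHANGE_ISO constant.
BOUNDARY_ISO = "2025-10-01T00:00:00"

def split_at_boundary(ranges):
    """Split (start_iso, end_iso) tuples that straddle BOUNDARY_ISO."""
    out = []
    for start_iso, end_iso in ranges:
        if start_iso < BOUNDARY_ISO < end_iso:
            # Range crosses the boundary: emit a pre- and a post-boundary range.
            out.append((start_iso, BOUNDARY_ISO))
            out.append((BOUNDARY_ISO, end_iso))
        else:
            out.append((start_iso, end_iso))
    return out
```

String comparison is safe here only because all timestamps share the normalized `YYYY-MM-DDTHH:MM:SS` prefix format.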
async def fetch_missing_ranges(
self,
api_client: TibberPricesApiClient,
user_data: dict[str, Any],
missing_ranges: list[tuple[str, str]],
*,
on_intervals_fetched: Callable[[list[dict[str, Any]], str], None] | None = None,
) -> list[list[dict[str, Any]]]:
"""
Fetch missing intervals from API.
Makes one API call per missing range. Uses routing logic to select
the optimal endpoint (PRICE_INFO vs PRICE_INFO_RANGE).
Args:
api_client: TibberPricesApiClient instance for API calls.
user_data: User data dict containing home metadata.
missing_ranges: List of (start_iso, end_iso) tuples to fetch.
on_intervals_fetched: Optional callback for each fetch result.
Receives (intervals, fetch_time_iso).
Returns:
List of interval lists (one per missing range).
Each sublist contains intervals from one API call.
Raises:
TibberPricesApiClientError: If API calls fail.
"""
# Import here to avoid circular dependency
from custom_components.tibber_prices.interval_pool.routing import ( # noqa: PLC0415
get_price_intervals_for_range,
)
fetch_time_iso = dt_utils.now().isoformat()
all_fetched_intervals = []
for idx, (missing_start_iso, missing_end_iso) in enumerate(missing_ranges, start=1):
_LOGGER_DETAILS.debug(
"Fetching from Tibber API (%d/%d) for home %s: range %s to %s",
idx,
len(missing_ranges),
self._home_id,
missing_start_iso,
missing_end_iso,
)
# Parse ISO strings back to datetime for API call
missing_start_dt = datetime.fromisoformat(missing_start_iso)
missing_end_dt = datetime.fromisoformat(missing_end_iso)
# Fetch intervals from API - routing returns ALL intervals (unfiltered)
fetched_intervals = await get_price_intervals_for_range(
api_client=api_client,
home_id=self._home_id,
user_data=user_data,
start_time=missing_start_dt,
end_time=missing_end_dt,
)
all_fetched_intervals.append(fetched_intervals)
_LOGGER_DETAILS.debug(
"Received %d intervals from Tibber API for home %s",
len(fetched_intervals),
self._home_id,
)
# Notify callback if provided (for immediate caching)
if on_intervals_fetched:
on_intervals_fetched(fetched_intervals, fetch_time_iso)
return all_fetched_intervals
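The fetch loop's shape (one awaited call per missing range, with an optional callback so each result can be cached immediately rather than after all fetches complete) can be sketched generically; `fetch_one` is a hypothetical stand-in for the routing call:

```python
import asyncio

async def fetch_ranges(fetch_one, missing_ranges, on_fetched=None):
    """One API call per (start_iso, end_iso) range.

    fetch_one(start_iso, end_iso) -> list of interval dicts.
    on_fetched, if given, receives each result as soon as it arrives.
    """
    results = []
    for start_iso, end_iso in missing_ranges:
        intervals = await fetch_one(start_iso, end_iso)
        results.append(intervals)
        if on_fetched:
            on_fetched(intervals)  # e.g. immediate caching
    return results
```

The per-result callback is what lets a later range's coverage check see intervals fetched for an earlier range.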


@@ -0,0 +1,283 @@
"""Garbage collector for interval cache eviction."""
from __future__ import annotations
import logging
from datetime import datetime
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from .cache import TibberPricesIntervalPoolFetchGroupCache
from .index import TibberPricesIntervalPoolTimestampIndex
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Maximum number of intervals to cache
# 10 days @ 15-min resolution = 10 * 96 = 960 intervals
MAX_CACHE_SIZE = 960
def _normalize_starts_at(starts_at: datetime | str) -> str:
"""Normalize startsAt to consistent format (YYYY-MM-DDTHH:MM:SS)."""
if isinstance(starts_at, datetime):
return starts_at.strftime("%Y-%m-%dT%H:%M:%S")
return starts_at[:19]
class TibberPricesIntervalPoolGarbageCollector:
"""
Manages cache eviction and dead interval cleanup.
Eviction Strategy:
- Evict oldest fetch groups first (by fetched_at timestamp)
- Protected intervals (day-before-yesterday to tomorrow) are NEVER evicted
- Evict complete fetch groups, not individual intervals
Dead Interval Cleanup:
When intervals are "touched" (re-fetched), they move to a new fetch group
but remain in the old group. This creates "dead intervals" that occupy
memory but are no longer referenced by the index.
"""
def __init__(
self,
cache: TibberPricesIntervalPoolFetchGroupCache,
index: TibberPricesIntervalPoolTimestampIndex,
home_id: str,
) -> None:
"""
Initialize garbage collector.
Args:
home_id: Home ID for logging purposes.
cache: Fetch group cache to manage.
index: Timestamp index for living interval detection.
"""
self._home_id = home_id
self._cache = cache
self._index = index
def run_gc(self) -> bool:
"""
Run garbage collection if needed.
Process:
1. Clean up dead intervals from all fetch groups
2. Count total intervals
3. If > MAX_CACHE_SIZE, evict oldest fetch groups
4. Rebuild index after eviction
Returns:
True if any cleanup or eviction happened, False otherwise.
"""
fetch_groups = self._cache.get_fetch_groups()
# Phase 1: Clean up dead intervals
dead_count = self._cleanup_dead_intervals(fetch_groups)
if dead_count > 0:
_LOGGER_DETAILS.debug(
"GC cleaned %d dead intervals (home %s)",
dead_count,
self._home_id,
)
# Phase 1.5: Remove empty fetch groups (after dead interval cleanup)
empty_removed = self._remove_empty_groups(fetch_groups)
if empty_removed > 0:
_LOGGER_DETAILS.debug(
"GC removed %d empty fetch groups (home %s)",
empty_removed,
self._home_id,
)
# Phase 2: Count total intervals after cleanup
total_intervals = self._cache.count_total_intervals()
if total_intervals <= MAX_CACHE_SIZE:
_LOGGER_DETAILS.debug(
"GC cleanup only for home %s: %d intervals <= %d limit (no eviction needed)",
self._home_id,
total_intervals,
MAX_CACHE_SIZE,
)
return dead_count > 0
# Phase 3: Evict old fetch groups
evicted_indices = self._evict_old_groups(fetch_groups, total_intervals)
if not evicted_indices:
# All intervals are protected, cannot evict
return dead_count > 0 or empty_removed > 0
# Phase 4: Rebuild cache and index
new_fetch_groups = [group for idx, group in enumerate(fetch_groups) if idx not in evicted_indices]
self._cache.set_fetch_groups(new_fetch_groups)
self._index.rebuild(new_fetch_groups)
_LOGGER_DETAILS.debug(
"GC evicted %d fetch groups (home %s): %d intervals remaining",
len(evicted_indices),
self._home_id,
self._cache.count_total_intervals(),
)
return True
def _remove_empty_groups(self, fetch_groups: list[dict[str, Any]]) -> int:
"""
Remove fetch groups with no intervals.
After dead interval cleanup, some groups may be completely empty.
These should be removed to prevent memory accumulation.
Note: This replaces the cache's group list via set_fetch_groups() and
rebuilds the index to maintain consistency.
Args:
fetch_groups: List of fetch groups to filter (the passed list is not
mutated; the cache is updated with the filtered copy).
Returns:
Number of empty groups removed.
"""
# Find non-empty groups
non_empty_groups = [group for group in fetch_groups if group["intervals"]]
removed_count = len(fetch_groups) - len(non_empty_groups)
if removed_count > 0:
# Update cache with filtered list
self._cache.set_fetch_groups(non_empty_groups)
# Rebuild index since group indices changed
self._index.rebuild(non_empty_groups)
return removed_count
def _cleanup_dead_intervals(self, fetch_groups: list[dict[str, Any]]) -> int:
"""
Remove dead intervals from all fetch groups.
Dead intervals are no longer referenced by the index (they were touched
and moved to a newer fetch group).
Args:
fetch_groups: List of fetch groups to clean.
Returns:
Total number of dead intervals removed.
"""
total_dead = 0
for group_idx, group in enumerate(fetch_groups):
old_intervals = group["intervals"]
if not old_intervals:
continue
# Find living intervals (still in index at correct position)
living_intervals = []
for interval_idx, interval in enumerate(old_intervals):
starts_at_normalized = _normalize_starts_at(interval["startsAt"])
index_entry = self._index.get(starts_at_normalized)
if index_entry is not None:
# Check if index points to THIS position
if index_entry["fetch_group_index"] == group_idx and index_entry["interval_index"] == interval_idx:
living_intervals.append(interval)
else:
# Dead: index points elsewhere
total_dead += 1
else:
# Dead: not in index
total_dead += 1
# Replace with cleaned list if any dead intervals found
if len(living_intervals) < len(old_intervals):
group["intervals"] = living_intervals
dead_count = len(old_intervals) - len(living_intervals)
_LOGGER_DETAILS.debug(
"GC cleaned %d dead intervals from fetch group %d (home %s)",
dead_count,
group_idx,
self._home_id,
)
return total_dead
def _evict_old_groups(
self,
fetch_groups: list[dict[str, Any]],
total_intervals: int,
) -> set[int]:
"""
Determine which fetch groups to evict to stay under MAX_CACHE_SIZE.
Only evicts groups without protected intervals.
Groups evicted oldest-first (by fetched_at).
Args:
fetch_groups: List of fetch groups.
total_intervals: Total interval count.
Returns:
Set of fetch group indices to evict.
"""
start_protected_iso, end_protected_iso = self._cache.get_protected_range()
_LOGGER_DETAILS.debug(
"Protected range: %s to %s",
start_protected_iso[:10],
end_protected_iso[:10],
)
# Classify: protected vs evictable
evictable_groups = []
for idx, group in enumerate(fetch_groups):
has_protected = any(self._cache.is_interval_protected(interval) for interval in group["intervals"])
if not has_protected:
evictable_groups.append((idx, group))
# Sort by fetched_at (oldest first)
evictable_groups.sort(key=lambda x: x[1]["fetched_at"])
_LOGGER_DETAILS.debug(
"GC: %d protected groups, %d evictable groups",
len(fetch_groups) - len(evictable_groups),
len(evictable_groups),
)
# Evict until under limit
evicted_indices = set()
remaining = total_intervals
for idx, group in evictable_groups:
if remaining <= MAX_CACHE_SIZE:
break
group_count = len(group["intervals"])
evicted_indices.add(idx)
remaining -= group_count
_LOGGER_DETAILS.debug(
"GC evicting group %d (fetched %s): %d intervals, %d remaining",
idx,
group["fetched_at"].isoformat(),
group_count,
remaining,
)
if not evicted_indices:
_LOGGER.warning(
"GC cannot evict any groups (home %s): all %d intervals are protected",
self._home_id,
total_intervals,
)
return evicted_indices
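The eviction policy above can be sketched as a pure planning function: protected groups are excluded, the rest are dropped oldest-first until the total fits under the limit. Names here are hypothetical, not the class's real API:

```python
def plan_eviction(groups, limit, is_protected):
    """groups: list of {"fetched_at": ..., "intervals": [...]} dicts.

    Returns the set of group indices to evict (oldest unprotected first)
    so the remaining interval count drops to <= limit if possible.
    """
    total = sum(len(g["intervals"]) for g in groups)
    # A group is protected if ANY of its intervals is protected.
    evictable = [
        (i, g) for i, g in enumerate(groups)
        if not any(is_protected(iv) for iv in g["intervals"])
    ]
    evictable.sort(key=lambda x: x[1]["fetched_at"])  # oldest first
    evicted = set()
    for i, g in evictable:
        if total <= limit:
            break
        evicted.add(i)
        total -= len(g["intervals"])
    return evicted
```

Because eviction operates on whole fetch groups, a single protected interval pins its entire group in memory; that is the trade-off for group-level bookkeeping.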


@@ -0,0 +1,173 @@
"""Timestamp index for O(1) interval lookups."""
from __future__ import annotations
import logging
from typing import Any
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
class TibberPricesIntervalPoolTimestampIndex:
"""
Fast O(1) timestamp-based interval lookup.
Maps normalized ISO timestamp strings to fetch group + interval indices.
Structure:
{
"2025-11-25T00:00:00": {
"fetch_group_index": 0, # Index in fetch groups list
"interval_index": 2 # Index within that group's intervals
},
...
}
Normalization:
Timestamps are normalized to 19 characters (YYYY-MM-DDTHH:MM:SS)
by truncating microseconds and timezone info for fast string comparison.
"""
def __init__(self) -> None:
"""Initialize empty timestamp index."""
self._index: dict[str, dict[str, int]] = {}
def add(
self,
interval: dict[str, Any],
fetch_group_index: int,
interval_index: int,
) -> None:
"""
Add interval to index.
Args:
interval: Interval dict with "startsAt" ISO timestamp.
fetch_group_index: Index of fetch group containing this interval.
interval_index: Index within that fetch group's intervals list.
"""
starts_at_normalized = self._normalize_timestamp(interval["startsAt"])
self._index[starts_at_normalized] = {
"fetch_group_index": fetch_group_index,
"interval_index": interval_index,
}
def get(self, timestamp: str) -> dict[str, int] | None:
"""
Look up interval location by timestamp.
Args:
timestamp: ISO timestamp string (will be normalized).
Returns:
Dict with fetch_group_index and interval_index, or None if not found.
"""
starts_at_normalized = self._normalize_timestamp(timestamp)
return self._index.get(starts_at_normalized)
def contains(self, timestamp: str) -> bool:
"""
Check if timestamp exists in index.
Args:
timestamp: ISO timestamp string (will be normalized).
Returns:
True if timestamp is in index.
"""
starts_at_normalized = self._normalize_timestamp(timestamp)
return starts_at_normalized in self._index
def remove(self, timestamp: str) -> None:
"""
Remove timestamp from index.
Args:
timestamp: ISO timestamp string (will be normalized).
"""
starts_at_normalized = self._normalize_timestamp(timestamp)
self._index.pop(starts_at_normalized, None)
def update_batch(
self,
updates: list[tuple[str, int, int]],
) -> None:
"""
Update multiple index entries efficiently in a single operation.
More efficient than calling remove() + add() for each entry,
as it avoids repeated dict operations and normalization.
Args:
updates: List of (timestamp, fetch_group_index, interval_index) tuples.
Timestamps will be normalized automatically.
"""
for timestamp, fetch_group_index, interval_index in updates:
starts_at_normalized = self._normalize_timestamp(timestamp)
self._index[starts_at_normalized] = {
"fetch_group_index": fetch_group_index,
"interval_index": interval_index,
}
def clear(self) -> None:
"""Clear entire index."""
self._index.clear()
def rebuild(self, fetch_groups: list[dict[str, Any]]) -> None:
"""
Rebuild index from fetch groups.
Used after GC operations that modify fetch group structure.
Args:
fetch_groups: List of fetch group dicts.
"""
self._index.clear()
for fetch_group_idx, group in enumerate(fetch_groups):
for interval_idx, interval in enumerate(group["intervals"]):
starts_at_normalized = self._normalize_timestamp(interval["startsAt"])
self._index[starts_at_normalized] = {
"fetch_group_index": fetch_group_idx,
"interval_index": interval_idx,
}
_LOGGER_DETAILS.debug(
"Rebuilt index: %d timestamps indexed",
len(self._index),
)
def get_raw_index(self) -> dict[str, dict[str, int]]:
"""Get raw index dict (for serialization)."""
return self._index
def count(self) -> int:
"""Count total indexed timestamps."""
return len(self._index)
@staticmethod
def _normalize_timestamp(timestamp: str) -> str:
"""
Normalize ISO timestamp for indexing.
Truncates to 19 characters (YYYY-MM-DDTHH:MM:SS) to remove
microseconds and timezone info for consistent string comparison.
Args:
timestamp: Full ISO timestamp string.
Returns:
Normalized timestamp (19 chars).
Example:
"2025-11-25T00:00:00.000+01:00" → "2025-11-25T00:00:00"
"""
return timestamp[:19]
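A quick illustration of why the 19-char truncation works as an index key: timestamps that differ only in microseconds or UTC offset collapse to the same key, giving O(1) dict lookups:

```python
# Sketch of the normalization + index structure (plain dicts, no class).
def normalize(ts: str) -> str:
    # Keep YYYY-MM-DDTHH:MM:SS, drop microseconds and timezone suffix.
    return ts[:19]

index = {}
for i, iv in enumerate([
    {"startsAt": "2025-11-25T00:00:00.000+01:00"},
    {"startsAt": "2025-11-25T00:15:00+01:00"},
]):
    index[normalize(iv["startsAt"])] = {"fetch_group_index": 0, "interval_index": i}
```

The caveat is that keys are naive *local* timestamps, which is exactly why `_get_cached_intervals()` must iterate in naive local time as well.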


@@ -0,0 +1,830 @@
"""Interval pool manager - main coordinator for interval caching."""
from __future__ import annotations
import asyncio
import contextlib
import logging
from datetime import datetime, timedelta
from typing import TYPE_CHECKING, Any
from zoneinfo import ZoneInfo
from custom_components.tibber_prices.api.exceptions import TibberPricesApiClientError
from homeassistant.util import dt as dt_utils
from .cache import TibberPricesIntervalPoolFetchGroupCache
from .fetcher import TibberPricesIntervalPoolFetcher
from .garbage_collector import MAX_CACHE_SIZE, TibberPricesIntervalPoolGarbageCollector
from .index import TibberPricesIntervalPoolTimestampIndex
from .storage import async_save_pool_state
if TYPE_CHECKING:
from custom_components.tibber_prices.api.client import TibberPricesApiClient
from custom_components.tibber_prices.coordinator.time_service import (
TibberPricesTimeService,
)
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Interval lengths in minutes
INTERVAL_HOURLY = 60
INTERVAL_QUARTER_HOURLY = 15
# Debounce delay for auto-save (seconds)
DEBOUNCE_DELAY_SECONDS = 3.0
def _normalize_starts_at(starts_at: datetime | str) -> str:
"""Normalize startsAt to consistent format (YYYY-MM-DDTHH:MM:SS)."""
if isinstance(starts_at, datetime):
return starts_at.strftime("%Y-%m-%dT%H:%M:%S")
return starts_at[:19]
class TibberPricesIntervalPool:
"""
High-performance interval cache manager for a single Tibber home.
Coordinates all interval pool components:
- TibberPricesIntervalPoolFetchGroupCache: Stores fetch groups and manages protected ranges
- TibberPricesIntervalPoolTimestampIndex: Provides O(1) timestamp lookups
- TibberPricesIntervalPoolGarbageCollector: Evicts old fetch groups when cache exceeds limits
- TibberPricesIntervalPoolFetcher: Detects gaps and fetches missing intervals from API
Architecture:
- Each manager handles exactly ONE home (1:1 with config entry)
- home_id is immutable after initialization
- All operations are thread-safe via asyncio locks
Features:
- Fetch-time based eviction (oldest fetch groups removed first)
- Protected date range (day-before-yesterday to tomorrow never evicted)
- Fast O(1) lookups by timestamp
- Automatic gap detection and API fetching
- Debounced auto-save to prevent excessive I/O
Example:
manager = TibberPricesIntervalPool(home_id="abc123", api=client, hass=hass, entry_id=entry.entry_id)
intervals = await manager.get_intervals(
api_client=client,
user_data=data,
start_time=datetime(...),
end_time=datetime(...),
)
"""
def __init__(
self,
*,
home_id: str,
api: TibberPricesApiClient,
hass: Any | None = None,
entry_id: str | None = None,
time_service: TibberPricesTimeService | None = None,
) -> None:
"""
Initialize interval pool manager.
Args:
home_id: Tibber home ID (required, immutable).
api: API client for fetching intervals.
hass: HomeAssistant instance for auto-save (optional).
entry_id: Config entry ID for auto-save (optional).
time_service: TimeService for time-travel support (optional).
If None, uses real time (dt_utils.now()).
"""
self._home_id = home_id
self._time_service = time_service
# Initialize components with dependency injection
self._cache = TibberPricesIntervalPoolFetchGroupCache(time_service=time_service)
self._index = TibberPricesIntervalPoolTimestampIndex()
self._gc = TibberPricesIntervalPoolGarbageCollector(self._cache, self._index, home_id)
self._fetcher = TibberPricesIntervalPoolFetcher(api, self._cache, self._index, home_id)
# Auto-save support
self._hass = hass
self._entry_id = entry_id
self._background_tasks: set[asyncio.Task] = set()
self._save_debounce_task: asyncio.Task | None = None
self._save_lock = asyncio.Lock()
async def get_intervals(
self,
api_client: TibberPricesApiClient,
user_data: dict[str, Any],
start_time: datetime,
end_time: datetime,
) -> tuple[list[dict[str, Any]], bool]:
"""
Get price intervals for time range (cached + fetch missing).
Main entry point for retrieving intervals. Coordinates:
1. Check cache for existing intervals
2. Detect missing time ranges
3. Fetch missing ranges from API
4. Add new intervals to cache (may trigger GC)
5. Return complete interval list
User receives ALL requested intervals even if cache exceeds limits.
Cache only keeps the most recent intervals (FIFO eviction).
Args:
api_client: TibberPricesApiClient instance for API calls.
user_data: User data dict containing home metadata.
start_time: Start of range (inclusive, timezone-aware).
end_time: End of range (exclusive, timezone-aware).
Returns:
Tuple of (intervals, api_called):
- intervals: List of price interval dicts, sorted by startsAt.
Contains ALL intervals in requested range (cached + fetched).
- api_called: True if API was called to fetch missing data, False if all from cache.
Raises:
TibberPricesApiClientError: If API calls fail or validation errors.
"""
# Validate inputs
if not user_data:
msg = "User data required for timezone-aware price fetching"
raise TibberPricesApiClientError(msg)
if start_time >= end_time:
msg = f"Invalid time range: start_time ({start_time}) must be before end_time ({end_time})"
raise TibberPricesApiClientError(msg)
# Convert to ISO strings for cache operations
start_time_iso = start_time.isoformat()
end_time_iso = end_time.isoformat()
_LOGGER_DETAILS.debug(
"Interval pool request for home %s: range %s to %s",
self._home_id,
start_time_iso,
end_time_iso,
)
# Get cached intervals using index
cached_intervals = self._get_cached_intervals(start_time_iso, end_time_iso)
# Check coverage - find ranges not in cache
missing_ranges = self._fetcher.check_coverage(cached_intervals, start_time_iso, end_time_iso)
if missing_ranges:
_LOGGER_DETAILS.debug(
"Coverage check for home %s: %d range(s) missing - will fetch from API",
self._home_id,
len(missing_ranges),
)
else:
_LOGGER_DETAILS.debug(
"Coverage check for home %s: full coverage in cache - no API calls needed",
self._home_id,
)
# Fetch missing ranges from API
if missing_ranges:
fetch_time_iso = dt_utils.now().isoformat()
# Fetch with callback for immediate caching
await self._fetcher.fetch_missing_ranges(
api_client=api_client,
user_data=user_data,
missing_ranges=missing_ranges,
on_intervals_fetched=lambda intervals, _: self._add_intervals(intervals, fetch_time_iso),
)
# After caching all API responses, read from cache again to get final result
# This ensures we return exactly what user requested, filtering out extra intervals
final_result = self._get_cached_intervals(start_time_iso, end_time_iso)
# Track if API was called (True if any missing ranges were fetched)
api_called = len(missing_ranges) > 0
_LOGGER_DETAILS.debug(
"Pool returning %d intervals for home %s (from cache: %d, fetched from API: %d ranges, api_called=%s)",
len(final_result),
self._home_id,
len(cached_intervals),
len(missing_ranges),
api_called,
)
return final_result, api_called
async def get_sensor_data(
self,
api_client: TibberPricesApiClient,
user_data: dict[str, Any],
home_timezone: str | None = None,
*,
include_tomorrow: bool = True,
) -> tuple[list[dict[str, Any]], bool]:
"""
Get price intervals for sensor data (day-before-yesterday to end-of-tomorrow).
Convenience method for coordinator/sensors that need the standard 4-day window:
- Day before yesterday (for trailing 24h averages at midnight)
- Yesterday (for trailing 24h averages)
- Today (current prices)
- Tomorrow (if available in cache)
IMPORTANT - Two distinct behaviors:
1. API FETCH: Controlled by include_tomorrow flag
- include_tomorrow=False → Only fetch up to end of today (prevents API spam before 13:00)
- include_tomorrow=True → Fetch including tomorrow data
2. RETURN DATA: Always returns full protected range (including tomorrow if cached)
- This ensures cached tomorrow data is used even if include_tomorrow=False
The separation prevents the following bug:
- If include_tomorrow affected both fetch AND return, cached tomorrow data
would be lost when include_tomorrow=False, causing infinite refresh loops.
Args:
api_client: TibberPricesApiClient instance for API calls.
user_data: User data dict containing home metadata.
home_timezone: Optional timezone string (e.g., "Europe/Berlin").
include_tomorrow: If True, fetch tomorrow's data from API. If False,
only fetch up to end of today. Default True.
DOES NOT affect returned data - always returns full range.
Returns:
Tuple of (intervals, api_called):
- intervals: List of price interval dicts for the 4-day window (including any cached
tomorrow data), sorted by startsAt.
- api_called: True if API was called to fetch missing data, False if all from cache.
"""
# Determine timezone
tz_str = home_timezone
if not tz_str:
tz_str = self._extract_timezone_from_user_data(user_data)
# Calculate range in home's timezone
tz = ZoneInfo(tz_str) if tz_str else None
now = self._time_service.now() if self._time_service else dt_utils.now()
now_local = now.astimezone(tz) if tz else now
# Day before yesterday 00:00 (start) - same for both fetch and return
day_before_yesterday = (now_local - timedelta(days=2)).replace(hour=0, minute=0, second=0, microsecond=0)
# End of tomorrow (full protected range) - used for RETURN data
end_of_tomorrow = (now_local + timedelta(days=2)).replace(hour=0, minute=0, second=0, microsecond=0)
# API fetch range depends on include_tomorrow flag
if include_tomorrow:
fetch_end_time = end_of_tomorrow
fetch_desc = "end-of-tomorrow"
else:
# Only fetch up to end of today (prevents API spam before 13:00)
fetch_end_time = (now_local + timedelta(days=1)).replace(hour=0, minute=0, second=0, microsecond=0)
fetch_desc = "end-of-today"
_LOGGER.debug(
"Sensor data request for home %s: fetch %s to %s (%s), return up to %s",
self._home_id,
day_before_yesterday.isoformat(),
fetch_end_time.isoformat(),
fetch_desc,
end_of_tomorrow.isoformat(),
)
# Fetch data (may be partial if include_tomorrow=False)
_intervals, api_called = await self.get_intervals(
api_client=api_client,
user_data=user_data,
start_time=day_before_yesterday,
end_time=fetch_end_time,
)
# Return FULL protected range (including any cached tomorrow data)
# This ensures cached tomorrow data is available even when include_tomorrow=False
final_intervals = self._get_cached_intervals(
day_before_yesterday.isoformat(),
end_of_tomorrow.isoformat(),
)
return final_intervals, api_called
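The fetch/return window arithmetic above reduces to three midnight-aligned boundaries; a minimal sketch with a hypothetical helper name:

```python
from datetime import datetime, timedelta

def sensor_window(now_local: datetime, include_tomorrow: bool):
    """Compute (start, fetch_end, return_end) for the 4-day sensor window.

    start: day-before-yesterday 00:00; return_end: end of tomorrow.
    fetch_end follows include_tomorrow; return_end never does.
    """
    midnight = now_local.replace(hour=0, minute=0, second=0, microsecond=0)
    start = midnight - timedelta(days=2)
    return_end = midnight + timedelta(days=2)
    # Fetch range is narrower when tomorrow's prices are not yet expected.
    fetch_end = return_end if include_tomorrow else midnight + timedelta(days=1)
    return start, fetch_end, return_end
```

Keeping `return_end` fixed regardless of the flag is what preserves already-cached tomorrow data when `include_tomorrow=False`.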
def get_pool_stats(self) -> dict[str, Any]:
"""
Get statistics about the interval pool.
Returns comprehensive statistics for diagnostic sensors, separated into:
- Sensor intervals (protected range: day-before-yesterday to tomorrow)
- Cache statistics (entire pool including service-requested data)
Protected Range:
The protected range covers 4 days at 15-min resolution = 384 intervals.
These intervals are never evicted by garbage collection.
Cache Fill Level:
Shows how full the cache is relative to MAX_CACHE_SIZE (960).
100% is not bad - just means we're using the available space.
GC will evict oldest non-protected intervals when limit is reached.
Returns:
Dict with sensor intervals, cache stats, and timestamps.
"""
fetch_groups = self._cache.get_fetch_groups()
# === Sensor Intervals (Protected Range) ===
sensor_stats = self._get_sensor_interval_stats()
# === Cache Statistics (Entire Pool) ===
cache_total = self._index.count()
cache_limit = MAX_CACHE_SIZE
cache_fill_percent = round((cache_total / cache_limit) * 100, 1) if cache_limit > 0 else 0
cache_extra = max(0, cache_total - sensor_stats["count"]) # Intervals outside protected range
# === Timestamps ===
# Last sensor fetch (for protected range data)
last_sensor_fetch: str | None = None
oldest_interval: str | None = None
newest_interval: str | None = None
if fetch_groups:
# Find newest fetch group (most recent API call)
newest_group = max(fetch_groups, key=lambda g: g["fetched_at"])
last_sensor_fetch = newest_group["fetched_at"].isoformat()
# Find oldest and newest intervals across all fetch groups
all_timestamps = list(self._index.get_raw_index().keys())
if all_timestamps:
oldest_interval = min(all_timestamps)
newest_interval = max(all_timestamps)
return {
# Sensor intervals (protected range)
"sensor_intervals_count": sensor_stats["count"],
"sensor_intervals_expected": sensor_stats["expected"],
"sensor_intervals_has_gaps": sensor_stats["has_gaps"],
# Cache statistics
"cache_intervals_total": cache_total,
"cache_intervals_limit": cache_limit,
"cache_fill_percent": cache_fill_percent,
"cache_intervals_extra": cache_extra,
# Timestamps
"last_sensor_fetch": last_sensor_fetch,
"cache_oldest_interval": oldest_interval,
"cache_newest_interval": newest_interval,
# Fetch groups (API calls)
"fetch_groups_count": len(fetch_groups),
}
def _get_sensor_interval_stats(self) -> dict[str, Any]:
"""
Get statistics for sensor intervals (protected range).
Protected range: day-before-yesterday 00:00 to day-after-tomorrow 00:00.
Expected: 4 days * 24 hours * 4 intervals = 384 intervals.
Returns:
Dict with count, expected, and has_gaps.
"""
start_iso, end_iso = self._cache.get_protected_range()
start_dt = datetime.fromisoformat(start_iso)
end_dt = datetime.fromisoformat(end_iso)
# Count expected intervals (15-min resolution)
expected_count = int((end_dt - start_dt).total_seconds() / (15 * 60))
# Count actual intervals in range
actual_count = 0
current_dt = start_dt
while current_dt < end_dt:
current_key = current_dt.isoformat()[:19]
if self._index.contains(current_key):
actual_count += 1
current_dt += timedelta(minutes=15)
return {
"count": actual_count,
"expected": expected_count,
"has_gaps": actual_count < expected_count,
}
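The expected-count arithmetic is simple enough to verify directly: a 4-day protected range at 15-minute resolution yields 4 * 24 * 4 intervals:

```python
from datetime import datetime, timedelta

# 4-day window, 15-min resolution (dates here are arbitrary examples).
start = datetime(2025, 11, 23)
end = start + timedelta(days=4)
expected = int((end - start).total_seconds() / (15 * 60))
```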
def _has_gaps_in_protected_range(self) -> bool:
"""
Check if there are gaps in the protected date range.
Delegates to _get_sensor_interval_stats() for consistency.
Returns:
True if any gaps exist, False if protected range is complete.
"""
return self._get_sensor_interval_stats()["has_gaps"]
def _extract_timezone_from_user_data(self, user_data: dict[str, Any]) -> str | None:
"""Extract timezone for this home from user_data."""
if not user_data:
return None
viewer = user_data.get("viewer", {})
homes = viewer.get("homes", [])
for home in homes:
if home.get("id") == self._home_id:
return home.get("timeZone")
return None
def _get_cached_intervals(
self,
start_time_iso: str,
end_time_iso: str,
) -> list[dict[str, Any]]:
"""
Get cached intervals for time range using timestamp index.
Uses timestamp_index for O(1) lookups per timestamp.
IMPORTANT: Returns shallow copies of interval dicts to prevent external
mutations (e.g., by parse_all_timestamps()) from affecting cached data.
The Pool cache must remain immutable to ensure consistent behavior.
Args:
start_time_iso: ISO timestamp string (inclusive).
end_time_iso: ISO timestamp string (exclusive).
Returns:
List of cached interval dicts in time range (may be empty or incomplete).
Sorted by startsAt timestamp. Each dict is a shallow copy.
"""
# Parse query range once
start_time_dt = datetime.fromisoformat(start_time_iso)
end_time_dt = datetime.fromisoformat(end_time_iso)
# CRITICAL: Use NAIVE local timestamps for iteration.
#
# Index keys are naive local timestamps (timezone stripped via [:19]).
# When start and end span a DST transition, they have different UTC offsets
# (e.g., start=+01:00 CET, end=+02:00 CEST). Using fixed-offset datetimes
# from fromisoformat() causes the loop to compare UTC values for the end
# boundary, ending 1 hour early on spring-forward days (or 1 hour late on
# fall-back days).
#
# By iterating in naive local time, we match the index key format exactly
# and the end boundary comparison works correctly regardless of DST.
current_naive = start_time_dt.replace(tzinfo=None)
end_naive = end_time_dt.replace(tzinfo=None)
# Use index to find intervals: iterate through expected timestamps
result = []
# Determine interval step (15 min post-2025-10-01, 60 min pre)
resolution_change_naive = datetime(2025, 10, 1) # noqa: DTZ001
interval_minutes = INTERVAL_QUARTER_HOURLY if current_naive >= resolution_change_naive else INTERVAL_HOURLY
while current_naive < end_naive:
# Check if this timestamp exists in index (O(1) lookup)
current_dt_key = current_naive.isoformat()[:19]
location = self._index.get(current_dt_key)
if location is not None:
# Get interval from fetch group
fetch_groups = self._cache.get_fetch_groups()
fetch_group = fetch_groups[location["fetch_group_index"]]
interval = fetch_group["intervals"][location["interval_index"]]
# CRITICAL: Return shallow copy to prevent external mutations
# (e.g., parse_all_timestamps() converts startsAt to datetime in-place)
result.append(dict(interval))
# Move to next expected interval
current_naive += timedelta(minutes=interval_minutes)
# Handle resolution change boundary
if interval_minutes == INTERVAL_HOURLY and current_naive >= resolution_change_naive:
interval_minutes = INTERVAL_QUARTER_HOURLY
_LOGGER_DETAILS.debug(
"Retrieved %d intervals from cache for home %s (range %s to %s)",
len(result),
self._home_id,
start_time_iso,
end_time_iso,
)
return result
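The DST pitfall the comment describes can be demonstrated without zoneinfo, using the fixed-offset datetimes that `fromisoformat()` produces. On a spring-forward day the aware range spans only 23 real hours, so stepping aware datetimes probes 4 fewer 15-minute keys than stepping naive local timestamps (which match the index key format):

```python
from datetime import datetime, timedelta

# Spring-forward day boundaries carry different fixed offsets.
start = datetime.fromisoformat("2026-03-29T00:00:00+01:00")
end = datetime.fromisoformat("2026-03-30T00:00:00+02:00")

def count_steps(cur: datetime, stop: datetime) -> int:
    """Count 15-min steps from cur (inclusive) up to stop (exclusive)."""
    n = 0
    while cur < stop:
        n += 1
        cur += timedelta(minutes=15)
    return n

aware_steps = count_steps(start, end)                                  # compares in UTC: 23 h
naive_steps = count_steps(start.replace(tzinfo=None), end.replace(tzinfo=None))  # wall clock: 24 h
```

With naive iteration the four keys in the skipped 02:00-02:45 hour simply miss the index (those local times never exist), while the loop still reaches 23:45 as intended.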
def _add_intervals(
self,
intervals: list[dict[str, Any]],
fetch_time_iso: str,
) -> None:
"""
Add intervals as new fetch group to cache with GC.
Strategy:
1. Filter out duplicates (intervals already in cache)
2. Handle "touch" (move cached intervals to new fetch group)
3. Add new fetch group to cache
4. Update timestamp index
5. Run GC if needed
6. Schedule debounced auto-save
Args:
intervals: List of interval dicts from API.
fetch_time_iso: ISO timestamp string when intervals were fetched.
"""
if not intervals:
return
fetch_time_dt = datetime.fromisoformat(fetch_time_iso)
# Classify intervals: new vs already cached
new_intervals = []
intervals_to_touch = []
for interval in intervals:
starts_at_normalized = _normalize_starts_at(interval["startsAt"])
if not self._index.contains(starts_at_normalized):
new_intervals.append(interval)
else:
intervals_to_touch.append((starts_at_normalized, interval))
_LOGGER_DETAILS.debug(
"Interval %s already cached for home %s, will touch (update fetch time)",
interval["startsAt"],
self._home_id,
)
# Handle touched intervals: move to new fetch group
if intervals_to_touch:
self._touch_intervals(intervals_to_touch, fetch_time_dt)
if not new_intervals:
if intervals_to_touch:
_LOGGER_DETAILS.debug(
"All %d intervals already cached for home %s (touched only)",
len(intervals),
self._home_id,
)
return
# Sort new intervals by startsAt
new_intervals.sort(key=lambda x: x["startsAt"])
# Add new fetch group to cache
fetch_group_index = self._cache.add_fetch_group(new_intervals, fetch_time_dt)
# Update timestamp index for all new intervals
for interval_index, interval in enumerate(new_intervals):
self._index.add(interval, fetch_group_index, interval_index)
_LOGGER_DETAILS.debug(
"Added fetch group %d to home %s cache: %d new intervals (fetched at %s)",
fetch_group_index,
self._home_id,
len(new_intervals),
fetch_time_iso,
)
# Run GC to evict old fetch groups if needed
gc_changed_data = self._gc.run_gc()
# Schedule debounced auto-save if data changed
data_changed = len(new_intervals) > 0 or len(intervals_to_touch) > 0 or gc_changed_data
if data_changed and self._hass is not None and self._entry_id is not None:
self._schedule_debounced_save()
def _touch_intervals(
self,
intervals_to_touch: list[tuple[str, dict[str, Any]]],
fetch_time_dt: datetime,
) -> None:
"""
Move cached intervals to new fetch group (update fetch time).
Creates a new fetch group containing references to existing intervals.
Updates the index to point to the new fetch group.
Args:
intervals_to_touch: List of (normalized_timestamp, interval_dict) tuples.
fetch_time_dt: Datetime when intervals were fetched.
"""
fetch_groups = self._cache.get_fetch_groups()
# Create touch fetch group with existing interval references
touch_intervals = []
touched_keys = []
for starts_at_normalized, _interval in intervals_to_touch:
# Get existing interval from old fetch group
location = self._index.get(starts_at_normalized)
if location is None:
continue # Should not happen, but be defensive
old_group = fetch_groups[location["fetch_group_index"]]
existing_interval = old_group["intervals"][location["interval_index"]]
touch_intervals.append(existing_interval)
touched_keys.append(starts_at_normalized)
# Add touch group to cache
touch_group_index = self._cache.add_fetch_group(touch_intervals, fetch_time_dt)
# Update index to point to new fetch group using batch operation
# (more efficient than individual remove+add calls).
# Enumerate touched_keys rather than intervals_to_touch so a defensively
# skipped entry cannot shift the interval indices out of alignment.
index_updates = [
(starts_at_normalized, touch_group_index, interval_index)
for interval_index, starts_at_normalized in enumerate(touched_keys)
]
self._index.update_batch(index_updates)
_LOGGER.debug(
"Touched %d cached intervals for home %s (moved to fetch group %d, fetched at %s)",
len(intervals_to_touch),
self._home_id,
touch_group_index,
fetch_time_dt.isoformat(),
)
def _schedule_debounced_save(self) -> None:
"""
Schedule debounced save with configurable delay.
If a save is already scheduled, cancels the existing timer and starts a new one.
This prevents multiple saves during rapid successive changes.
"""
# Cancel existing debounce timer if running
if self._save_debounce_task is not None and not self._save_debounce_task.done():
self._save_debounce_task.cancel()
_LOGGER.debug("Cancelled pending auto-save (new changes detected, resetting timer)")
# Schedule new debounced save
task = asyncio.create_task(
self._debounced_save_worker(),
name=f"interval_pool_debounce_{self._entry_id}",
)
self._save_debounce_task = task
self._background_tasks.add(task)
task.add_done_callback(self._background_tasks.discard)
async def _debounced_save_worker(self) -> None:
"""Debounce worker: waits configured delay, then saves if not cancelled."""
try:
await asyncio.sleep(DEBOUNCE_DELAY_SECONDS)
await self._auto_save_pool_state()
except asyncio.CancelledError:
_LOGGER.debug("Auto-save timer cancelled (expected - new changes arrived)")
raise
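The debounce pattern used by `_schedule_debounced_save()` and `_debounced_save_worker()` can be reduced to a small standalone sketch (illustrative names, not the class's API): every new change cancels the pending timer task and starts a fresh one, so a burst of changes produces exactly one save.

```python
import asyncio


class Debouncer:
    """Minimal debounce sketch: last call wins after a quiet period."""

    def __init__(self, delay: float, action) -> None:
        self._delay = delay
        self._action = action
        self._task: asyncio.Task | None = None

    def schedule(self) -> None:
        # Cancel the pending timer on every new change, then restart it
        if self._task is not None and not self._task.done():
            self._task.cancel()
        self._task = asyncio.create_task(self._worker())

    async def _worker(self) -> None:
        try:
            await asyncio.sleep(self._delay)
            await self._action()
        except asyncio.CancelledError:
            raise  # expected: a newer change superseded this timer


async def main() -> list[str]:
    saved: list[str] = []

    async def save() -> None:
        saved.append("saved")

    debouncer = Debouncer(0.05, save)
    for _ in range(5):  # rapid burst of changes
        debouncer.schedule()
        await asyncio.sleep(0.01)
    await asyncio.sleep(0.1)  # let the final timer fire
    return saved
```

Running `asyncio.run(main())` yields a single save despite five `schedule()` calls, which is why the integration re-raises `CancelledError` in the worker: cancellation is the normal, expected path.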
async def async_shutdown(self) -> None:
"""
Clean shutdown - cancel pending background tasks.
Should be called when the config entry is unloaded to prevent
orphaned tasks and ensure clean resource cleanup.
"""
_LOGGER.debug("Shutting down interval pool for home %s", self._home_id)
# Cancel debounce task if running
if self._save_debounce_task is not None and not self._save_debounce_task.done():
self._save_debounce_task.cancel()
with contextlib.suppress(asyncio.CancelledError):
await self._save_debounce_task
_LOGGER.debug("Cancelled pending auto-save task")
# Cancel any other background tasks
if self._background_tasks:
for task in list(self._background_tasks):
if not task.done():
task.cancel()
# Wait for all tasks to complete cancellation
if self._background_tasks:
await asyncio.gather(*self._background_tasks, return_exceptions=True)
_LOGGER.debug("Cancelled %d background tasks", len(self._background_tasks))
self._background_tasks.clear()
_LOGGER.debug("Interval pool shutdown complete for home %s", self._home_id)
async def _auto_save_pool_state(self) -> None:
"""Auto-save pool state to storage with lock protection."""
if self._hass is None or self._entry_id is None:
return
async with self._save_lock:
try:
pool_state = self.to_dict()
await async_save_pool_state(self._hass, self._entry_id, pool_state)
_LOGGER.debug("Auto-saved interval pool for entry %s", self._entry_id)
except Exception:
_LOGGER.exception("Failed to auto-save interval pool for entry %s", self._entry_id)
def to_dict(self) -> dict[str, Any]:
"""
Serialize interval pool state for storage.
Filters out dead intervals (no longer referenced by index).
Returns:
Dictionary containing serialized pool state (only living intervals).
"""
fetch_groups = self._cache.get_fetch_groups()
# Serialize fetch groups (only living intervals)
serialized_fetch_groups = []
for group_idx, fetch_group in enumerate(fetch_groups):
living_intervals = []
for interval_idx, interval in enumerate(fetch_group["intervals"]):
starts_at_normalized = _normalize_starts_at(interval["startsAt"])
# Check if interval is still referenced in index
location = self._index.get(starts_at_normalized)
# Only keep if index points to THIS position in THIS group
if (
location is not None
and location["fetch_group_index"] == group_idx
and location["interval_index"] == interval_idx
):
living_intervals.append(interval)
# Only serialize groups with living intervals
if living_intervals:
serialized_fetch_groups.append(
{
"fetched_at": fetch_group["fetched_at"].isoformat(),
"intervals": living_intervals,
}
)
return {
"version": 1,
"home_id": self._home_id,
"fetch_groups": serialized_fetch_groups,
}
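The liveness filter in `to_dict()` can be illustrated with simplified data shapes (the real interval dicts carry more fields): an interval survives serialization only if the index still points at its exact `(group, slot)` position. A "touched" interval lives in a newer group, so its stale copy in the older group is dropped.

```python
# Two fetch groups; "B" was touched, so it also appears in the newer group.
groups = [
    {"fetched_at": "2026-03-28T13:00:00", "intervals": [{"startsAt": "A"}, {"startsAt": "B"}]},
    {"fetched_at": "2026-03-29T13:00:00", "intervals": [{"startsAt": "B"}]},
]
# Index maps startsAt -> (fetch_group_index, interval_index); "B" points
# at the newer group, leaving its old copy dead.
index = {"A": (0, 0), "B": (1, 0)}

serialized = []
for g_idx, group in enumerate(groups):
    living = [
        interval
        for i_idx, interval in enumerate(group["intervals"])
        if index.get(interval["startsAt"]) == (g_idx, i_idx)
    ]
    if living:  # drop groups that contain only dead intervals
        serialized.append({"fetched_at": group["fetched_at"], "intervals": living})
```

After the filter, each `startsAt` appears exactly once in the output, so the restored pool never carries duplicate intervals.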
@classmethod
def from_dict(
cls,
data: dict[str, Any],
*,
api: TibberPricesApiClient,
hass: Any | None = None,
entry_id: str | None = None,
time_service: TibberPricesTimeService | None = None,
) -> TibberPricesIntervalPool | None:
"""
Restore interval pool manager from storage.
Expects single-home format: {"version": 1, "home_id": "...", "fetch_groups": [...]}
Old multi-home format is treated as corrupted and returns None.
Args:
data: Dictionary containing serialized pool state.
api: API client for fetching intervals.
hass: HomeAssistant instance for auto-save (optional).
entry_id: Config entry ID for auto-save (optional).
time_service: TimeService for time-travel support (optional).
Returns:
Restored TibberPricesIntervalPool instance, or None if format unknown/corrupted.
"""
# Validate format
if not data or "home_id" not in data or "fetch_groups" not in data:
if data and "homes" in data:
_LOGGER.info(
"Interval pool storage uses old multi-home format (pre-2025-11-25). "
"Treating as corrupted. Pool will rebuild from API."
)
else:
_LOGGER.warning("Interval pool storage format unknown or corrupted. Pool will rebuild from API.")
return None
home_id = data["home_id"]
# Create manager with home_id from storage
manager = cls(home_id=home_id, api=api, hass=hass, entry_id=entry_id, time_service=time_service)
# Restore fetch groups to cache
for serialized_group in data.get("fetch_groups", []):
fetched_at_dt = datetime.fromisoformat(serialized_group["fetched_at"])
intervals = serialized_group["intervals"]
fetch_group_index = manager._cache.add_fetch_group(intervals, fetched_at_dt)
# Rebuild index for this fetch group
for interval_index, interval in enumerate(intervals):
manager._index.add(interval, fetch_group_index, interval_index)
total_intervals = sum(len(group["intervals"]) for group in manager._cache.get_fetch_groups())
_LOGGER.debug(
"Interval pool restored from storage (home %s, %d intervals)",
home_id,
total_intervals,
)
return manager


@@ -0,0 +1,180 @@
"""
Routing Module - API endpoint selection for price intervals.
This module handles intelligent routing between different Tibber API endpoints:
- PRICE_INFO: Recent data (from "day before yesterday midnight" onwards)
- PRICE_INFO_RANGE: Historical data (before "day before yesterday midnight")
- Automatic splitting and merging when range spans the boundary
CRITICAL: Uses REAL TIME (dt_utils.now()) for API boundary calculation,
NOT TimeService.now() which may be shifted for internal simulation.
"""
from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Any
from custom_components.tibber_prices.api.exceptions import TibberPricesApiClientError
from homeassistant.util import dt as dt_utils
if TYPE_CHECKING:
from datetime import datetime
from custom_components.tibber_prices.api.client import TibberPricesApiClient
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
async def get_price_intervals_for_range(
api_client: TibberPricesApiClient,
home_id: str,
user_data: dict[str, Any],
start_time: datetime,
end_time: datetime,
) -> list[dict[str, Any]]:
"""
Get price intervals for a specific time range with automatic routing.
Automatically routes to the correct API endpoint based on the time range:
- PRICE_INFO_RANGE: For intervals exclusively before "day before yesterday midnight" (real time)
- PRICE_INFO: For intervals from "day before yesterday midnight" onwards
- Both: If range spans across the boundary, splits the request
CRITICAL: Uses REAL TIME (dt_utils.now()) for API boundary calculation,
NOT TimeService.now() which may be shifted for internal simulation.
This ensures predictable API responses.
CACHING STRATEGY: Returns ALL intervals from API response, NOT filtered.
The caller (pool.py) will cache everything and then filter to user request.
This maximizes cache efficiency - one API call can populate cache for
multiple subsequent queries.
Args:
api_client: TibberPricesApiClient instance for API calls.
home_id: Home ID to fetch price data for.
user_data: User data dict containing home metadata (including timezone).
start_time: Start of the range (inclusive, timezone-aware).
end_time: End of the range (exclusive, timezone-aware).
Returns:
List of ALL price interval dicts from API (unfiltered).
- PRICE_INFO: Returns ~384 intervals (day-before-yesterday to tomorrow)
- PRICE_INFO_RANGE: Returns intervals for requested historical range
- Both: Returns all intervals from both endpoints
Raises:
TibberPricesApiClientError: If arguments invalid or requests fail.
"""
if not user_data:
msg = "User data required for timezone-aware price fetching - fetch user data first"
raise TibberPricesApiClientError(msg)
if not home_id:
msg = "Home ID is required"
raise TibberPricesApiClientError(msg)
if start_time >= end_time:
msg = f"Invalid time range: start_time ({start_time}) must be before end_time ({end_time})"
raise TibberPricesApiClientError(msg)
# Calculate boundary: day before yesterday midnight (REAL TIME, not TimeService)
boundary = _calculate_boundary(api_client, user_data, home_id)
_LOGGER_DETAILS.debug(
"Routing price interval request for home %s: range %s to %s, boundary %s",
home_id,
start_time,
end_time,
boundary,
)
# Route based on time range
if end_time <= boundary:
# Entire range is historical (before day before yesterday) → use PRICE_INFO_RANGE
_LOGGER_DETAILS.debug("Range is fully historical, using PRICE_INFO_RANGE")
result = await api_client.async_get_price_info_range(
home_id=home_id,
user_data=user_data,
start_time=start_time,
end_time=end_time,
)
return result["price_info"]
if start_time >= boundary:
# Entire range is recent (from day before yesterday onwards) → use PRICE_INFO
_LOGGER_DETAILS.debug("Range is fully recent, using PRICE_INFO")
result = await api_client.async_get_price_info(home_id, user_data)
# Return ALL intervals (unfiltered) for maximum cache efficiency
# Pool will cache everything, then filter to user request
return result["price_info"]
# Range spans boundary → split request
_LOGGER_DETAILS.debug("Range spans boundary, splitting request")
# Fetch historical part (start_time to boundary)
historical_result = await api_client.async_get_price_info_range(
home_id=home_id,
user_data=user_data,
start_time=start_time,
end_time=boundary,
)
# Fetch recent part (boundary onwards)
recent_result = await api_client.async_get_price_info(home_id, user_data)
# Return ALL intervals (unfiltered) for maximum cache efficiency
# Pool will cache everything, then filter to user request
return historical_result["price_info"] + recent_result["price_info"]
def _calculate_boundary(
api_client: TibberPricesApiClient,
user_data: dict[str, Any],
home_id: str,
) -> datetime:
"""
Calculate the API boundary (day before yesterday midnight).
Uses the API client's helper method to extract timezone and calculate boundary.
Args:
api_client: TibberPricesApiClient instance.
user_data: User data dict containing home metadata.
home_id: Home ID to get timezone for.
Returns:
Timezone-aware datetime for day before yesterday midnight.
"""
# Extract timezone for this home
home_timezones = api_client._extract_home_timezones(user_data) # noqa: SLF001
home_tz = home_timezones.get(home_id)
# Calculate boundary using API client's method
return api_client._calculate_day_before_yesterday_midnight(home_tz) # noqa: SLF001
def _parse_timestamp(timestamp_str: str) -> datetime:
"""
Parse ISO timestamp string to timezone-aware datetime.
Args:
timestamp_str: ISO format timestamp string.
Returns:
Timezone-aware datetime object.
Raises:
ValueError: If timestamp string cannot be parsed.
"""
result = dt_utils.parse_datetime(timestamp_str)
if result is None:
msg = f"Failed to parse timestamp: {timestamp_str}"
raise ValueError(msg)
return result
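The three-way routing decision in `get_price_intervals_for_range()` reduces to a pure function of the requested range and the boundary. A sketch (illustrative names, not this module's API):

```python
from datetime import datetime, timezone


def route(start: datetime, end: datetime, boundary: datetime) -> str:
    """Pick the endpoint(s) for a [start, end) range vs. the boundary."""
    if end <= boundary:
        return "PRICE_INFO_RANGE"  # fully historical
    if start >= boundary:
        return "PRICE_INFO"        # fully recent
    return "SPLIT"                 # spans the boundary: fetch both, merge


# Boundary: day before yesterday midnight (in UTC for this sketch).
boundary = datetime(2026, 3, 27, tzinfo=timezone.utc)
historical = route(
    datetime(2026, 3, 20, tzinfo=timezone.utc),
    datetime(2026, 3, 25, tzinfo=timezone.utc),
    boundary,
)
spanning = route(
    datetime(2026, 3, 25, tzinfo=timezone.utc),
    datetime(2026, 3, 29, tzinfo=timezone.utc),
    boundary,
)
```

Because `end` is exclusive and `start` inclusive, a range ending exactly at the boundary stays fully historical and a range starting exactly at the boundary stays fully recent, so the split branch only triggers when both sides genuinely hold data.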


@@ -0,0 +1,165 @@
"""Storage management for interval pool."""
from __future__ import annotations
import errno
import logging
from typing import TYPE_CHECKING, Any
from homeassistant.helpers.storage import Store
if TYPE_CHECKING:
from homeassistant.core import HomeAssistant
_LOGGER = logging.getLogger(__name__)
_LOGGER_DETAILS = logging.getLogger(__name__ + ".details")
# Storage version - increment when changing data structure
INTERVAL_POOL_STORAGE_VERSION = 1
def get_storage_key(entry_id: str) -> str:
"""
Get storage key for interval pool based on config entry ID.
Args:
entry_id: Home Assistant config entry ID
Returns:
Storage key string
"""
return f"tibber_prices.interval_pool.{entry_id}"
async def async_load_pool_state(
hass: HomeAssistant,
entry_id: str,
) -> dict[str, Any] | None:
"""
Load interval pool state from storage.
Args:
hass: Home Assistant instance
entry_id: Config entry ID
Returns:
Pool state dict or None if no cache exists
"""
storage_key = get_storage_key(entry_id)
store: Store = Store(hass, INTERVAL_POOL_STORAGE_VERSION, storage_key)
try:
stored = await store.async_load()
except Exception:
# Corrupted storage file, JSON parse error, or other exception
_LOGGER.exception(
"Failed to load interval pool storage for entry %s (corrupted file?), starting with empty pool",
entry_id,
)
return None
if stored is None:
_LOGGER.debug("No interval pool cache found for entry %s (first run)", entry_id)
return None
# Validate storage structure (single-home format)
if not isinstance(stored, dict):
_LOGGER.warning(
"Invalid interval pool storage structure for entry %s (not a dict), ignoring",
entry_id,
)
return None
# Check for new single-home format (version 1, home_id, fetch_groups)
if "home_id" in stored and "fetch_groups" in stored:
_LOGGER.debug(
"Interval pool state loaded for entry %s (single-home format, %d fetch groups)",
entry_id,
len(stored.get("fetch_groups", [])),
)
return stored
# Check for old multi-home format (homes dict) - treat as incompatible
if "homes" in stored:
_LOGGER.info(
"Interval pool storage for entry %s uses old multi-home format (pre-2025-11-25). "
"Treating as incompatible. Pool will rebuild from API.",
entry_id,
)
return None
# Unknown format
_LOGGER.warning(
"Invalid interval pool storage structure for entry %s (missing required keys), ignoring",
entry_id,
)
return None
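The validation cascade above (single-home passes, pre-2025-11-25 multi-home and anything malformed are rejected so the pool rebuilds from the API) can be condensed into a small classifier; this is a sketch with an illustrative function name, simplified from the function's logging:

```python
from typing import Any


def classify_pool_storage(stored: Any) -> str:
    """Classify a loaded interval pool payload by format."""
    if not isinstance(stored, dict):
        return "invalid"
    if "home_id" in stored and "fetch_groups" in stored:
        return "single_home"       # current format, usable as-is
    if "homes" in stored:
        return "legacy_multi_home"  # pre-2025-11-25, rebuild from API
    return "invalid"                # unknown shape, rebuild from API


current = classify_pool_storage({"version": 1, "home_id": "h1", "fetch_groups": []})
legacy = classify_pool_storage({"homes": {}})
```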
async def async_save_pool_state(
hass: HomeAssistant,
entry_id: str,
pool_state: dict[str, Any],
) -> None:
"""
Save interval pool state to storage.
Args:
hass: Home Assistant instance
entry_id: Config entry ID
pool_state: Pool state dict to save
"""
storage_key = get_storage_key(entry_id)
store: Store = Store(hass, INTERVAL_POOL_STORAGE_VERSION, storage_key)
try:
await store.async_save(pool_state)
_LOGGER_DETAILS.debug(
"Interval pool state saved for entry %s (%d fetch groups)",
entry_id,
len(pool_state.get("fetch_groups", [])),
)
except OSError as err:
# Provide specific error messages based on errno
if err.errno == errno.ENOSPC: # Disk full
_LOGGER.exception(
"Cannot save interval pool storage for entry %s: Disk full!",
entry_id,
)
elif err.errno == errno.EACCES: # Permission denied
_LOGGER.exception(
"Cannot save interval pool storage for entry %s: Permission denied!",
entry_id,
)
else:
_LOGGER.exception(
"Failed to save interval pool storage for entry %s",
entry_id,
)
async def async_remove_pool_storage(
hass: HomeAssistant,
entry_id: str,
) -> None:
"""
Remove interval pool storage file.
Used when config entry is removed.
Args:
hass: Home Assistant instance
entry_id: Config entry ID
"""
storage_key = get_storage_key(entry_id)
store: Store = Store(hass, INTERVAL_POOL_STORAGE_VERSION, storage_key)
try:
await store.async_remove()
_LOGGER.debug("Interval pool storage removed for entry %s", entry_id)
except OSError as ex:
_LOGGER.warning("Failed to remove interval pool storage for entry %s: %s", entry_id, ex)


@@ -6,11 +6,10 @@
   ],
   "config_flow": true,
   "documentation": "https://github.com/jpawlowski/hass.tibber_prices",
-  "integration_type": "hub",
   "iot_class": "cloud_polling",
   "issue_tracker": "https://github.com/jpawlowski/hass.tibber_prices/issues",
   "requirements": [
     "aiofiles>=23.2.1"
   ],
-  "version": "0.12.0"
+  "version": "0.27.0"
 }


@@ -0,0 +1,39 @@
"""
Number platform for Tibber Prices integration.
Provides configurable number entities for runtime overrides of Best Price
and Peak Price period calculation settings. These entities allow automation
of configuration parameters without using the options flow.
When enabled, these entities take precedence over the options flow settings.
When disabled (default), the options flow settings are used.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from .core import TibberPricesConfigNumber
from .definitions import NUMBER_ENTITY_DESCRIPTIONS
if TYPE_CHECKING:
from custom_components.tibber_prices.data import TibberPricesConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.helpers.entity_platform import AddEntitiesCallback
async def async_setup_entry(
_hass: HomeAssistant,
entry: TibberPricesConfigEntry,
async_add_entities: AddEntitiesCallback,
) -> None:
"""Set up Tibber Prices number entities based on a config entry."""
coordinator = entry.runtime_data.coordinator
async_add_entities(
TibberPricesConfigNumber(
coordinator=coordinator,
entity_description=entity_description,
)
for entity_description in NUMBER_ENTITY_DESCRIPTIONS
)


@@ -0,0 +1,242 @@
"""
Number entity implementation for Tibber Prices configuration overrides.
These entities allow runtime configuration of period calculation settings.
When a config entity is enabled, its value takes precedence over the
options flow setting for period calculations.
"""
from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Any
from custom_components.tibber_prices.const import (
DOMAIN,
get_home_type_translation,
get_translation,
)
from homeassistant.components.number import NumberEntity, RestoreNumber
from homeassistant.core import callback
from homeassistant.helpers.device_registry import DeviceEntryType, DeviceInfo
if TYPE_CHECKING:
from custom_components.tibber_prices.coordinator import (
TibberPricesDataUpdateCoordinator,
)
from .definitions import TibberPricesNumberEntityDescription
_LOGGER = logging.getLogger(__name__)
class TibberPricesConfigNumber(RestoreNumber, NumberEntity):
"""
A number entity for configuring period calculation settings at runtime.
When this entity is enabled, its value overrides the corresponding
options flow setting. When disabled (default), the options flow
setting is used for period calculations.
The entity restores its value after Home Assistant restart.
"""
_attr_has_entity_name = True
entity_description: TibberPricesNumberEntityDescription
# Exclude all attributes from recorder history - config entities don't need history
_unrecorded_attributes = frozenset(
{
"description",
"long_description",
"usage_tips",
"friendly_name",
"icon",
"unit_of_measurement",
"mode",
"min",
"max",
"step",
}
)
def __init__(
self,
coordinator: TibberPricesDataUpdateCoordinator,
entity_description: TibberPricesNumberEntityDescription,
) -> None:
"""Initialize the config number entity."""
self.coordinator = coordinator
self.entity_description = entity_description
# Set unique ID
self._attr_unique_id = (
f"{coordinator.config_entry.unique_id or coordinator.config_entry.entry_id}_{entity_description.key}"
)
# Initialize with None - will be set in async_added_to_hass
self._attr_native_value: float | None = None
# Setup device info
self._setup_device_info()
def _setup_device_info(self) -> None:
"""Set up device information."""
home_name, home_id, home_type = self._get_device_info()
language = self.coordinator.hass.config.language or "en"
translated_model = get_home_type_translation(home_type, language) if home_type else "Unknown"
self._attr_device_info = DeviceInfo(
entry_type=DeviceEntryType.SERVICE,
identifiers={
(
DOMAIN,
self.coordinator.config_entry.unique_id or self.coordinator.config_entry.entry_id,
)
},
name=home_name,
manufacturer="Tibber",
model=translated_model,
serial_number=home_id if home_id else None,
configuration_url="https://developer.tibber.com/explorer",
)
def _get_device_info(self) -> tuple[str, str | None, str | None]:
"""Get device name, ID and type."""
user_profile = self.coordinator.get_user_profile()
is_subentry = bool(self.coordinator.config_entry.data.get("home_id"))
home_id = self.coordinator.config_entry.unique_id
home_type = None
if is_subentry:
home_data = self.coordinator.config_entry.data.get("home_data", {})
home_id = self.coordinator.config_entry.data.get("home_id")
address = home_data.get("address", {})
address1 = address.get("address1", "")
city = address.get("city", "")
app_nickname = home_data.get("appNickname", "")
home_type = home_data.get("type", "")
if app_nickname and app_nickname.strip():
home_name = app_nickname.strip()
elif address1:
home_name = address1
if city:
home_name = f"{home_name}, {city}"
else:
home_name = f"Tibber Home {home_id[:8]}" if home_id else "Tibber Home"
elif user_profile:
home_name = user_profile.get("name") or "Tibber Home"
else:
home_name = "Tibber Home"
return home_name, home_id, home_type
async def async_added_to_hass(self) -> None:
"""Handle entity which was added to Home Assistant."""
await super().async_added_to_hass()
# Try to restore previous state
last_number_data = await self.async_get_last_number_data()
if last_number_data is not None and last_number_data.native_value is not None:
self._attr_native_value = last_number_data.native_value
_LOGGER.debug(
"Restored %s value: %s",
self.entity_description.key,
self._attr_native_value,
)
else:
# Initialize with value from options flow (or default)
self._attr_native_value = self._get_value_from_options()
_LOGGER.debug(
"Initialized %s from options: %s",
self.entity_description.key,
self._attr_native_value,
)
# Register override with coordinator if entity is enabled
# This happens during add, so check entity registry
await self._sync_override_state()
async def async_will_remove_from_hass(self) -> None:
"""Handle entity removal from Home Assistant."""
# Remove override when entity is removed
self.coordinator.remove_config_override(
self.entity_description.config_key,
self.entity_description.config_section,
)
await super().async_will_remove_from_hass()
def _get_value_from_options(self) -> float:
"""Get the current value from options flow or default."""
options = self.coordinator.config_entry.options
section = options.get(self.entity_description.config_section, {})
value = section.get(
self.entity_description.config_key,
self.entity_description.default_value,
)
return float(value)
async def _sync_override_state(self) -> None:
"""Sync the override state with the coordinator based on entity enabled state."""
# Check if entity is enabled in registry
if self.registry_entry is not None and not self.registry_entry.disabled:
# Entity is enabled - register the override
if self._attr_native_value is not None:
self.coordinator.set_config_override(
self.entity_description.config_key,
self.entity_description.config_section,
self._attr_native_value,
)
else:
# Entity is disabled - remove override
self.coordinator.remove_config_override(
self.entity_description.config_key,
self.entity_description.config_section,
)
async def async_set_native_value(self, value: float) -> None:
"""Update the current value and trigger recalculation."""
self._attr_native_value = value
# Update the coordinator's runtime override
self.coordinator.set_config_override(
self.entity_description.config_key,
self.entity_description.config_section,
value,
)
# Trigger period recalculation (same path as options update)
await self.coordinator.async_handle_config_override_update()
_LOGGER.debug(
"Updated %s to %s, triggered period recalculation",
self.entity_description.key,
value,
)
@property
def extra_state_attributes(self) -> dict[str, Any] | None:
"""Return entity state attributes with description."""
language = self.coordinator.hass.config.language or "en"
# Try to get description from custom translations
# Custom translations use direct path: number.{key}.description
translation_path = [
"number",
self.entity_description.translation_key or self.entity_description.key,
"description",
]
description = get_translation(translation_path, language)
attrs: dict[str, Any] = {}
if description:
attrs["description"] = description
return attrs if attrs else None
@callback
def async_registry_entry_updated(self) -> None:
"""Handle entity registry update (enabled/disabled state change)."""
# This is called when the entity is enabled/disabled in the UI
self.hass.async_create_task(self._sync_override_state())


@@ -0,0 +1,250 @@
"""
Number entity definitions for Tibber Prices configuration overrides.
These number entities allow runtime configuration of Best Price and Peak Price
period calculation settings. They are disabled by default - users can enable
individual entities to override specific settings at runtime.
When enabled, the entity value takes precedence over the options flow setting.
When disabled (default), the options flow setting is used.
"""
from __future__ import annotations
from dataclasses import dataclass
from homeassistant.components.number import (
NumberEntityDescription,
NumberMode,
)
from homeassistant.const import PERCENTAGE, EntityCategory
@dataclass(frozen=True, kw_only=True)
class TibberPricesNumberEntityDescription(NumberEntityDescription):
"""Describes a Tibber Prices number entity for config overrides."""
# The config key this entity overrides (matches CONF_* constants)
config_key: str
# The section in options where this setting is stored (e.g., "flexibility_settings")
config_section: str
# Whether this is for best_price (False) or peak_price (True)
is_peak_price: bool = False
# Default value from const.py
default_value: float | int = 0
# ============================================================================
# BEST PRICE PERIOD CONFIGURATION OVERRIDES
# ============================================================================
BEST_PRICE_NUMBER_ENTITIES = (
TibberPricesNumberEntityDescription(
key="best_price_flex_override",
translation_key="best_price_flex_override",
name="Best Price: Flexibility",
icon="mdi:arrow-down-bold-circle",
entity_category=EntityCategory.CONFIG,
entity_registry_enabled_default=False,
native_min_value=0,
native_max_value=50,
native_step=1,
native_unit_of_measurement=PERCENTAGE,
mode=NumberMode.SLIDER,
config_key="best_price_flex",
config_section="flexibility_settings",
is_peak_price=False,
default_value=15, # DEFAULT_BEST_PRICE_FLEX
),
TibberPricesNumberEntityDescription(
key="best_price_min_distance_override",
translation_key="best_price_min_distance_override",
name="Best Price: Minimum Distance",
icon="mdi:arrow-down-bold-circle",
entity_category=EntityCategory.CONFIG,
entity_registry_enabled_default=False,
native_min_value=-50,
native_max_value=0,
native_step=1,
native_unit_of_measurement=PERCENTAGE,
mode=NumberMode.SLIDER,
config_key="best_price_min_distance_from_avg",
config_section="flexibility_settings",
is_peak_price=False,
default_value=-5, # DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG
),
TibberPricesNumberEntityDescription(
key="best_price_min_period_length_override",
translation_key="best_price_min_period_length_override",
name="Best Price: Minimum Period Length",
icon="mdi:arrow-down-bold-circle",
entity_category=EntityCategory.CONFIG,
entity_registry_enabled_default=False,
native_min_value=15,
native_max_value=180,
native_step=15,
native_unit_of_measurement="min",
mode=NumberMode.SLIDER,
config_key="best_price_min_period_length",
config_section="period_settings",
is_peak_price=False,
default_value=60, # DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH
),
TibberPricesNumberEntityDescription(
        key="best_price_min_periods_override",
        translation_key="best_price_min_periods_override",
        name="Best Price: Minimum Periods",
        icon="mdi:arrow-down-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=1,
        native_max_value=10,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="min_periods_best",
        config_section="relaxation_and_target_periods",
        is_peak_price=False,
        default_value=2,  # DEFAULT_MIN_PERIODS_BEST
    ),
    TibberPricesNumberEntityDescription(
        key="best_price_relaxation_attempts_override",
        translation_key="best_price_relaxation_attempts_override",
        name="Best Price: Relaxation Attempts",
        icon="mdi:arrow-down-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=1,
        native_max_value=12,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="relaxation_attempts_best",
        config_section="relaxation_and_target_periods",
        is_peak_price=False,
        default_value=11,  # DEFAULT_RELAXATION_ATTEMPTS_BEST
    ),
    TibberPricesNumberEntityDescription(
        key="best_price_gap_count_override",
        translation_key="best_price_gap_count_override",
        name="Best Price: Gap Tolerance",
        icon="mdi:arrow-down-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=0,
        native_max_value=8,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="best_price_max_level_gap_count",
        config_section="period_settings",
        is_peak_price=False,
        default_value=1,  # DEFAULT_BEST_PRICE_MAX_LEVEL_GAP_COUNT
    ),
)

# ============================================================================
# PEAK PRICE PERIOD CONFIGURATION OVERRIDES
# ============================================================================
PEAK_PRICE_NUMBER_ENTITIES = (
    TibberPricesNumberEntityDescription(
        key="peak_price_flex_override",
        translation_key="peak_price_flex_override",
        name="Peak Price: Flexibility",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=-50,
        native_max_value=0,
        native_step=1,
        native_unit_of_measurement=PERCENTAGE,
        mode=NumberMode.SLIDER,
        config_key="peak_price_flex",
        config_section="flexibility_settings",
        is_peak_price=True,
        default_value=-20,  # DEFAULT_PEAK_PRICE_FLEX
    ),
    TibberPricesNumberEntityDescription(
        key="peak_price_min_distance_override",
        translation_key="peak_price_min_distance_override",
        name="Peak Price: Minimum Distance",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=0,
        native_max_value=50,
        native_step=1,
        native_unit_of_measurement=PERCENTAGE,
        mode=NumberMode.SLIDER,
        config_key="peak_price_min_distance_from_avg",
        config_section="flexibility_settings",
        is_peak_price=True,
        default_value=5,  # DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG
    ),
    TibberPricesNumberEntityDescription(
        key="peak_price_min_period_length_override",
        translation_key="peak_price_min_period_length_override",
        name="Peak Price: Minimum Period Length",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=15,
        native_max_value=180,
        native_step=15,
        native_unit_of_measurement="min",
        mode=NumberMode.SLIDER,
        config_key="peak_price_min_period_length",
        config_section="period_settings",
        is_peak_price=True,
        default_value=30,  # DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH
    ),
    TibberPricesNumberEntityDescription(
        key="peak_price_min_periods_override",
        translation_key="peak_price_min_periods_override",
        name="Peak Price: Minimum Periods",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=1,
        native_max_value=10,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="min_periods_peak",
        config_section="relaxation_and_target_periods",
        is_peak_price=True,
        default_value=2,  # DEFAULT_MIN_PERIODS_PEAK
    ),
    TibberPricesNumberEntityDescription(
        key="peak_price_relaxation_attempts_override",
        translation_key="peak_price_relaxation_attempts_override",
        name="Peak Price: Relaxation Attempts",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=1,
        native_max_value=12,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="relaxation_attempts_peak",
        config_section="relaxation_and_target_periods",
        is_peak_price=True,
        default_value=11,  # DEFAULT_RELAXATION_ATTEMPTS_PEAK
    ),
    TibberPricesNumberEntityDescription(
        key="peak_price_gap_count_override",
        translation_key="peak_price_gap_count_override",
        name="Peak Price: Gap Tolerance",
        icon="mdi:arrow-up-bold-circle",
        entity_category=EntityCategory.CONFIG,
        entity_registry_enabled_default=False,
        native_min_value=0,
        native_max_value=8,
        native_step=1,
        mode=NumberMode.SLIDER,
        config_key="peak_price_max_level_gap_count",
        config_section="period_settings",
        is_peak_price=True,
        default_value=1,  # DEFAULT_PEAK_PRICE_MAX_LEVEL_GAP_COUNT
    ),
)

# All number entity descriptions combined
NUMBER_ENTITY_DESCRIPTIONS = BEST_PRICE_NUMBER_ENTITIES + PEAK_PRICE_NUMBER_ENTITIES
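As a quick sanity check of the combined tuple, the `config_key` → `default_value` pairs can be collected into a dict. `NumberDesc` below is a hypothetical minimal stand-in for `TibberPricesNumberEntityDescription`, not the integration's real class:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class NumberDesc:
    """Hypothetical stand-in for TibberPricesNumberEntityDescription."""

    key: str
    config_key: str
    default_value: int


# Two representative entries mirroring the tuples above
BEST = (NumberDesc("best_price_min_periods_override", "min_periods_best", 2),)
PEAK = (NumberDesc("peak_price_min_periods_override", "min_periods_peak", 2),)
ALL_DESCRIPTIONS = BEST + PEAK


def config_defaults(descriptions: tuple[NumberDesc, ...]) -> dict[str, int]:
    """Map each override's config_key to its default value."""
    return {d.config_key: d.default_value for d in descriptions}
```

Because the descriptions are frozen dataclasses with unique `config_key` values, the concatenated tuple can be flattened into a lookup table without collisions.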

View file

@@ -17,6 +17,11 @@ from __future__ import annotations

 from typing import TYPE_CHECKING

+from custom_components.tibber_prices.const import (
+    CONF_CURRENCY_DISPLAY_MODE,
+    DISPLAY_MODE_BASE,
+)
+
 from .core import TibberPricesSensor
 from .definitions import ENTITY_DESCRIPTIONS

@@ -34,10 +39,22 @@ async def async_setup_entry(
     """Set up Tibber Prices sensor based on a config entry."""
     coordinator = entry.runtime_data.coordinator

+    # Get display mode from config
+    display_mode = entry.options.get(CONF_CURRENCY_DISPLAY_MODE, DISPLAY_MODE_BASE)
+
+    # Filter entity descriptions based on display mode
+    # Skip current_interval_price_base if user configured major display
+    # (regular current_interval_price already shows major units)
+    entities_to_create = [
+        entity_description
+        for entity_description in ENTITY_DESCRIPTIONS
+        if not (entity_description.key == "current_interval_price_base" and display_mode == DISPLAY_MODE_BASE)
+    ]
+
     async_add_entities(
         TibberPricesSensor(
             coordinator=coordinator,
             entity_description=entity_description,
         )
-        for entity_description in ENTITY_DESCRIPTIONS
+        for entity_description in entities_to_create
     )
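The filtering logic in this setup hunk can be sketched in isolation. `Desc` and the three keys here are hypothetical stand-ins, not the integration's real `ENTITY_DESCRIPTIONS`:

```python
from dataclasses import dataclass

DISPLAY_MODE_BASE = "base"  # assumed constant value for illustration


@dataclass(frozen=True)
class Desc:
    """Hypothetical stand-in for a sensor entity description."""

    key: str


ENTITY_DESCRIPTIONS = (
    Desc("current_interval_price"),
    Desc("current_interval_price_base"),
    Desc("average_price_today"),
)


def filter_descriptions(descriptions, display_mode):
    """Drop the dedicated *_base sensor when it would duplicate the regular price sensor."""
    return [
        d
        for d in descriptions
        if not (d.key == "current_interval_price_base" and display_mode == DISPLAY_MODE_BASE)
    ]
```

With the base display mode, the redundant `current_interval_price_base` entity is never created; any other mode keeps the full list.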

View file

@@ -14,6 +14,22 @@ from custom_components.tibber_prices.entity_utils import (
     add_description_attributes,
     add_icon_color_attribute,
 )
+from custom_components.tibber_prices.sensor.types import (
+    DailyStatPriceAttributes,
+    DailyStatRatingAttributes,
+    FutureAttributes,
+    IntervalLevelAttributes,
+    # Import all types for re-export
+    IntervalPriceAttributes,
+    IntervalRatingAttributes,
+    LifecycleAttributes,
+    MetadataAttributes,
+    SensorAttributes,
+    TimingAttributes,
+    TrendAttributes,
+    VolatilityAttributes,
+    Window24hAttributes,
+)

 if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator.core import (
@@ -34,6 +50,20 @@ from .volatility import add_volatility_type_attributes, get_prices_for_volatilit
 from .window_24h import add_average_price_attributes

 __all__ = [
+    "DailyStatPriceAttributes",
+    "DailyStatRatingAttributes",
+    "FutureAttributes",
+    "IntervalLevelAttributes",
+    "IntervalPriceAttributes",
+    "IntervalRatingAttributes",
+    "LifecycleAttributes",
+    "MetadataAttributes",
+    # Type exports
+    "SensorAttributes",
+    "TimingAttributes",
+    "TrendAttributes",
+    "VolatilityAttributes",
+    "Window24hAttributes",
     "add_volatility_type_attributes",
     "build_extra_state_attributes",
     "build_sensor_attributes",
@@ -47,7 +77,9 @@ def build_sensor_attributes(
     coordinator: TibberPricesDataUpdateCoordinator,
     native_value: Any,
     cached_data: dict,
-) -> dict | None:
+    *,
+    config_entry: TibberPricesConfigEntry,
+) -> dict[str, Any] | None:
     """
     Build attributes for a sensor based on its key.
@@ -58,6 +90,7 @@ def build_sensor_attributes(
         coordinator: The data update coordinator
         native_value: The current native value of the sensor
         cached_data: Dictionary containing cached sensor data
+        config_entry: Config entry for user preferences

     Returns:
         Dictionary of attributes or None if no attributes should be added
@@ -97,6 +130,7 @@ def build_sensor_attributes(
             native_value=native_value,
             cached_data=cached_data,
             time=time,
+            config_entry=config_entry,
         )
     elif key in [
         "trailing_price_average",
@@ -106,9 +140,23 @@ def build_sensor_attributes(
         "leading_price_min",
         "leading_price_max",
     ]:
-        add_average_price_attributes(attributes=attributes, key=key, coordinator=coordinator, time=time)
+        add_average_price_attributes(
+            attributes=attributes,
+            key=key,
+            coordinator=coordinator,
+            time=time,
+            cached_data=cached_data,
+            config_entry=config_entry,
+        )
     elif key.startswith("next_avg_"):
-        add_next_avg_attributes(attributes=attributes, key=key, coordinator=coordinator, time=time)
+        add_next_avg_attributes(
+            attributes=attributes,
+            key=key,
+            coordinator=coordinator,
+            time=time,
+            cached_data=cached_data,
+            config_entry=config_entry,
+        )
     elif any(
         pattern in key
         for pattern in [
@@ -130,6 +178,7 @@ def build_sensor_attributes(
             key=key,
             cached_data=cached_data,
             time=time,
+            config_entry=config_entry,
         )
     elif key == "data_lifecycle_status":
         # Lifecycle sensor uses dedicated builder with calculator
@@ -175,7 +224,7 @@ def build_extra_state_attributes(  # noqa: PLR0913
     *,
     config_entry: TibberPricesConfigEntry,
     coordinator_data: dict,
-    sensor_attrs: dict | None = None,
+    sensor_attrs: dict[str, Any] | None = None,
     time: TibberPricesTimeService,
 ) -> dict[str, Any] | None:
     """

View file

@@ -5,12 +5,18 @@ from __future__ import annotations

 from typing import TYPE_CHECKING

 from custom_components.tibber_prices.const import PRICE_RATING_MAPPING
+from custom_components.tibber_prices.coordinator.helpers import (
+    get_intervals_for_day_offsets,
+)
 from homeassistant.const import PERCENTAGE

 if TYPE_CHECKING:
     from datetime import datetime

     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from custom_components.tibber_prices.data import TibberPricesConfigEntry
+
+from .helpers import add_alternate_average_attribute


 def _get_day_midnight_timestamp(key: str, *, time: TibberPricesTimeService) -> datetime:
@@ -46,20 +52,32 @@ def _get_day_key_from_sensor_key(key: str) -> str:
     return "today"


-def _add_fallback_timestamp(attributes: dict, key: str, price_info: dict) -> None:
+def _add_fallback_timestamp(
+    attributes: dict,
+    key: str,
+    price_info: dict,
+) -> None:
     """
     Add fallback timestamp to attributes based on the day in the sensor key.

     Args:
         attributes: Dictionary to add timestamp to
         key: The sensor entity key
-        price_info: Price info dictionary from coordinator data
+        price_info: Price info dictionary from coordinator data (flat structure)

     """
     day_key = _get_day_key_from_sensor_key(key)
-    day_data = price_info.get(day_key, [])
-    if day_data:
-        attributes["timestamp"] = day_data[0].get("startsAt")
+
+    # Use helper to get intervals for this day
+    # Build minimal coordinator_data structure for helper
+    coordinator_data = {"priceInfo": price_info}
+
+    # Map day key to offset: yesterday=-1, today=0, tomorrow=1
+    day_offset = {"yesterday": -1, "today": 0, "tomorrow": 1}[day_key]
+    day_intervals = get_intervals_for_day_offsets(coordinator_data, [day_offset])
+
+    # Use first interval's timestamp if available
+    if day_intervals:
+        attributes["timestamp"] = day_intervals[0].get("startsAt")


 def add_statistics_attributes(
@@ -68,6 +86,7 @@ def add_statistics_attributes(
     cached_data: dict,
     *,
     time: TibberPricesTimeService,
+    config_entry: TibberPricesConfigEntry,
 ) -> None:
     """
     Add attributes for statistics and rating sensors.
@@ -77,6 +96,7 @@ def add_statistics_attributes(
         key: The sensor entity key
         cached_data: Dictionary containing cached sensor data
         time: TibberPricesTimeService instance (required)
+        config_entry: Config entry for user preferences

     """
     # Data timestamp sensor - shows API fetch time
@@ -111,10 +131,17 @@ def add_statistics_attributes(
         attributes["timestamp"] = extreme_starts_at
         return

-    # Daily average sensors - show midnight to indicate whole day
+    # Daily average sensors - show midnight to indicate whole day + add alternate value
     daily_avg_sensors = {"average_price_today", "average_price_tomorrow"}
     if key in daily_avg_sensors:
         attributes["timestamp"] = _get_day_midnight_timestamp(key, time=time)
+        # Add alternate average attribute
+        add_alternate_average_attribute(
+            attributes,
+            cached_data,
+            key,  # base_key = key itself ("average_price_today" or "average_price_tomorrow")
+            config_entry=config_entry,
+        )
         return

     # Daily aggregated level/rating sensors - show midnight to indicate whole day
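The fallback-timestamp change above routes the day lookup through `get_intervals_for_day_offsets()` instead of indexing `priceInfo` by name. A minimal sketch, assuming a flat `priceInfo` with `yesterday`/`today`/`tomorrow` lists (the stand-in helper here is simplified; the real one lives in `coordinator.helpers`):

```python
from __future__ import annotations


def get_intervals_for_day_offsets(coordinator_data: dict, offsets: list[int]) -> list[dict]:
    """Simplified stand-in: resolve day offsets back to the named lists in priceInfo."""
    names = {-1: "yesterday", 0: "today", 1: "tomorrow"}
    price_info = coordinator_data.get("priceInfo", {})
    result: list[dict] = []
    for offset in offsets:
        result.extend(price_info.get(names[offset], []))
    return result


def fallback_timestamp(key: str, price_info: dict) -> str | None:
    """Mirror _add_fallback_timestamp(): first interval's startsAt for the keyed day."""
    day_key = "tomorrow" if "tomorrow" in key else "yesterday" if "yesterday" in key else "today"
    # Map day key to offset, as in the diff: yesterday=-1, today=0, tomorrow=1
    day_offset = {"yesterday": -1, "today": 0, "tomorrow": 1}[day_key]
    intervals = get_intervals_for_day_offsets({"priceInfo": price_info}, [day_offset])
    return intervals[0].get("startsAt") if intervals else None
```

When the keyed day has no intervals, no timestamp is produced, matching the guarded `if day_intervals:` in the real code.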

View file

@@ -4,22 +4,30 @@ from __future__ import annotations

 from typing import TYPE_CHECKING

+from custom_components.tibber_prices.const import get_display_unit_factor
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
+
 if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator.core import (
         TibberPricesDataUpdateCoordinator,
     )
     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from custom_components.tibber_prices.data import TibberPricesConfigEntry
+
+from .helpers import add_alternate_average_attribute

 # Constants
 MAX_FORECAST_INTERVALS = 8  # Show up to 8 future intervals (2 hours with 15-min intervals)


-def add_next_avg_attributes(
+def add_next_avg_attributes(  # noqa: PLR0913
     attributes: dict,
     key: str,
     coordinator: TibberPricesDataUpdateCoordinator,
     *,
     time: TibberPricesTimeService,
+    cached_data: dict | None = None,
+    config_entry: TibberPricesConfigEntry | None = None,
 ) -> None:
     """
     Add attributes for next N hours average price sensors.
@@ -29,6 +37,8 @@ def add_next_avg_attributes(
         key: The sensor entity key
         coordinator: The data update coordinator
         time: TibberPricesTimeService instance (required)
+        cached_data: Optional cached data dictionary for median values
+        config_entry: Optional config entry for user preferences

     """
     # Extract hours from sensor key (e.g., "next_avg_3h" -> 3)
@@ -40,15 +50,11 @@ def add_next_avg_attributes(
     # Use TimeService to get the N-hour window starting from next interval
     next_interval_start, window_end = time.get_next_n_hours_window(hours)

-    # Get all price intervals
-    price_info = coordinator.data.get("priceInfo", {})
-    today_prices = price_info.get("today", [])
-    tomorrow_prices = price_info.get("tomorrow", [])
-    all_prices = today_prices + tomorrow_prices
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_prices = get_intervals_for_day_offsets(coordinator.data, [-1, 0, 1])
     if not all_prices:
         return

     # Find all intervals in the window
     intervals_in_window = []
     for price_data in all_prices:
@@ -64,33 +70,42 @@ def add_next_avg_attributes(
     attributes["interval_count"] = len(intervals_in_window)
     attributes["hours"] = hours

+    # Add alternate average attribute if available in cached_data
+    if cached_data and config_entry:
+        base_key = f"next_avg_{hours}h"
+        add_alternate_average_attribute(
+            attributes,
+            cached_data,
+            base_key,
+            config_entry=config_entry,
+        )


 def get_future_prices(
     coordinator: TibberPricesDataUpdateCoordinator,
     max_intervals: int | None = None,
     *,
     time: TibberPricesTimeService,
+    config_entry: TibberPricesConfigEntry,
 ) -> list[dict] | None:
     """
     Get future price data for multiple upcoming intervals.

     Args:
-        coordinator: The data update coordinator
-        max_intervals: Maximum number of future intervals to return
-        time: TibberPricesTimeService instance (required)
+        coordinator: The data update coordinator.
+        max_intervals: Maximum number of future intervals to return.
+        time: TibberPricesTimeService instance (required).
+        config_entry: Config entry to get display unit configuration.

     Returns:
-        List of upcoming price intervals with timestamps and prices
+        List of upcoming price intervals with timestamps and prices.

     """
     if not coordinator.data:
         return None

-    price_info = coordinator.data.get("priceInfo", {})
-    today_prices = price_info.get("today", [])
-    tomorrow_prices = price_info.get("tomorrow", [])
-    all_prices = today_prices + tomorrow_prices
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_prices = get_intervals_for_day_offsets(coordinator.data, [-1, 0, 1])
     if not all_prices:
         return None

@@ -101,28 +116,46 @@ def get_future_prices(
     # Track the maximum intervals to return
     intervals_to_return = MAX_FORECAST_INTERVALS if max_intervals is None else max_intervals

-    for day_key in ["today", "tomorrow"]:
-        for price_data in price_info.get(day_key, []):
-            starts_at = time.get_interval_time(price_data)
-            if starts_at is None:
-                continue
-            interval_end = starts_at + time.get_interval_duration()
-
-            # Use TimeService to check if interval is in future
-            if time.is_in_future(starts_at):
-                future_prices.append(
-                    {
-                        "interval_start": starts_at,
-                        "interval_end": interval_end,
-                        "price": float(price_data["total"]),
-                        "price_minor": round(float(price_data["total"]) * 100, 2),
-                        "level": price_data.get("level", "NORMAL"),
-                        "rating": price_data.get("difference", None),
-                        "rating_level": price_data.get("rating_level"),
-                        "day": day_key,
-                    }
-                )
+    # Get current date for day key determination
+    now = time.now()
+    today_date = now.date()
+    tomorrow_date = time.get_local_date(offset_days=1)
+
+    for price_data in all_prices:
+        starts_at = time.get_interval_time(price_data)
+        if starts_at is None:
+            continue
+        interval_end = starts_at + time.get_interval_duration()
+
+        # Use TimeService to check if interval is in future
+        if time.is_in_future(starts_at):
+            # Determine which day this interval belongs to
+            interval_date = starts_at.date()
+            if interval_date == today_date:
+                day_key = "today"
+            elif interval_date == tomorrow_date:
+                day_key = "tomorrow"
+            else:
+                day_key = "unknown"
+
+            # Convert to display currency unit based on configuration
+            price_major = float(price_data["total"])
+            factor = get_display_unit_factor(config_entry)
+            price_display = round(price_major * factor, 2)
+
+            future_prices.append(
+                {
+                    "interval_start": starts_at,
+                    "interval_end": interval_end,
+                    "price": price_major,
+                    "price_minor": price_display,
+                    "level": price_data.get("level", "NORMAL"),
+                    "rating": price_data.get("difference", None),
+                    "rating_level": price_data.get("rating_level"),
+                    "day": day_key,
+                }
+            )

     # Sort by start time
     future_prices.sort(key=lambda x: x["interval_start"])
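Two pieces of the rewritten `get_future_prices()` loop can be shown standalone: the date-based `day_key` classification (which replaces iterating per-day lists) and the configurable unit conversion that replaces the hard-coded `* 100`. The factor of 100 in the usage below is an assumption for illustration; the real value comes from `get_display_unit_factor(config_entry)`:

```python
from __future__ import annotations

from datetime import date, datetime, timedelta


def day_key_for(starts_at: datetime, today: date) -> str:
    """Classify a future interval relative to 'today', as in get_future_prices()."""
    interval_date = starts_at.date()
    if interval_date == today:
        return "today"
    if interval_date == today + timedelta(days=1):
        return "tomorrow"
    return "unknown"


def display_price(price_major: float, factor: float) -> float:
    """Major-unit price scaled to the configured display unit (e.g. 100 for cents)."""
    return round(price_major * factor, 2)
```

Classifying by `starts_at.date()` means yesterday's lingering intervals (now included via the `[-1, 0, 1]` offsets) can never be mislabeled as "today" or "tomorrow"; they fall through to "unknown" and are still filtered by the future check.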

View file

@@ -0,0 +1,41 @@
+"""Helper functions for sensor attributes."""
+
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+    from custom_components.tibber_prices.data import TibberPricesConfigEntry
+
+
+def add_alternate_average_attribute(
+    attributes: dict,
+    cached_data: dict,
+    base_key: str,
+    *,
+    config_entry: TibberPricesConfigEntry,  # noqa: ARG001
+) -> None:
+    """
+    Add both average values (mean and median) as attributes.
+
+    This ensures automations work consistently regardless of which value
+    is displayed in the state. Both values are always available as attributes.
+
+    Note: To avoid duplicate recording, the value used as state should be
+    excluded from recorder via dynamic _unrecorded_attributes in sensor core.
+
+    Args:
+        attributes: Dictionary to add attribute to
+        cached_data: Cached calculation data containing mean/median values
+        base_key: Base key for cached values (e.g., "average_price_today", "rolling_hour_0")
+        config_entry: Config entry for user preferences (used to determine which value is in state)
+
+    """
+    # Always add both mean and median values as attributes
+    mean_value = cached_data.get(f"{base_key}_mean")
+    if mean_value is not None:
+        attributes["price_mean"] = mean_value
+
+    median_value = cached_data.get(f"{base_key}_median")
+    if median_value is not None:
+        attributes["price_median"] = median_value
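A usage sketch of the new helper, with `config_entry` omitted since it is unused in the helper body (it carries a `noqa: ARG001`):

```python
def add_alternate_average_attribute(attributes: dict, cached_data: dict, base_key: str) -> None:
    """Simplified copy of the new helper for demonstration (config_entry omitted)."""
    mean_value = cached_data.get(f"{base_key}_mean")
    if mean_value is not None:
        attributes["price_mean"] = mean_value
    median_value = cached_data.get(f"{base_key}_median")
    if median_value is not None:
        attributes["price_median"] = median_value


# Both values surface as attributes regardless of which one is shown as state
attrs: dict = {}
cached = {"average_price_today_mean": 0.271, "average_price_today_median": 0.265}
add_alternate_average_attribute(attrs, cached, "average_price_today")
```

Keys absent from `cached_data` are simply skipped, so the helper is safe to call for base keys whose mean/median pair has not been computed yet.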

View file

@@ -17,10 +17,78 @@ if TYPE_CHECKING:
         TibberPricesDataUpdateCoordinator,
     )
     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from custom_components.tibber_prices.data import TibberPricesConfigEntry

+from .helpers import add_alternate_average_attribute
 from .metadata import get_current_interval_data


+def _get_interval_data_for_attributes(
+    key: str,
+    coordinator: TibberPricesDataUpdateCoordinator,
+    attributes: dict,
+    *,
+    time: TibberPricesTimeService,
+) -> dict | None:
+    """
+    Get interval data and set timestamp based on sensor type.
+
+    Refactored to reduce branch complexity in main function.
+
+    Args:
+        key: The sensor entity key
+        coordinator: The data update coordinator
+        attributes: Attributes dict to update with timestamp if needed
+        time: TibberPricesTimeService instance
+
+    Returns:
+        Interval data if found, None otherwise
+
+    """
+    now = time.now()
+
+    # Current/next price sensors - override timestamp with interval's startsAt
+    next_sensors = ["next_interval_price", "next_interval_price_level", "next_interval_price_rating"]
+    prev_sensors = ["previous_interval_price", "previous_interval_price_level", "previous_interval_price_rating"]
+    next_hour = ["next_hour_average_price", "next_hour_price_level", "next_hour_price_rating"]
+    curr_interval = ["current_interval_price", "current_interval_price_base"]
+    curr_hour = ["current_hour_average_price", "current_hour_price_level", "current_hour_price_rating"]
+
+    if key in next_sensors:
+        target_time = time.get_next_interval_start()
+        interval_data = find_price_data_for_interval(coordinator.data, target_time, time=time)
+        if interval_data:
+            attributes["timestamp"] = interval_data["startsAt"]
+        return interval_data
+
+    if key in prev_sensors:
+        target_time = time.get_interval_offset_time(-1)
+        interval_data = find_price_data_for_interval(coordinator.data, target_time, time=time)
+        if interval_data:
+            attributes["timestamp"] = interval_data["startsAt"]
+        return interval_data
+
+    if key in next_hour:
+        target_time = now + timedelta(hours=1)
+        interval_data = find_price_data_for_interval(coordinator.data, target_time, time=time)
+        if interval_data:
+            attributes["timestamp"] = interval_data["startsAt"]
+        return interval_data
+
+    # Current interval sensors (both variants)
+    if key in curr_interval:
+        interval_data = get_current_interval_data(coordinator, time=time)
+        if interval_data and "startsAt" in interval_data:
+            attributes["timestamp"] = interval_data["startsAt"]
+        return interval_data
+
+    # Current hour sensors - keep default timestamp
+    if key in curr_hour:
+        return get_current_interval_data(coordinator, time=time)
+
+    return None
+
+
 def add_current_interval_price_attributes(  # noqa: PLR0913
     attributes: dict,
     key: str,
@@ -29,6 +97,7 @@ def add_current_interval_price_attributes(  # noqa: PLR0913
     cached_data: dict,
     *,
     time: TibberPricesTimeService,
+    config_entry: TibberPricesConfigEntry,
 ) -> None:
     """
     Add attributes for current interval price sensors.
@@ -40,65 +109,19 @@ def add_current_interval_price_attributes(  # noqa: PLR0913
         native_value: The current native value of the sensor
         cached_data: Dictionary containing cached sensor data
         time: TibberPricesTimeService instance (required)
+        config_entry: Config entry for user preferences

     """
-    price_info = coordinator.data.get("priceInfo", {}) if coordinator.data else {}
-    now = time.now()
-
-    # Determine which interval to use based on sensor type
-    next_interval_sensors = [
-        "next_interval_price",
-        "next_interval_price_level",
-        "next_interval_price_rating",
-    ]
-    previous_interval_sensors = [
-        "previous_interval_price",
-        "previous_interval_price_level",
-        "previous_interval_price_rating",
-    ]
-    next_hour_sensors = [
-        "next_hour_average_price",
-        "next_hour_price_level",
-        "next_hour_price_rating",
-    ]
-    current_hour_sensors = [
-        "current_hour_average_price",
-        "current_hour_price_level",
-        "current_hour_price_rating",
-    ]
-
-    # Set interval data based on sensor type
-    # For sensors showing data from OTHER intervals (next/previous), override timestamp with that interval's startsAt
-    # For current interval sensors, keep the default platform timestamp (calculation time)
-    interval_data = None
-    if key in next_interval_sensors:
-        target_time = time.get_next_interval_start()
-        interval_data = find_price_data_for_interval(price_info, target_time, time=time)
-        # Override timestamp with the NEXT interval's startsAt (when that interval starts)
-        if interval_data:
-            attributes["timestamp"] = interval_data["startsAt"]
-    elif key in previous_interval_sensors:
-        target_time = time.get_interval_offset_time(-1)
-        interval_data = find_price_data_for_interval(price_info, target_time, time=time)
-        # Override timestamp with the PREVIOUS interval's startsAt
-        if interval_data:
-            attributes["timestamp"] = interval_data["startsAt"]
-    elif key in next_hour_sensors:
-        target_time = now + timedelta(hours=1)
-        interval_data = find_price_data_for_interval(price_info, target_time, time=time)
-        # Override timestamp with the center of the next rolling hour window
-        if interval_data:
-            attributes["timestamp"] = interval_data["startsAt"]
-    elif key in current_hour_sensors:
-        current_interval_data = get_current_interval_data(coordinator, time=time)
-        # Keep default timestamp (when calculation was made) for current hour sensors
-    else:
-        current_interval_data = get_current_interval_data(coordinator, time=time)
-        interval_data = current_interval_data  # Use current_interval_data as interval_data for current_interval_price
-        # Keep default timestamp (current calculation time) for current interval sensors
+    # Get interval data and handle timestamp overrides
+    interval_data = _get_interval_data_for_attributes(key, coordinator, attributes, time=time)

     # Add icon_color for price sensors (based on their price level)
-    if key in ["current_interval_price", "next_interval_price", "previous_interval_price"]:
+    if key in [
+        "current_interval_price",
+        "current_interval_price_base",
+        "next_interval_price",
+        "previous_interval_price",
+    ]:
         # For interval-based price sensors, get level from interval_data
         if interval_data and "level" in interval_data:
             level = interval_data["level"]
@@ -109,6 +132,15 @@ def add_current_interval_price_attributes(  # noqa: PLR0913
         if level:
             add_icon_color_attribute(attributes, key="price_level", state_value=level)

+    # Add alternate average attribute for rolling hour average price sensors
+    base_key = "rolling_hour_0" if key == "current_hour_average_price" else "rolling_hour_1"
+    add_alternate_average_attribute(
+        attributes,
+        cached_data,
+        base_key,
+        config_entry=config_entry,
+    )
+
     # Add price level attributes for all level sensors
     add_level_attributes_for_sensor(
         attributes=attributes,

View file

@ -1,4 +1,24 @@
"""Attribute builders for lifecycle diagnostic sensor.""" """
Attribute builders for lifecycle diagnostic sensor.
This sensor uses event-based updates with state-change filtering to minimize
recorder entries. Only attributes that are relevant to the lifecycle STATE
are included here - attributes that change independently of state belong
in a separate sensor or diagnostics.
Included attributes (update only on state change):
- tomorrow_available: Whether tomorrow's price data is available
- next_api_poll: When the next API poll will occur (builds user trust)
- updates_today: Number of API calls made today
- last_turnover: When the last midnight turnover occurred
- last_error: Details of the last error (if any)
Pool statistics (sensor_intervals_count, cache_fill_percent, etc.) are
intentionally NOT included here because they change independently of
the lifecycle state. With state-change filtering, these would become
stale. Pool statistics are available via diagnostics or could be
exposed as a separate sensor if needed.
"""
from __future__ import annotations from __future__ import annotations
@ -13,11 +33,6 @@ if TYPE_CHECKING:
) )
# Constants for cache age formatting
MINUTES_PER_HOUR = 60
MINUTES_PER_DAY = 1440 # 24 * 60
def build_lifecycle_attributes( def build_lifecycle_attributes(
coordinator: TibberPricesDataUpdateCoordinator, coordinator: TibberPricesDataUpdateCoordinator,
lifecycle_calculator: TibberPricesLifecycleCalculator, lifecycle_calculator: TibberPricesLifecycleCalculator,
@ -25,7 +40,11 @@ def build_lifecycle_attributes(
""" """
Build attributes for data_lifecycle_status sensor. Build attributes for data_lifecycle_status sensor.
Shows comprehensive cache status, data availability, and update timing. Event-based updates with state-change filtering - attributes only update
when the lifecycle STATE changes (freshcached, cachedturnover_pending, etc.).
Only includes attributes that are directly relevant to the lifecycle state.
Pool statistics are intentionally excluded to avoid stale data.
Returns: Returns:
Dict with lifecycle attributes Dict with lifecycle attributes
@@ -33,60 +52,31 @@ def build_lifecycle_attributes(
     """
     attributes: dict[str, Any] = {}

-    # Cache Status (formatted for readability)
-    cache_age = lifecycle_calculator.get_cache_age_minutes()
-    if cache_age is not None:
-        # Format cache age with units for better readability
-        if cache_age < MINUTES_PER_HOUR:
-            attributes["cache_age"] = f"{cache_age} min"
-        elif cache_age < MINUTES_PER_DAY:  # Less than 24 hours
-            hours = cache_age // MINUTES_PER_HOUR
-            minutes = cache_age % MINUTES_PER_HOUR
-            attributes["cache_age"] = f"{hours}h {minutes}min" if minutes > 0 else f"{hours}h"
-        else:  # 24+ hours
-            days = cache_age // MINUTES_PER_DAY
-            hours = (cache_age % MINUTES_PER_DAY) // MINUTES_PER_HOUR
-            attributes["cache_age"] = f"{days}d {hours}h" if hours > 0 else f"{days}d"
-        # Keep raw value for automations
-        attributes["cache_age_minutes"] = cache_age
-
-    cache_validity = lifecycle_calculator.get_cache_validity_status()
-    attributes["cache_validity"] = cache_validity
-    if coordinator._last_price_update:  # noqa: SLF001 - Internal state access for diagnostic display
-        attributes["last_api_fetch"] = coordinator._last_price_update.isoformat()  # noqa: SLF001
-        attributes["last_cache_update"] = coordinator._last_price_update.isoformat()  # noqa: SLF001
-
-    # Data Availability & Completeness
-    data_completeness = lifecycle_calculator.get_data_completeness_status()
-    attributes["data_completeness"] = data_completeness
-    attributes["yesterday_available"] = lifecycle_calculator.is_data_available("yesterday")
-    attributes["today_available"] = lifecycle_calculator.is_data_available("today")
-    attributes["tomorrow_available"] = lifecycle_calculator.is_data_available("tomorrow")
-    attributes["tomorrow_expected_after"] = "13:00"
-
-    # Next Actions (only show if meaningful)
+    # === Tomorrow Data Status ===
+    # Critical for understanding lifecycle state transitions
+    attributes["tomorrow_available"] = lifecycle_calculator.has_tomorrow_data()
+
+    # === Next API Poll Time ===
+    # Builds user trust: shows when the integration will check for tomorrow data
+    # - Before 13:00: Shows today 13:00 (when tomorrow-search begins)
+    # - After 13:00 without tomorrow data: Shows next Timer #1 execution (active polling)
+    # - After 13:00 with tomorrow data: Shows tomorrow 13:00 (predictive)
     next_poll = lifecycle_calculator.get_next_api_poll_time()
-    if next_poll:  # None means data is complete, no more polls needed
+    if next_poll:
         attributes["next_api_poll"] = next_poll.isoformat()

-    next_tomorrow_check = lifecycle_calculator.get_next_tomorrow_check_time()
-    if next_tomorrow_check:
-        attributes["next_tomorrow_check"] = next_tomorrow_check.isoformat()
-
-    next_midnight = lifecycle_calculator.get_next_midnight_turnover_time()
-    attributes["next_midnight_turnover"] = next_midnight.isoformat()
-
-    # Update Statistics
+    # === Update Statistics ===
+    # Shows API activity - resets at midnight with turnover
     api_calls = lifecycle_calculator.get_api_calls_today()
     attributes["updates_today"] = api_calls

-    if coordinator._last_actual_turnover:  # noqa: SLF001 - Internal state access for diagnostic display
-        attributes["last_turnover"] = coordinator._last_actual_turnover.isoformat()  # noqa: SLF001
+    # === Midnight Turnover Info ===
+    # When was the last successful data rotation
+    if coordinator._midnight_handler.last_turnover_time:  # noqa: SLF001
+        attributes["last_turnover"] = coordinator._midnight_handler.last_turnover_time.isoformat()  # noqa: SLF001

-    # Last Error (if any)
+    # === Error Status ===
+    # Present only when there's an active error
     if coordinator.last_exception:
        attributes["last_error"] = str(coordinator.last_exception)
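The removed cache-age block above bucketed a minute count into human-readable strings ("45 min", "1h 30min", "2d"). A standalone sketch of that removed logic, with a hypothetical helper name:

```python
MINUTES_PER_HOUR = 60
MINUTES_PER_DAY = 1440  # 24 * 60


def format_cache_age(cache_age: int) -> str:
    """Format a cache age in minutes as a human-readable string."""
    if cache_age < MINUTES_PER_HOUR:
        return f"{cache_age} min"
    if cache_age < MINUTES_PER_DAY:  # Less than 24 hours
        hours = cache_age // MINUTES_PER_HOUR
        minutes = cache_age % MINUTES_PER_HOUR
        return f"{hours}h {minutes}min" if minutes > 0 else f"{hours}h"
    days = cache_age // MINUTES_PER_DAY  # 24+ hours
    hours = (cache_age % MINUTES_PER_DAY) // MINUTES_PER_HOUR
    return f"{days}d {hours}h" if hours > 0 else f"{days}d"
```

The raw minute value was kept alongside the formatted string so automations could compare numbers rather than parse text.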
@@ -32,7 +32,6 @@ def get_current_interval_data(
     if not coordinator.data:
         return None

-    price_info = coordinator.data.get("priceInfo", {})
     now = time.now()
-    return find_price_data_for_interval(price_info, now, time=time)
+    return find_price_data_for_interval(coordinator.data, now, time=time)
@@ -13,6 +13,17 @@ if TYPE_CHECKING:
 TIMER_30_SEC_BOUNDARY = 30


+def _hours_to_minutes(state_value: Any) -> int | None:
+    """Convert hour-based state back to rounded minutes for attributes."""
+    if state_value is None:
+        return None
+    try:
+        return round(float(state_value) * 60)
+    except (TypeError, ValueError):
+        return None
+
+
 def _is_timing_or_volatility_sensor(key: str) -> bool:
     """Check if sensor is a timing or volatility sensor."""
     return key.endswith("_volatility") or (
@@ -69,5 +80,16 @@ def add_period_timing_attributes(
         attributes["timestamp"] = timestamp

+    # Add minute-precision attributes for hour-based states to keep automation-friendly values
+    minute_value = _hours_to_minutes(state_value)
+    if minute_value is not None:
+        if key.endswith("period_duration"):
+            attributes["period_duration_minutes"] = minute_value
+        elif key.endswith("remaining_minutes"):
+            attributes["remaining_minutes"] = minute_value
+        elif key.endswith("next_in_minutes"):
+            attributes["next_in_minutes"] = minute_value
+
     # Add icon_color for dynamic styling
     add_icon_color_attribute(attributes, key=key, state_value=state_value)
@@ -5,6 +5,7 @@ from __future__ import annotations
 from datetime import timedelta
 from typing import TYPE_CHECKING

+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
 from custom_components.tibber_prices.utils.price import calculate_volatility_level

 if TYPE_CHECKING:
@@ -32,7 +33,7 @@ def add_volatility_attributes(

 def get_prices_for_volatility(
     volatility_type: str,
-    price_info: dict,
+    coordinator_data: dict,
     *,
     time: TibberPricesTimeService,
 ) -> list[float]:
@@ -41,18 +42,33 @@ def get_prices_for_volatility(
     Args:
         volatility_type: One of "today", "tomorrow", "next_24h", "today_tomorrow"
-        price_info: Price information dictionary from coordinator data
+        coordinator_data: Coordinator data dict
         time: TibberPricesTimeService instance (required)

     Returns:
         List of prices to analyze

     """
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_intervals = get_intervals_for_day_offsets(coordinator_data, [-1, 0, 1])
+
     if volatility_type == "today":
-        return [float(p["total"]) for p in price_info.get("today", []) if "total" in p]
+        # Filter for today's intervals
+        today_date = time.now().date()
+        return [
+            float(p["total"])
+            for p in all_intervals
+            if "total" in p and p.get("startsAt") and p["startsAt"].date() == today_date
+        ]

     if volatility_type == "tomorrow":
-        return [float(p["total"]) for p in price_info.get("tomorrow", []) if "total" in p]
+        # Filter for tomorrow's intervals
+        tomorrow_date = (time.now() + timedelta(days=1)).date()
+        return [
+            float(p["total"])
+            for p in all_intervals
+            if "total" in p and p.get("startsAt") and p["startsAt"].date() == tomorrow_date
+        ]

     if volatility_type == "next_24h":
         # Rolling 24h from now
@@ -60,23 +76,24 @@ def get_prices_for_volatility(
         end_time = now + timedelta(hours=24)
         prices = []
-        for day_key in ["today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
-                starts_at = price_data.get("startsAt")  # Already datetime in local timezone
-                if starts_at is None:
-                    continue
-                if time.is_in_future(starts_at) and starts_at < end_time and "total" in price_data:
-                    prices.append(float(price_data["total"]))
+        for price_data in all_intervals:
+            starts_at = price_data.get("startsAt")  # Already datetime in local timezone
+            if starts_at is None:
+                continue
+            if time.is_in_future(starts_at) and starts_at < end_time and "total" in price_data:
+                prices.append(float(price_data["total"]))
         return prices

     if volatility_type == "today_tomorrow":
         # Combined today + tomorrow
+        today_date = time.now().date()
+        tomorrow_date = (time.now() + timedelta(days=1)).date()
         prices = []
-        for day_key in ["today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
-                if "total" in price_data:
-                    prices.append(float(price_data["total"]))
+        for price_data in all_intervals:
+            starts_at = price_data.get("startsAt")
+            if starts_at and starts_at.date() in [today_date, tomorrow_date] and "total" in price_data:
+                prices.append(float(price_data["total"]))
         return prices

     return []
@@ -85,7 +102,7 @@ def get_prices_for_volatility(

 def add_volatility_type_attributes(
     volatility_attributes: dict,
     volatility_type: str,
-    price_info: dict,
+    coordinator_data: dict,
     thresholds: dict,
     *,
     time: TibberPricesTimeService,
@@ -96,41 +113,51 @@ def add_volatility_type_attributes(
     Args:
         volatility_attributes: Dictionary to add type-specific attributes to
         volatility_type: Type of volatility calculation
-        price_info: Price information dictionary from coordinator data
+        coordinator_data: Coordinator data dict
         thresholds: Volatility thresholds configuration
         time: TibberPricesTimeService instance (required)

     """
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_intervals = get_intervals_for_day_offsets(coordinator_data, [-1, 0, 1])
+    now = time.now()
+    today_date = now.date()
+    tomorrow_date = (now + timedelta(days=1)).date()
+
     # Add timestamp for calendar day volatility sensors (midnight of the day)
     if volatility_type == "today":
-        today_data = price_info.get("today", [])
+        today_data = [p for p in all_intervals if p.get("startsAt") and p["startsAt"].date() == today_date]
         if today_data:
             volatility_attributes["timestamp"] = today_data[0].get("startsAt")
     elif volatility_type == "tomorrow":
-        tomorrow_data = price_info.get("tomorrow", [])
+        tomorrow_data = [p for p in all_intervals if p.get("startsAt") and p["startsAt"].date() == tomorrow_date]
         if tomorrow_data:
             volatility_attributes["timestamp"] = tomorrow_data[0].get("startsAt")
     elif volatility_type == "today_tomorrow":
         # For combined today+tomorrow, use today's midnight
-        today_data = price_info.get("today", [])
+        today_data = [p for p in all_intervals if p.get("startsAt") and p["startsAt"].date() == today_date]
         if today_data:
             volatility_attributes["timestamp"] = today_data[0].get("startsAt")

         # Add breakdown for today vs tomorrow
-        today_prices = [float(p["total"]) for p in price_info.get("today", []) if "total" in p]
-        tomorrow_prices = [float(p["total"]) for p in price_info.get("tomorrow", []) if "total" in p]
+        today_prices = [
+            float(p["total"])
+            for p in all_intervals
+            if "total" in p and p.get("startsAt") and p["startsAt"].date() == today_date
+        ]
+        tomorrow_prices = [
+            float(p["total"])
+            for p in all_intervals
+            if "total" in p and p.get("startsAt") and p["startsAt"].date() == tomorrow_date
+        ]
         if today_prices:
             today_vol = calculate_volatility_level(today_prices, **thresholds)
-            today_spread = (max(today_prices) - min(today_prices)) * 100
-            volatility_attributes["today_spread"] = round(today_spread, 2)
             volatility_attributes["today_volatility"] = today_vol
             volatility_attributes["interval_count_today"] = len(today_prices)
         if tomorrow_prices:
             tomorrow_vol = calculate_volatility_level(tomorrow_prices, **thresholds)
-            tomorrow_spread = (max(tomorrow_prices) - min(tomorrow_prices)) * 100
-            volatility_attributes["tomorrow_spread"] = round(tomorrow_spread, 2)
             volatility_attributes["tomorrow_volatility"] = tomorrow_vol
             volatility_attributes["interval_count_tomorrow"] = len(tomorrow_prices)
     elif volatility_type == "next_24h":
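The recurring pattern in this file replaces per-day dict lookups (`price_info.get("today", [])`) with filtering one flat interval list by local calendar date. A minimal standalone sketch of that filter, assuming (as the diff's comments state) that each interval's `startsAt` is already a datetime in the local timezone; `prices_for_day` is a hypothetical name, not the project's helper:

```python
from datetime import date, datetime, timedelta


def prices_for_day(intervals: list[dict], target: date) -> list[float]:
    """Filter a flat interval list down to one calendar day's prices."""
    return [
        float(p["total"])
        for p in intervals
        if "total" in p and p.get("startsAt") and p["startsAt"].date() == target
    ]


# Two quarter-hour intervals on consecutive days
intervals = [
    {"startsAt": datetime(2026, 3, 29, 0, 0), "total": 0.21},
    {"startsAt": datetime(2026, 3, 30, 0, 0), "total": 0.18},
]
today = date(2026, 3, 29)
tomorrow = today + timedelta(days=1)
```

Filtering by local date rather than by dict key means the same code works whether an interval originated in the "yesterday", "today", or "tomorrow" bucket.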
@@ -4,11 +4,16 @@ from __future__ import annotations

 from typing import TYPE_CHECKING

+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
+
 if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator.core import (
         TibberPricesDataUpdateCoordinator,
     )
     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from custom_components.tibber_prices.data import TibberPricesConfigEntry
+
+from .helpers import add_alternate_average_attribute


 def _update_extreme_interval(extreme_interval: dict | None, price_data: dict, key: str) -> dict:
@@ -38,12 +43,14 @@ def _update_extreme_interval(extreme_interval: dict | None, price_data: dict, ke
     return price_data if is_new_extreme else extreme_interval


-def add_average_price_attributes(
+def add_average_price_attributes(  # noqa: PLR0913
     attributes: dict,
     key: str,
     coordinator: TibberPricesDataUpdateCoordinator,
     *,
     time: TibberPricesTimeService,
+    cached_data: dict | None = None,
+    config_entry: TibberPricesConfigEntry | None = None,
 ) -> None:
     """
     Add attributes for trailing and leading average/min/max price sensors.
@@ -53,17 +60,15 @@ def add_average_price_attributes(
         key: The sensor entity key
         coordinator: The data update coordinator
         time: TibberPricesTimeService instance (required)
+        cached_data: Optional cached data dictionary for median values
+        config_entry: Optional config entry for user preferences

     """
     # Determine if this is trailing or leading
     is_trailing = "trailing" in key

-    # Get all price intervals
-    price_info = coordinator.data.get("priceInfo", {})
-    yesterday_prices = price_info.get("yesterday", [])
-    today_prices = price_info.get("today", [])
-    tomorrow_prices = price_info.get("tomorrow", [])
-    all_prices = yesterday_prices + today_prices + tomorrow_prices
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_prices = get_intervals_for_day_offsets(coordinator.data, [-1, 0, 1])

     if not all_prices:
         return
@@ -100,3 +105,13 @@ def add_average_price_attributes(
         attributes["timestamp"] = intervals_in_window[0].get("startsAt")
         attributes["interval_count"] = len(intervals_in_window)
+
+    # Add alternate average attribute for average sensors if available in cached_data
+    if cached_data and config_entry and "average" in key:
+        base_key = key.replace("_average", "")
+        add_alternate_average_attribute(
+            attributes,
+            cached_data,
+            base_key,
+            config_entry=config_entry,
+        )
@@ -4,6 +4,10 @@ from __future__ import annotations

 from typing import TYPE_CHECKING, Any

+from custom_components.tibber_prices.coordinator.helpers import (
+    get_intervals_for_day_offsets,
+)
+
 if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator import (
         TibberPricesDataUpdateCoordinator,
@@ -56,9 +60,9 @@ class TibberPricesBaseCalculator:
         return self._coordinator.data

     @property
-    def price_info(self) -> dict[str, Any]:
-        """Get price information from coordinator data."""
-        return self.coordinator_data.get("priceInfo", {})
+    def price_info(self) -> list[dict[str, Any]]:
+        """Get price info (intervals list) from coordinator data."""
+        return self.coordinator_data.get("priceInfo", [])

     @property
     def user_data(self) -> dict[str, Any]:
@@ -67,5 +71,116 @@ class TibberPricesBaseCalculator:
     @property
     def currency(self) -> str:
-        """Get currency code from price info."""
-        return self.price_info.get("currency", "EUR")
+        """Get currency code from coordinator data."""
+        return self.coordinator_data.get("currency", "EUR")
+
+    # Smart data access methods with built-in None-safety
+
+    def get_intervals(self, day_offset: int) -> list[dict]:
+        """
+        Get price intervals for a specific day with None-safety.
+
+        Uses get_intervals_for_day_offsets() to abstract data structure access.
+
+        Args:
+            day_offset: Day offset (-1=yesterday, 0=today, 1=tomorrow).
+
+        Returns:
+            List of interval dictionaries, empty list if unavailable.
+
+        """
+        if not self.coordinator_data:
+            return []
+        return get_intervals_for_day_offsets(self.coordinator_data, [day_offset])
+
+    @property
+    def intervals_today(self) -> list[dict]:
+        """Get today's intervals with None-safety."""
+        return self.get_intervals(0)
+
+    @property
+    def intervals_tomorrow(self) -> list[dict]:
+        """Get tomorrow's intervals with None-safety."""
+        return self.get_intervals(1)
+
+    @property
+    def intervals_yesterday(self) -> list[dict]:
+        """Get yesterday's intervals with None-safety."""
+        return self.get_intervals(-1)
+
+    def find_interval_at_offset(self, offset: int) -> dict | None:
+        """
+        Find interval at given offset from current time with bounds checking.
+
+        Args:
+            offset: Offset from current interval (0=current, 1=next, -1=previous).
+
+        Returns:
+            Interval dictionary or None if out of bounds or unavailable.
+
+        """
+        if not self.coordinator_data:
+            return None
+
+        from custom_components.tibber_prices.utils.price import (  # noqa: PLC0415 - avoid circular import
+            find_price_data_for_interval,
+        )
+
+        time = self.coordinator.time
+        target_time = time.get_interval_offset_time(offset)
+        return find_price_data_for_interval(self.coordinator.data, target_time, time=time)
+
+    def safe_get_from_interval(
+        self,
+        interval: dict[str, Any],
+        key: str,
+        default: Any = None,
+    ) -> Any:
+        """
+        Safely get a value from an interval dictionary.
+
+        Args:
+            interval: Interval dictionary.
+            key: Key to retrieve.
+            default: Default value if key not found.
+
+        Returns:
+            Value from interval or default.
+
+        """
+        return interval.get(key, default) if interval else default
+
+    def has_data(self) -> bool:
+        """
+        Check if coordinator has any data available.
+
+        Returns:
+            True if data is available, False otherwise.
+
+        """
+        return bool(self.coordinator_data)
+
+    def has_price_info(self) -> bool:
+        """
+        Check if price info is available in coordinator data.
+
+        Returns:
+            True if price info exists, False otherwise.
+
+        """
+        return bool(self.price_info)
+
+    def get_day_intervals(self, day_offset: int) -> list[dict]:
+        """
+        Get intervals for a specific day from coordinator data.
+
+        This is an alias for get_intervals() with consistent naming.
+
+        Args:
+            day_offset: Day offset (-1=yesterday, 0=today, 1=tomorrow).
+
+        Returns:
+            List of interval dictionaries, empty list if unavailable.
+
+        """
+        return self.get_intervals(day_offset)
@@ -49,8 +49,8 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
         self,
         *,
         day: str = "today",
-        stat_func: Callable[[list[float]], float],
-    ) -> float | None:
+        stat_func: Callable[[list[float]], float] | Callable[[list[float]], tuple[float, float | None]],
+    ) -> float | tuple[float, float | None] | None:
         """
         Unified method for daily statistics (min/max/avg within calendar day).
@@ -59,17 +59,17 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
         Args:
             day: "today" or "tomorrow" - which calendar day to calculate for.
-            stat_func: Statistical function (min, max, or lambda for avg).
+            stat_func: Statistical function (min, max, or lambda for avg/median).

         Returns:
-            Price value in minor currency units (cents/øre), or None if unavailable.
+            Price value in subunit currency units (cents/øre), or None if unavailable.
+            For average functions: tuple of (avg, median) where median may be None.
+            For min/max functions: single float value.

         """
-        if not self.coordinator_data:
+        if not self.has_data():
             return None

-        price_info = self.price_info
-
         # Get local midnight boundaries based on the requested day using TimeService
         time = self.coordinator.time
         local_midnight, local_midnight_next_day = time.get_day_boundaries(day)
@@ -77,8 +77,8 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
         # Collect all prices and their intervals from both today and tomorrow data
         # that fall within the target day's local date boundaries
         price_intervals = []
-        for day_key in ["today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
+        for day_offset in [0, 1]:  # today=0, tomorrow=1
+            for price_data in self.get_intervals(day_offset):
                 starts_at = price_data.get("startsAt")  # Already datetime in local timezone
                 if not starts_at:
                     continue
@@ -99,7 +99,25 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
         # Find the extreme value and store its interval for later use in attributes
         prices = [pi["price"] for pi in price_intervals]
-        value = stat_func(prices)
+        result = stat_func(prices)
+
+        # Check if result is a tuple (avg, median) from average functions
+        if isinstance(result, tuple):
+            value, median = result
+            # Store the interval (for avg, use first interval as reference)
+            if price_intervals:
+                self._last_extreme_interval = price_intervals[0]["interval"]
+            # Convert to display currency units based on config
+            avg_result = round(get_price_value(value, config_entry=self.coordinator.config_entry), 2)
+            median_result = (
+                round(get_price_value(median, config_entry=self.coordinator.config_entry), 2)
+                if median is not None
+                else None
+            )
+            return avg_result, median_result
+
+        # Single value result (min/max functions)
+        value = result

         # Store the interval with the extreme price for use in attributes
         for pi in price_intervals:
@@ -107,8 +125,8 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
                 self._last_extreme_interval = pi["interval"]
                 break

-        # Always return in minor currency units (cents/øre) with 2 decimals
-        result = get_price_value(value, in_euro=False)
+        # Return in configured display currency units with 2 decimals
+        result = get_price_value(value, config_entry=self.coordinator.config_entry)
         return round(result, 2)

     def get_daily_aggregated_value(
@@ -131,11 +149,9 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
             Aggregated level/rating value (lowercase), or None if unavailable.

         """
-        if not self.coordinator_data:
+        if not self.has_data():
             return None

-        price_info = self.price_info
-
         # Get local midnight boundaries based on the requested day using TimeService
         time = self.coordinator.time
         local_midnight, local_midnight_next_day = time.get_day_boundaries(day)
@@ -143,8 +159,8 @@ class TibberPricesDailyStatCalculator(TibberPricesBaseCalculator):
         # Collect all intervals from both today and tomorrow data
         # that fall within the target day's local date boundaries
         day_intervals = []
-        for day_key in ["yesterday", "today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
+        for day_offset in [-1, 0, 1]:  # yesterday=-1, today=0, tomorrow=1
+            for price_data in self.get_intervals(day_offset):
                 starts_at = price_data.get("startsAt")  # Already datetime in local timezone
                 if not starts_at:
                     continue
@@ -4,7 +4,7 @@ from __future__ import annotations

 from typing import TYPE_CHECKING

-from custom_components.tibber_prices.utils.price import find_price_data_for_interval
+from custom_components.tibber_prices.const import get_display_unit_factor

 from .base import TibberPricesBaseCalculator

@@ -36,7 +36,7 @@ class TibberPricesIntervalCalculator(TibberPricesBaseCalculator):
         self._last_rating_level: str | None = None
         self._last_rating_difference: float | None = None

-    def get_interval_value(
+    def get_interval_value(  # noqa: PLR0911
         self,
         *,
         interval_offset: int,
@@ -57,32 +57,31 @@ class TibberPricesIntervalCalculator(TibberPricesBaseCalculator):
             None if data unavailable.

         """
-        if not self.coordinator_data:
+        if not self.has_data():
             return None

-        price_info = self.price_info
-        time = self.coordinator.time
-
-        # Use TimeService to get interval offset time
-        target_time = time.get_interval_offset_time(interval_offset)
-        interval_data = find_price_data_for_interval(price_info, target_time, time=time)
+        interval_data = self.find_interval_at_offset(interval_offset)
         if not interval_data:
             return None

         # Extract value based on type
         if value_type == "price":
-            price = interval_data.get("total")
+            price = self.safe_get_from_interval(interval_data, "total")
             if price is None:
                 return None
             price = float(price)
-            return price if in_euro else round(price * 100, 2)
+            # Return in base currency if in_euro=True, otherwise in display unit
+            if in_euro:
+                return price
+            factor = get_display_unit_factor(self.config_entry)
+            return round(price * factor, 2)

         if value_type == "level":
-            level = interval_data.get("level")
+            level = self.safe_get_from_interval(interval_data, "level")
             return level.lower() if level else None

         # For rating: extract rating_level
-        rating = interval_data.get("rating_level")
+        rating = self.safe_get_from_interval(interval_data, "rating_level")
         return rating.lower() if rating else None

     def get_price_level_value(self) -> str | None:
@@ -117,19 +116,16 @@ class TibberPricesIntervalCalculator(TibberPricesBaseCalculator):
             Rating level (lowercase), or None if unavailable.

         """
-        if not self.coordinator_data or rating_type != "current":
+        if not self.has_data() or rating_type != "current":
             self._last_rating_difference = None
             self._last_rating_level = None
             return None

-        time = self.coordinator.time
-        now = time.now()
-        price_info = self.price_info
-        current_interval = find_price_data_for_interval(price_info, now, time=time)
+        current_interval = self.find_interval_at_offset(0)

         if current_interval:
-            rating_level = current_interval.get("rating_level")
-            difference = current_interval.get("difference")
+            rating_level = self.safe_get_from_interval(current_interval, "rating_level")
+            difference = self.safe_get_from_interval(current_interval, "difference")
             if rating_level is not None:
                 self._last_rating_difference = float(difference) if difference is not None else None
                 self._last_rating_level = rating_level
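The price branch above replaces the hard-coded `round(price * 100, 2)` with a configurable factor from `get_display_unit_factor()`. A standalone sketch of that conversion, assuming (as an illustration, not the project's actual factor values) that the factor is 100 for cents/øre display and 1 for base currency:

```python
def to_display_unit(price: float, factor: int) -> float:
    """Convert a base-currency price using a display-unit factor (e.g. 100 = cents/øre)."""
    # Rounding to 2 decimals matches the sensor's displayed precision
    return round(price * factor, 2)
```

The design choice here is that the factor comes from the config entry, so users can pick base currency or subunits per entry without touching the calculator code.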
@@ -2,11 +2,7 @@

 from __future__ import annotations

-from datetime import timedelta
-from typing import TYPE_CHECKING
-
-if TYPE_CHECKING:
-    from datetime import datetime
+from datetime import datetime, timedelta

 from custom_components.tibber_prices.coordinator.constants import UPDATE_INTERVAL

@@ -15,11 +11,7 @@ from .base import TibberPricesBaseCalculator

 # Constants for lifecycle state determination
 FRESH_DATA_THRESHOLD_MINUTES = 5  # Data is "fresh" within 5 minutes of API fetch
 TOMORROW_CHECK_HOUR = 13  # After 13:00, we actively check for tomorrow data
-TURNOVER_WARNING_SECONDS = 300  # Warn 5 minutes before midnight
-
-# Constants for 15-minute update boundaries (Timer #1)
-QUARTER_HOUR_BOUNDARIES = [0, 15, 30, 45]  # Minutes when Timer #1 can trigger
-LAST_HOUR_OF_DAY = 23
+TURNOVER_WARNING_SECONDS = 900  # Warn 15 minutes before midnight (last quarter-hour: 23:45-00:00)


 class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
@ -34,29 +26,31 @@ class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
- "fresh": Just fetched from API (within 5 minutes) - "fresh": Just fetched from API (within 5 minutes)
- "refreshing": Currently fetching data from API - "refreshing": Currently fetching data from API
- "searching_tomorrow": After 13:00, actively looking for tomorrow data - "searching_tomorrow": After 13:00, actively looking for tomorrow data
- "turnover_pending": Midnight is approaching (within 5 minutes) - "turnover_pending": Last interval of day (23:45-00:00, midnight approaching)
- "error": Last API call failed - "error": Last API call failed
Priority order (highest to lowest):
1. refreshing - Active operation has highest priority
2. error - Errors must be immediately visible
3. turnover_pending - Important event at 23:45, should stay visible
4. searching_tomorrow - Stable during search phase (13:00-~15:00)
5. fresh - Informational only, lowest priority among active states
6. cached - Default fallback
""" """
         coordinator = self.coordinator
         current_time = coordinator.time.now()
 
-        # Check if actively fetching
+        # Priority 1: Check if actively fetching (highest priority)
         if coordinator._is_fetching:  # noqa: SLF001 - Internal state access for lifecycle tracking
             return "refreshing"
 
-        # Check if last update failed
+        # Priority 2: Check if last update failed
         # If coordinator has last_exception set, the last fetch failed
         if coordinator.last_exception is not None:
             return "error"
 
-        # Check if data is fresh (within 5 minutes of last API fetch)
-        if coordinator._last_price_update:  # noqa: SLF001 - Internal state access for lifecycle tracking
-            age = current_time - coordinator._last_price_update  # noqa: SLF001
-            if age <= timedelta(minutes=FRESH_DATA_THRESHOLD_MINUTES):
-                return "fresh"
-
-        # Check if midnight turnover is pending (within 15 minutes)
+        # Priority 3: Check if midnight turnover is pending (last quarter of day: 23:45-00:00)
         midnight = coordinator.time.as_local(current_time).replace(
             hour=0, minute=0, second=0, microsecond=0
         ) + timedelta(days=1)
@@ -64,26 +58,22 @@ class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
         if 0 < time_to_midnight <= TURNOVER_WARNING_SECONDS:  # Within 15 minutes of midnight (23:45-00:00)
             return "turnover_pending"
 
-        # Check if we're in tomorrow data search mode (after 13:00 and tomorrow missing)
+        # Priority 4: Check if we're in tomorrow data search mode (after 13:00 and tomorrow missing)
+        # This should remain stable during the search phase, not flicker with "fresh" every 15 minutes
         now_local = coordinator.time.as_local(current_time)
-        if now_local.hour >= TOMORROW_CHECK_HOUR:
-            _, tomorrow_midnight = coordinator.time.get_day_boundaries("today")
-            tomorrow_date = tomorrow_midnight.date()
-            if coordinator._needs_tomorrow_data(tomorrow_date):  # noqa: SLF001 - Internal state access
-                return "searching_tomorrow"
+        if now_local.hour >= TOMORROW_CHECK_HOUR and coordinator._needs_tomorrow_data():  # noqa: SLF001 - Internal state access
+            return "searching_tomorrow"
 
-        # Default: using cached data
+        # Priority 5: Check if data is fresh (within 5 minutes of last API fetch)
+        # Lower priority than searching_tomorrow to avoid state flickering during search phase
+        if coordinator._last_price_update:  # noqa: SLF001 - Internal state access for lifecycle tracking
+            age = current_time - coordinator._last_price_update  # noqa: SLF001
+            if age <= timedelta(minutes=FRESH_DATA_THRESHOLD_MINUTES):
+                return "fresh"
+
+        # Priority 6: Default - using cached data
         return "cached"
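The priority ordering above is a first-match-wins chain. A standalone toy sketch (the `State` dataclass and its field names are illustrative stand-ins, not the integration's real coordinator) shows why `searching_tomorrow` must be checked before `fresh` — otherwise every 15-minute fetch during the search phase would flip the state to "fresh":

```python
from dataclasses import dataclass


@dataclass
class State:
    """Toy stand-in for the coordinator's internal flags (illustrative only)."""

    is_fetching: bool = False
    last_error: bool = False
    seconds_to_midnight: float = 3600.0
    hour_local: int = 12
    needs_tomorrow: bool = False
    data_age_minutes: float = 120.0


def lifecycle_status(s: State) -> str:
    # First match wins, mirroring the priority list in the docstring
    if s.is_fetching:
        return "refreshing"
    if s.last_error:
        return "error"
    if 0 < s.seconds_to_midnight <= 900:  # last quarter-hour of the day
        return "turnover_pending"
    if s.hour_local >= 13 and s.needs_tomorrow:
        return "searching_tomorrow"
    if s.data_age_minutes <= 5:
        return "fresh"
    return "cached"


# During the search phase the data was fetched a minute ago, yet the state
# stays stable because searching_tomorrow outranks fresh:
print(lifecycle_status(State(hour_local=14, needs_tomorrow=True, data_age_minutes=1.0)))
# → searching_tomorrow
```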
-    def get_cache_age_minutes(self) -> int | None:
-        """Calculate how many minutes old the cached data is."""
-        coordinator = self.coordinator
-
-        if not coordinator._last_price_update:  # noqa: SLF001 - Internal state access for lifecycle tracking
-            return None
-
-        age = coordinator.time.now() - coordinator._last_price_update  # noqa: SLF001
-        return int(age.total_seconds() / 60)
 
     def get_next_api_poll_time(self) -> datetime | None:
         """
         Calculate when the next API poll attempt will occur.
@@ -105,12 +95,31 @@ class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
         now_local = coordinator.time.as_local(current_time)
 
         # Check if tomorrow data is missing
-        _, tomorrow_midnight = coordinator.time.get_day_boundaries("today")
-        tomorrow_date = tomorrow_midnight.date()
-        tomorrow_missing = coordinator._needs_tomorrow_data(tomorrow_date)  # noqa: SLF001
+        tomorrow_missing = coordinator._needs_tomorrow_data()  # noqa: SLF001
+
+        # Get tomorrow date for time calculations
+        _, tomorrow_midnight = coordinator.time.get_day_boundaries("today")
 
-        # Case 1: Before 13:00 today - next poll is today at 13:00 (when tomorrow-search begins)
+        # Case 1: Before 13:00 today - next poll is today at 13:xx:xx (when tomorrow-search begins)
         if now_local.hour < TOMORROW_CHECK_HOUR:
+            # Calculate exact time based on Timer #1 offset (minute and second precision)
+            if coordinator._last_coordinator_update is not None:  # noqa: SLF001
+                last_update_local = coordinator.time.as_local(coordinator._last_coordinator_update)  # noqa: SLF001
+                # Timer offset: minutes + seconds past the quarter-hour
+                minutes_past_quarter = last_update_local.minute % 15
+                seconds_offset = last_update_local.second
+
+                # Calculate first timer execution at or after 13:00 today
+                # Just apply timer offset to 13:00 (first quarter-hour mark >= 13:00)
+                # Timer runs at X:04:37 → Next poll at 13:04:37
+                return now_local.replace(
+                    hour=TOMORROW_CHECK_HOUR,
+                    minute=minutes_past_quarter,
+                    second=seconds_offset,
+                    microsecond=0,
+                )
+
+            # Fallback: No timer history yet
             return now_local.replace(hour=TOMORROW_CHECK_HOUR, minute=0, second=0, microsecond=0)
 
         # Case 2: After 13:00 today AND tomorrow data missing - actively polling now
@@ -153,118 +162,6 @@ class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
         # Fallback: If we don't know timer offset yet, assume 13:00:00
         return tomorrow_13
-    def get_next_tomorrow_check_time(self) -> datetime | None:
-        """
-        Calculate when the next tomorrow data check will occur.
-
-        Returns None if not applicable (before 13:00 or tomorrow already available).
-        """
-        coordinator = self.coordinator
-        current_time = coordinator.time.now()
-        now_local = coordinator.time.as_local(current_time)
-
-        # Only relevant after 13:00
-        if now_local.hour < TOMORROW_CHECK_HOUR:
-            return None
-
-        # Only relevant if tomorrow data is missing
-        _, tomorrow_midnight = coordinator.time.get_day_boundaries("today")
-        tomorrow_date = tomorrow_midnight.date()
-        if not coordinator._needs_tomorrow_data(tomorrow_date):  # noqa: SLF001 - Internal state access
-            return None
-
-        # Next check = next regular API poll (same as get_next_api_poll_time)
-        return self.get_next_api_poll_time()
-
-    def get_next_midnight_turnover_time(self) -> datetime:
-        """Calculate when the next midnight turnover will occur."""
-        coordinator = self.coordinator
-        current_time = coordinator.time.now()
-        now_local = coordinator.time.as_local(current_time)
-
-        # Next midnight
-        return now_local.replace(hour=0, minute=0, second=0, microsecond=0) + timedelta(days=1)
-
-    def is_data_available(self, day: str) -> bool:
-        """
-        Check if data is available for a specific day.
-
-        Args:
-            day: "yesterday", "today", or "tomorrow"
-
-        Returns:
-            True if data exists and is not empty
-        """
-        coordinator = self.coordinator
-        if not coordinator.data:
-            return False
-
-        price_info = coordinator.data.get("priceInfo", {})
-        day_data = price_info.get(day, [])
-        return bool(day_data)
-
-    def get_data_completeness_status(self) -> str:
-        """
-        Get human-readable data completeness status.
-
-        Returns:
-            'complete': All data (yesterday/today/tomorrow) available
-            'missing_tomorrow': Only yesterday and today available
-            'missing_yesterday': Only today and tomorrow available
-            'partial': Only today or some other partial combination
-            'no_data': No data available at all
-        """
-        yesterday_available = self.is_data_available("yesterday")
-        today_available = self.is_data_available("today")
-        tomorrow_available = self.is_data_available("tomorrow")
-
-        if yesterday_available and today_available and tomorrow_available:
-            return "complete"
-        if yesterday_available and today_available and not tomorrow_available:
-            return "missing_tomorrow"
-        if not yesterday_available and today_available and tomorrow_available:
-            return "missing_yesterday"
-        if today_available:
-            return "partial"
-        return "no_data"
-
-    def get_cache_validity_status(self) -> str:
-        """
-        Get cache validity status.
-
-        Returns:
-            "valid": Cache is current and matches today's date
-            "stale": Cache exists but is outdated
-            "date_mismatch": Cache is from a different day
-            "empty": No cache data
-        """
-        coordinator = self.coordinator
-
-        # Check if coordinator has data (transformed, ready for entities)
-        if not coordinator.data:
-            return "empty"
-
-        # Check if we have price update timestamp
-        if not coordinator._last_price_update:  # noqa: SLF001 - Internal state access for lifecycle tracking
-            return "empty"
-
-        current_time = coordinator.time.now()
-        current_local_date = coordinator.time.as_local(current_time).date()
-        last_update_local_date = coordinator.time.as_local(coordinator._last_price_update).date()  # noqa: SLF001
-
-        if current_local_date != last_update_local_date:
-            return "date_mismatch"
-
-        # Check if cache is stale (older than expected)
-        age = current_time - coordinator._last_price_update  # noqa: SLF001
-
-        # Consider stale if older than 2 hours (8 * 15-minute intervals)
-        if age > timedelta(hours=2):
-            return "stale"
-
-        return "valid"
 
     def get_api_calls_today(self) -> int:
         """Get the number of API calls made today."""
         coordinator = self.coordinator
@@ -275,3 +172,13 @@ class TibberPricesLifecycleCalculator(TibberPricesBaseCalculator):
             return 0
         return coordinator._api_calls_today  # noqa: SLF001
+
+    def has_tomorrow_data(self) -> bool:
+        """
+        Check if tomorrow's price data is available.
+
+        Returns:
+            True if tomorrow data exists in the pool.
+        """
+        return not self.coordinator._needs_tomorrow_data()  # noqa: SLF001


@@ -8,10 +8,11 @@ from custom_components.tibber_prices.const import (
     DEFAULT_PRICE_RATING_THRESHOLD_HIGH,
     DEFAULT_PRICE_RATING_THRESHOLD_LOW,
 )
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
 from custom_components.tibber_prices.entity_utils import find_rolling_hour_center_index
 from custom_components.tibber_prices.sensor.helpers import (
+    aggregate_average_data,
     aggregate_level_data,
-    aggregate_price_data,
     aggregate_rating_data,
 )
@@ -31,7 +32,7 @@ class TibberPricesRollingHourCalculator(TibberPricesBaseCalculator):
         *,
         hour_offset: int = 0,
         value_type: str = "price",
-    ) -> str | float | None:
+    ) -> str | float | tuple[float | None, float | None] | None:
         """
         Unified method to get aggregated values from 5-interval rolling window.
@@ -43,17 +44,16 @@ class TibberPricesRollingHourCalculator(TibberPricesBaseCalculator):
         Returns:
             Aggregated value based on type:
-            - "price": float (average price in minor currency units)
+            - "price": float or tuple[float, float | None] (avg, median)
             - "level": str (aggregated level: "very_cheap", "cheap", etc.)
             - "rating": str (aggregated rating: "low", "normal", "high")
         """
-        if not self.coordinator_data:
+        if not self.has_data():
            return None
 
-        # Get all available price data
-        price_info = self.price_info
-        all_prices = price_info.get("yesterday", []) + price_info.get("today", []) + price_info.get("tomorrow", [])
+        # Get all available price data (yesterday, today, tomorrow)
+        all_prices = get_intervals_for_day_offsets(self.coordinator_data, [-1, 0, 1])
 
         if not all_prices:
             return None
@@ -81,7 +81,7 @@ class TibberPricesRollingHourCalculator(TibberPricesBaseCalculator):
         self,
         window_data: list[dict],
         value_type: str,
-    ) -> str | float | None:
+    ) -> str | float | tuple[float | None, float | None] | None:
         """
         Aggregate data from multiple intervals based on value type.
@@ -90,7 +90,10 @@ class TibberPricesRollingHourCalculator(TibberPricesBaseCalculator):
             value_type: "price" | "level" | "rating".
 
         Returns:
-            Aggregated value based on type.
+            Aggregated value based on type:
+            - "price": tuple[float, float | None] (avg, median)
+            - "level": str
+            - "rating": str
         """
         # Get thresholds from config for rating aggregation
@@ -103,9 +106,12 @@ class TibberPricesRollingHourCalculator(TibberPricesBaseCalculator):
             DEFAULT_PRICE_RATING_THRESHOLD_HIGH,
         )
 
-        # Map value types to aggregation functions
+        # Handle price aggregation - return tuple directly
+        if value_type == "price":
+            return aggregate_average_data(window_data, self.config_entry)
+
+        # Map other value types to aggregation functions
         aggregators = {
-            "price": lambda data: aggregate_price_data(data),
             "level": lambda data: aggregate_level_data(data),
             "rating": lambda data: aggregate_rating_data(data, threshold_low, threshold_high),
         }
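The price branch is pulled out of the dispatch map above because its return type differs: the aggregate is now a `(mean, median)` tuple while level/rating remain strings. A self-contained sketch with stand-in aggregators (the real `aggregate_average_data` and friends take coordinator/config objects, which are omitted here):

```python
from statistics import mean, median


def aggregate_average(window: list[dict]) -> tuple[float, float]:
    """Stand-in for aggregate_average_data: (mean, median) of 'total'."""
    prices = [float(i["total"]) for i in window]
    return mean(prices), median(prices)


def aggregate_level(window: list[dict]) -> str:
    """Stand-in: most frequent level wins."""
    levels = [i["level"] for i in window]
    return max(set(levels), key=levels.count)


def aggregate(window: list[dict], value_type: str):
    # Price is handled before the dispatch map because its return type differs
    if value_type == "price":
        return aggregate_average(window)
    aggregators = {"level": aggregate_level}
    return aggregators[value_type](window)


window = [
    {"total": 0.20, "level": "cheap"},
    {"total": 0.30, "level": "cheap"},
    {"total": 0.40, "level": "normal"},
]
print(aggregate(window, "price"))
print(aggregate(window, "level"))  # → cheap
```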


@@ -65,11 +65,11 @@ class TibberPricesTimingCalculator(TibberPricesBaseCalculator):
         - None if no relevant period data available
         """
-        if not self.coordinator.data:
+        if not self.has_data():
             return None
 
         # Get period data from coordinator
-        periods_data = self.coordinator.data.get("periods", {})
+        periods_data = self.coordinator_data.get("pricePeriods", {})
         period_data = periods_data.get(period_type)
 
         if not period_data or not period_data.get("periods"):
@@ -158,7 +158,8 @@ class TibberPricesTimingCalculator(TibberPricesBaseCalculator):
         Args:
             periods: List of period dictionaries
-            skip_current: If True, skip the first future period (to get next-next)
+            skip_current: If True, try to skip the first future period (to get next-next)
+                If only one future period exists, return it anyway (pragmatic fallback)
 
         Returns:
             Next period dict or None if no future periods
@@ -173,11 +174,13 @@ class TibberPricesTimingCalculator(TibberPricesBaseCalculator):
         # Sort by start time to ensure correct order
         future_periods.sort(key=lambda p: p["start"])
 
-        # Return second period if skip_current=True (next-next), otherwise first (next)
+        # If skip_current requested and we have multiple periods, return second
+        # If only one period left, return it anyway (pragmatic: better than showing unknown)
         if skip_current and len(future_periods) > 1:
             return future_periods[1]
-        if not skip_current and future_periods:
-            return future_periods[0]
-        return None
+
+        # Default: return first future period
+        return future_periods[0] if future_periods else None


@@ -15,7 +15,9 @@ Caching strategy:
 from datetime import datetime
 from typing import TYPE_CHECKING, Any
 
-from custom_components.tibber_prices.utils.average import calculate_next_n_hours_avg
+from custom_components.tibber_prices.const import get_display_unit_factor
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
+from custom_components.tibber_prices.utils.average import calculate_mean, calculate_next_n_hours_mean
 from custom_components.tibber_prices.utils.price import (
     calculate_price_trend,
     find_price_data_for_interval,
@@ -78,7 +80,7 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         if self._cached_trend_value is not None and self._trend_attributes:
             return self._cached_trend_value
 
-        if not self.coordinator.data:
+        if not self.has_data():
             return None
 
         # Get current interval price and timestamp
@@ -95,30 +97,33 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         # Get next interval timestamp (basis for calculation)
         next_interval_start = time.get_next_interval_start()
 
-        # Get future average price
-        future_avg = calculate_next_n_hours_avg(self.coordinator.data, hours, time=self.coordinator.time)
-        if future_avg is None:
+        # Get future mean price (ignore median for trend calculation)
+        future_mean, _ = calculate_next_n_hours_mean(self.coordinator.data, hours, time=self.coordinator.time)
+        if future_mean is None:
             return None
 
         # Get configured thresholds from options
         threshold_rising = self.config.get("price_trend_threshold_rising", 5.0)
         threshold_falling = self.config.get("price_trend_threshold_falling", -5.0)
+        threshold_strongly_rising = self.config.get("price_trend_threshold_strongly_rising", 6.0)
+        threshold_strongly_falling = self.config.get("price_trend_threshold_strongly_falling", -6.0)
         volatility_threshold_moderate = self.config.get("volatility_threshold_moderate", 15.0)
         volatility_threshold_high = self.config.get("volatility_threshold_high", 30.0)
 
         # Prepare data for volatility-adaptive thresholds
-        price_info = self.coordinator.data.get("priceInfo", {})
-        today_prices = price_info.get("today", [])
-        tomorrow_prices = price_info.get("tomorrow", [])
+        today_prices = self.intervals_today
+        tomorrow_prices = self.intervals_tomorrow
         all_intervals = today_prices + tomorrow_prices
         lookahead_intervals = self.coordinator.time.minutes_to_intervals(hours * 60)
 
         # Calculate trend with volatility-adaptive thresholds
-        trend_state, diff_pct = calculate_price_trend(
+        trend_state, diff_pct, trend_value = calculate_price_trend(
             current_interval_price,
-            future_avg,
+            future_mean,
             threshold_rising=threshold_rising,
             threshold_falling=threshold_falling,
+            threshold_strongly_rising=threshold_strongly_rising,
+            threshold_strongly_falling=threshold_strongly_falling,
             volatility_adjustment=True,  # Always enabled
             lookahead_intervals=lookahead_intervals,
             all_intervals=all_intervals,
@@ -126,18 +131,25 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
             volatility_threshold_high=volatility_threshold_high,
         )
 
-        # Determine icon color based on trend state
+        # Determine icon color based on trend state (5-level scale)
+        # Strongly rising/falling uses more intense colors
         icon_color = {
-            "rising": "var(--error-color)",  # Red/Orange for rising prices (expensive)
-            "falling": "var(--success-color)",  # Green for falling prices (cheaper)
-            "stable": "var(--state-icon-color)",  # Default gray for stable prices
+            "strongly_rising": "var(--error-color)",  # Red for strongly rising (very expensive)
+            "rising": "var(--warning-color)",  # Orange/Yellow for rising prices
+            "stable": "var(--state-icon-color)",  # Default gray for stable prices
+            "falling": "var(--success-color)",  # Green for falling prices (cheaper)
+            "strongly_falling": "var(--success-color)",  # Green for strongly falling (great deal)
         }.get(trend_state, "var(--state-icon-color)")
 
+        # Convert prices to display currency unit based on configuration
+        factor = get_display_unit_factor(self.config_entry)
+
         # Store attributes in sensor-specific dictionary AND cache the trend value
         self._trend_attributes = {
             "timestamp": next_interval_start,
+            "trend_value": trend_value,
             f"trend_{hours}h_%": round(diff_pct, 1),
-            f"next_{hours}h_avg": round(future_avg * 100, 2),
+            f"next_{hours}h_avg": round(future_mean * factor, 2),
             "interval_count": lookahead_intervals,
             "threshold_rising": threshold_rising,
             "threshold_falling": threshold_falling,
@@ -149,11 +161,13 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         # Get second half average for longer periods
         later_half_avg = self._calculate_later_half_average(hours, next_interval_start)
         if later_half_avg is not None:
-            self._trend_attributes[f"second_half_{hours}h_avg"] = round(later_half_avg * 100, 2)
+            self._trend_attributes[f"second_half_{hours}h_avg"] = round(later_half_avg * factor, 2)
 
             # Calculate incremental change: how much does the later half differ from current?
-            if current_interval_price > 0:
-                later_half_diff = ((later_half_avg - current_interval_price) / current_interval_price) * 100
+            # CRITICAL: Use abs() for negative prices and allow calculation for all non-zero prices
+            # Example: current=-10, later=-5 → diff=5, pct=5/abs(-10)*100=+50% (correctly shows increase)
+            if current_interval_price != 0:
+                later_half_diff = ((later_half_avg - current_interval_price) / abs(current_interval_price)) * 100
                 self._trend_attributes[f"second_half_{hours}h_diff_from_current_%"] = round(later_half_diff, 1)
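The `abs()` fix matters whenever spot prices go negative: dividing by a signed negative current price flips the sign of the percentage. A minimal check of both formulas with illustrative numbers:

```python
def pct_change_signed(current: float, later: float) -> float:
    """Direction-correct percentage change: divide by abs(current)."""
    return (later - current) / abs(current) * 100


# current=-10, later=-5: the price increased, so the change must be positive
print(pct_change_signed(-10.0, -5.0))  # → 50.0

# The old formula (dividing by the signed value) gets the direction wrong:
old = (-5.0 - -10.0) / -10.0 * 100
print(old)  # → -50.0, reports a drop although the price rose
```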
         # Cache the trend value for consistency
@@ -245,12 +259,11 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
             Average price for the later half intervals, or None if insufficient data
         """
-        if not self.coordinator.data:
+        if not self.has_data():
             return None
 
-        price_info = self.coordinator.data.get("priceInfo", {})
-        today_prices = price_info.get("today", [])
-        tomorrow_prices = price_info.get("tomorrow", [])
+        today_prices = self.intervals_today
+        tomorrow_prices = self.intervals_tomorrow
         all_prices = today_prices + tomorrow_prices
 
         if not all_prices:
@@ -277,7 +290,7 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
                 later_prices.append(float(price))
 
         if later_prices:
-            return sum(later_prices) / len(later_prices)
+            return calculate_mean(later_prices)
 
         return None
@@ -305,12 +318,11 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
             return self._trend_calculation_cache
 
         # Validate coordinator data
-        if not self.coordinator.data:
+        if not self.has_data():
             return None
 
-        price_info = self.coordinator.data.get("priceInfo", {})
-        all_intervals = price_info.get("today", []) + price_info.get("tomorrow", [])
-        current_interval = find_price_data_for_interval(price_info, now, time=time)
+        all_intervals = get_intervals_for_day_offsets(self.coordinator_data, [-1, 0, 1])
+        current_interval = find_price_data_for_interval(self.coordinator.data, now, time=time)
 
         if not all_intervals or not current_interval:
             return None
@@ -345,11 +357,11 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         # Combine momentum + future outlook to get ACTUAL current trend
         if len(future_intervals) >= min_intervals_for_trend and future_prices:
-            future_avg = sum(future_prices) / len(future_prices)
+            future_mean = calculate_mean(future_prices)
             current_trend_state = self._combine_momentum_with_future(
                 current_momentum=current_momentum,
                 current_price=current_price,
-                future_avg=future_avg,
+                future_mean=future_mean,
                 context={
                     "all_intervals": all_intervals,
                     "current_index": current_index,
@@ -410,6 +422,8 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         return {
             "rising": self.config.get("price_trend_threshold_rising", 5.0),
             "falling": self.config.get("price_trend_threshold_falling", -5.0),
+            "strongly_rising": self.config.get("price_trend_threshold_strongly_rising", 6.0),
+            "strongly_falling": self.config.get("price_trend_threshold_strongly_falling", -6.0),
             "moderate": self.config.get("volatility_threshold_moderate", 15.0),
             "high": self.config.get("volatility_threshold_high", 30.0),
         }
@@ -424,7 +438,7 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
             current_index: Index of current interval
 
         Returns:
-            Momentum direction: "rising", "falling", or "stable"
+            Momentum direction: "strongly_rising", "rising", "stable", "falling", or "strongly_falling"
         """
         # Look back 1 hour (4 intervals) for quick reaction
@@ -447,64 +461,91 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
         weighted_sum = sum(price * weight for price, weight in zip(trailing_prices, weights, strict=True))
         weighted_avg = weighted_sum / sum(weights)
 
-        # Calculate momentum with 3% threshold
+        # Calculate momentum with thresholds
+        # Using same logic as 5-level trend: 3% for normal, 6% (2x) for strong
         momentum_threshold = 0.03
-        diff = (current_price - weighted_avg) / weighted_avg
-
-        if diff > momentum_threshold:
-            return "rising"
-        if diff < -momentum_threshold:
-            return "falling"
-        return "stable"
+        strong_momentum_threshold = 0.06
+        diff = (current_price - weighted_avg) / abs(weighted_avg) if weighted_avg != 0 else 0
+
+        # Determine momentum level based on thresholds
+        if diff >= strong_momentum_threshold:
+            momentum = "strongly_rising"
+        elif diff > momentum_threshold:
+            momentum = "rising"
+        elif diff <= -strong_momentum_threshold:
+            momentum = "strongly_falling"
+        elif diff < -momentum_threshold:
+            momentum = "falling"
+        else:
+            momentum = "stable"
+
+        return momentum
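The 3% / 6% ladder above is a plain threshold classifier. Extracted as a standalone function (thresholds hard-coded as in the diff; the weighted-average computation is left out):

```python
def classify_momentum(current_price: float, weighted_avg: float) -> str:
    """5-level momentum: ±3% for normal moves, ±6% for strong moves."""
    momentum_threshold = 0.03
    strong_momentum_threshold = 0.06
    # abs() keeps the direction meaningful if the weighted average is negative
    diff = (current_price - weighted_avg) / abs(weighted_avg) if weighted_avg != 0 else 0
    if diff >= strong_momentum_threshold:
        return "strongly_rising"
    if diff > momentum_threshold:
        return "rising"
    if diff <= -strong_momentum_threshold:
        return "strongly_falling"
    if diff < -momentum_threshold:
        return "falling"
    return "stable"


print(classify_momentum(0.33, 0.30))  # +10% → strongly_rising
print(classify_momentum(0.31, 0.30))  # +3.3% → rising
print(classify_momentum(0.30, 0.30))  # 0% → stable
print(classify_momentum(0.27, 0.30))  # -10% → strongly_falling
```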
     def _combine_momentum_with_future(
         self,
         *,
         current_momentum: str,
         current_price: float,
-        future_avg: float,
+        future_mean: float,
         context: dict,
     ) -> str:
         """
         Combine momentum analysis with future outlook to determine final trend.
 
+        Uses 5-level scale: strongly_rising, rising, stable, falling, strongly_falling.
+        Momentum intensity is preserved when future confirms the trend direction.
+
         Args:
-            current_momentum: Current momentum direction (rising/falling/stable)
+            current_momentum: Current momentum direction (5-level scale)
             current_price: Current interval price
-            future_avg: Average price in future window
+            future_mean: Average price in future window
             context: Dict with all_intervals, current_index, lookahead_intervals, thresholds
 
         Returns:
-            Final trend direction: "rising", "falling", or "stable"
+            Final trend direction (5-level scale)
         """
-        if current_momentum == "rising":
-            # We're in uptrend - does it continue?
-            return "rising" if future_avg >= current_price * 0.98 else "falling"
-        if current_momentum == "falling":
-            # We're in downtrend - does it continue?
-            return "falling" if future_avg <= current_price * 1.02 else "rising"
-
-        # current_momentum == "stable" - what's coming?
+        # Use calculate_price_trend for consistency with 5-level logic
         all_intervals = context["all_intervals"]
         current_index = context["current_index"]
         lookahead_intervals = context["lookahead_intervals"]
         thresholds = context["thresholds"]
 
         lookahead_for_volatility = all_intervals[current_index : current_index + lookahead_intervals]
-        trend_state, _ = calculate_price_trend(
+        future_trend, _, _ = calculate_price_trend(
             current_price,
-            future_avg,
+            future_mean,
             threshold_rising=thresholds["rising"],
             threshold_falling=thresholds["falling"],
+            threshold_strongly_rising=thresholds["strongly_rising"],
+            threshold_strongly_falling=thresholds["strongly_falling"],
             volatility_adjustment=True,
             lookahead_intervals=lookahead_intervals,
             all_intervals=lookahead_for_volatility,
             volatility_threshold_moderate=thresholds["moderate"],
             volatility_threshold_high=thresholds["high"],
         )
-        return trend_state
+
+        # Check if momentum and future trend are aligned (same direction)
+        momentum_rising = current_momentum in ("rising", "strongly_rising")
+        momentum_falling = current_momentum in ("falling", "strongly_falling")
+        future_rising = future_trend in ("rising", "strongly_rising")
+        future_falling = future_trend in ("falling", "strongly_falling")
+
+        if momentum_rising and future_rising:
+            # Both indicate rising - use the stronger signal
+            if current_momentum == "strongly_rising" or future_trend == "strongly_rising":
+                return "strongly_rising"
+            return "rising"
+
+        if momentum_falling and future_falling:
+            # Both indicate falling - use the stronger signal
+            if current_momentum == "strongly_falling" or future_trend == "strongly_falling":
+                return "strongly_falling"
+            return "falling"
+
+        # Conflicting signals or stable momentum - trust future trend calculation
+        return future_trend
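The alignment logic above upgrades to the "strong" label only when momentum and future outlook agree in direction, and defers to the future-based calculation on any conflict. A condensed sketch of just that combination step (function name is illustrative):

```python
def combine(momentum: str, future_trend: str) -> str:
    """Keep the stronger label when momentum and future outlook agree."""
    rising = ("rising", "strongly_rising")
    falling = ("falling", "strongly_falling")
    if momentum in rising and future_trend in rising:
        return "strongly_rising" if "strongly_rising" in (momentum, future_trend) else "rising"
    if momentum in falling and future_trend in falling:
        return "strongly_falling" if "strongly_falling" in (momentum, future_trend) else "falling"
    # Conflicting signals or stable momentum: trust the future-based trend
    return future_trend


print(combine("strongly_rising", "rising"))   # → strongly_rising (agreement, stronger wins)
print(combine("rising", "falling"))           # → falling (conflict, future wins)
print(combine("stable", "strongly_falling"))  # → strongly_falling
```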
def _calculate_standard_trend( def _calculate_standard_trend(
self, self,
@ -526,15 +567,17 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
if not standard_future_prices:
return "stable"
standard_future_mean = calculate_mean(standard_future_prices)
current_price = float(current_interval["total"])
standard_lookahead_volatility = all_intervals[current_index : current_index + standard_lookahead]
current_trend_3h, _, _ = calculate_price_trend(
current_price,
standard_future_mean,
threshold_rising=thresholds["rising"],
threshold_falling=thresholds["falling"],
threshold_strongly_rising=thresholds["strongly_rising"],
threshold_strongly_falling=thresholds["strongly_falling"],
volatility_adjustment=True,
lookahead_intervals=standard_lookahead,
all_intervals=standard_lookahead_volatility,
@ -597,16 +640,18 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
if not future_prices:
continue
future_mean = calculate_mean(future_prices)
price = float(interval["total"])
# Calculate trend at this past point
lookahead_for_volatility = all_intervals[i : i + intervals_in_3h]
trend_state, _, _ = calculate_price_trend(
price,
future_mean,
threshold_rising=thresholds["rising"],
threshold_falling=thresholds["falling"],
threshold_strongly_rising=thresholds["strongly_rising"],
threshold_strongly_falling=thresholds["strongly_falling"],
volatility_adjustment=True,
lookahead_intervals=intervals_in_3h,
all_intervals=lookahead_for_volatility,
@ -669,16 +714,18 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
if not future_prices:
continue
future_mean = calculate_mean(future_prices)
current_price = float(interval["total"])
# Calculate trend at this future point
lookahead_for_volatility = all_intervals[i : i + intervals_in_3h]
trend_state, _, _ = calculate_price_trend(
current_price,
future_mean,
threshold_rising=thresholds["rising"],
threshold_falling=thresholds["falling"],
threshold_strongly_rising=thresholds["strongly_rising"],
threshold_strongly_falling=thresholds["strongly_falling"],
volatility_adjustment=True,
lookahead_intervals=intervals_in_3h,
all_intervals=lookahead_for_volatility,
@ -693,14 +740,17 @@ class TibberPricesTrendCalculator(TibberPricesBaseCalculator):
time = self.coordinator.time
minutes_until = int(time.minutes_until(interval_start))
# Convert prices to display currency unit
factor = get_display_unit_factor(self.config_entry)
self._trend_change_attributes = {
"direction": trend_state,
"from_direction": current_trend_state,
"minutes_until_change": minutes_until,
"current_price_now": round(float(current_interval["total"]) * factor, 2),
"price_at_change": round(current_price * factor, 2),
"avg_after_change": round(future_mean * factor, 2),
"trend_diff_%": round((future_mean - current_price) / current_price * 100, 1),
}
return interval_start return interval_start
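The attribute math above reduces to a unit conversion plus a relative difference. A minimal sketch (the function name and the default factor of 100 for subunit display are assumptions mirroring the diff, not the integration's API):

```python
def trend_change_attrs(current: float, price_at_change: float, future_mean: float, factor: float = 100.0) -> dict:
    """Build display attributes: prices scaled to the display unit, diff in percent.

    factor=100 converts base currency (EUR) to subunits (ct); factor=1 keeps base units.
    """
    return {
        "current_price_now": round(current * factor, 2),
        "price_at_change": round(price_at_change * factor, 2),
        "avg_after_change": round(future_mean * factor, 2),
        "trend_diff_%": round((future_mean - price_at_change) / price_at_change * 100, 1),
    }
```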

View file

@ -4,12 +4,22 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from custom_components.tibber_prices.const import (
CONF_VOLATILITY_THRESHOLD_HIGH,
CONF_VOLATILITY_THRESHOLD_MODERATE,
CONF_VOLATILITY_THRESHOLD_VERY_HIGH,
DEFAULT_VOLATILITY_THRESHOLD_HIGH,
DEFAULT_VOLATILITY_THRESHOLD_MODERATE,
DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH,
get_display_unit_factor,
)
from custom_components.tibber_prices.entity_utils import add_icon_color_attribute
from custom_components.tibber_prices.sensor.attributes import (
add_volatility_type_attributes,
get_prices_for_volatility,
)
from custom_components.tibber_prices.utils.average import calculate_mean
from custom_components.tibber_prices.utils.price import calculate_volatility_with_cv
from .base import TibberPricesBaseCalculator
@ -51,20 +61,28 @@ class TibberPricesVolatilityCalculator(TibberPricesBaseCalculator):
Volatility level: "low", "moderate", "high", "very_high", or None if unavailable.
"""
if not self.has_data():
return None
# Get volatility thresholds from config
thresholds = {
"threshold_moderate": self.config.get(
CONF_VOLATILITY_THRESHOLD_MODERATE,
DEFAULT_VOLATILITY_THRESHOLD_MODERATE,
),
"threshold_high": self.config.get(CONF_VOLATILITY_THRESHOLD_HIGH, DEFAULT_VOLATILITY_THRESHOLD_HIGH),
"threshold_very_high": self.config.get(
CONF_VOLATILITY_THRESHOLD_VERY_HIGH,
DEFAULT_VOLATILITY_THRESHOLD_VERY_HIGH,
),
}
# Get prices based on volatility type
prices_to_analyze = get_prices_for_volatility(
volatility_type,
self.coordinator.data,
time=self.coordinator.time,
)
if not prices_to_analyze:
return None
@ -73,21 +91,24 @@ class TibberPricesVolatilityCalculator(TibberPricesBaseCalculator):
price_min = min(prices_to_analyze)
price_max = max(prices_to_analyze)
spread = price_max - price_min
# Use arithmetic mean for volatility calculation (required for coefficient of variation)
price_mean = calculate_mean(prices_to_analyze)
# Convert to display currency unit based on configuration
factor = get_display_unit_factor(self.config_entry)
spread_display = spread * factor
# Calculate volatility level AND coefficient of variation
volatility, cv = calculate_volatility_with_cv(prices_to_analyze, **thresholds)
# Store attributes for this sensor
self._last_volatility_attributes = {
"price_spread": round(spread_display, 2),
"price_coefficient_variation_%": round(cv, 2) if cv is not None else None,
"price_volatility": volatility.lower(),
"price_min": round(price_min * factor, 2),
"price_max": round(price_max * factor, 2),
"price_mean": round(price_mean * factor, 2),
"interval_count": len(prices_to_analyze),
}
@ -96,7 +117,11 @@ class TibberPricesVolatilityCalculator(TibberPricesBaseCalculator):
# Add type-specific attributes
add_volatility_type_attributes(
self._last_volatility_attributes,
volatility_type,
self.coordinator.data,
thresholds,
time=self.coordinator.time,
)
# Return lowercase for ENUM device class # Return lowercase for ENUM device class
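The implementation of `calculate_volatility_with_cv` is not shown in this diff; a plausible sketch of a coefficient-of-variation classifier with the same threshold keywords (the body is an assumption, only the signature shape comes from the diff):

```python
from statistics import mean, pstdev


def volatility_with_cv(prices, *, threshold_moderate=5.0, threshold_high=15.0, threshold_very_high=30.0):
    """Return (level, cv) where cv = population stddev / mean * 100."""
    if not prices:
        return None, None
    avg = mean(prices)
    if avg == 0:
        return None, None  # CV is undefined for a zero mean
    cv = pstdev(prices) / avg * 100
    if cv >= threshold_very_high:
        return "very_high", cv
    if cv >= threshold_high:
        return "high", cv
    if cv >= threshold_moderate:
        return "moderate", cv
    return "low", cv
```

The CV makes the classification scale-invariant: a 5 ct spread counts as volatile on cheap days but calm on expensive ones, which a raw spread threshold cannot express.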

View file

@ -24,7 +24,7 @@ class TibberPricesWindow24hCalculator(TibberPricesBaseCalculator):
self,
*,
stat_func: Callable,
) -> float | tuple[float, float | None] | None:
"""
Unified method for 24-hour sliding window statistics.
@ -33,20 +33,38 @@ class TibberPricesWindow24hCalculator(TibberPricesBaseCalculator):
- "leading": Next 24 hours (96 intervals after current)
Args:
stat_func: Function from average_utils (e.g., calculate_current_trailing_mean).
Returns:
Price value in subunit currency units (cents/øre), or None if unavailable.
For mean functions: tuple of (mean, median) where median may be None.
For min/max functions: single float value.
"""
if not self.has_data():
return None
result = stat_func(self.coordinator_data, time=self.coordinator.time)
# Check if result is a tuple (mean, median) from mean functions
if isinstance(result, tuple):
value, median = result
if value is None:
return None
# Convert to display currency units based on config
mean_result = round(get_price_value(value, config_entry=self.coordinator.config_entry), 2)
median_result = (
round(get_price_value(median, config_entry=self.coordinator.config_entry), 2)
if median is not None
else None
)
return mean_result, median_result
# Single value result (min/max functions)
value = result
if value is None:
return None
# Return in configured display currency units with 2 decimals
result = get_price_value(value, config_entry=self.coordinator.config_entry)
return round(result, 2)
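The tuple-vs-scalar dispatch above can be isolated into a pure helper; a hedged sketch with a plain multiplication factor standing in for `get_price_value` (names are illustrative):

```python
def convert_window_result(result, factor: float = 100.0):
    """Normalize a stat result: (mean, median) tuples and plain floats both
    come back rounded in display units; None propagates unchanged."""
    def conv(v):
        return round(v * factor, 2) if v is not None else None

    if isinstance(result, tuple):
        mean_v, median_v = result
        if mean_v is None:
            return None  # no mean means no usable result at all
        return conv(mean_v), conv(median_v)
    return conv(result)
```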

View file

@ -38,6 +38,9 @@ async def call_chartdata_service_async(
# Add required entry_id parameter
service_params["entry_id"] = config_entry.entry_id
# Make sure metadata is never requested for this sensor
service_params["metadata"] = "none"
# Call get_chartdata service using official HA service system
try:
response = await hass.services.async_call(

View file

@ -0,0 +1,149 @@
"""Chart metadata export functionality for Tibber Prices sensors."""
from __future__ import annotations
from typing import TYPE_CHECKING
from custom_components.tibber_prices.const import (
CONF_CURRENCY_DISPLAY_MODE,
DATA_CHART_METADATA_CONFIG,
DISPLAY_MODE_SUBUNIT,
DOMAIN,
)
if TYPE_CHECKING:
from datetime import datetime
from custom_components.tibber_prices.coordinator import TibberPricesDataUpdateCoordinator
from custom_components.tibber_prices.data import TibberPricesConfigEntry
from homeassistant.core import HomeAssistant
async def call_chartdata_service_for_metadata_async(
hass: HomeAssistant,
coordinator: TibberPricesDataUpdateCoordinator,
config_entry: TibberPricesConfigEntry,
) -> tuple[dict | None, str | None]:
"""
Call get_chartdata service with configuration from configuration.yaml for metadata (async).
Returns:
Tuple of (response, error_message).
If successful: (response_dict, None)
If failed: (None, error_string)
"""
# Get configuration from hass.data (loaded from configuration.yaml)
domain_data = hass.data.get(DOMAIN, {})
chart_metadata_config = domain_data.get(DATA_CHART_METADATA_CONFIG, {})
# Use chart_metadata_config directly (already a dict from async_setup)
service_params = dict(chart_metadata_config) if chart_metadata_config else {}
# Add required entry_id parameter
service_params["entry_id"] = config_entry.entry_id
# Force metadata to "only" - this sensor ONLY provides metadata
service_params["metadata"] = "only"
# Use user's display unit preference from config_entry
# This ensures chart_metadata yaxis values match the user's configured currency display mode
if "subunit_currency" not in service_params:
display_mode = config_entry.options.get(CONF_CURRENCY_DISPLAY_MODE, DISPLAY_MODE_SUBUNIT)
service_params["subunit_currency"] = display_mode == DISPLAY_MODE_SUBUNIT
# Call get_chartdata service using official HA service system
try:
response = await hass.services.async_call(
DOMAIN,
"get_chartdata",
service_params,
blocking=True,
return_response=True,
)
except Exception as ex:
coordinator.logger.exception("Chart metadata service call failed")
return None, str(ex)
else:
return response, None
def get_chart_metadata_state(
chart_metadata_response: dict | None,
chart_metadata_error: str | None,
) -> str | None:
"""
Return state for chart_metadata sensor.
Args:
chart_metadata_response: Last service response (or None)
chart_metadata_error: Last error message (or None)
Returns:
"error" if error occurred
"ready" if metadata available
"pending" if no data yet
"""
if chart_metadata_error:
return "error"
if chart_metadata_response:
return "ready"
return "pending"
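The state precedence (error over ready over pending) is simple enough to verify in isolation; this standalone copy mirrors `get_chart_metadata_state` above:

```python
def chart_metadata_state(response, error):
    """Error wins over data; data wins over the initial pending state."""
    if error:
        return "error"
    if response:
        return "ready"
    return "pending"
```

Note that an empty response dict is falsy and therefore still reports `pending`, which matches the guard `if chart_metadata_response:` in the sensor code.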
def build_chart_metadata_attributes(
chart_metadata_response: dict | None,
chart_metadata_last_update: datetime | None,
chart_metadata_error: str | None,
) -> dict[str, object] | None:
"""
Return chart metadata from last service call as attributes.
Attribute order: timestamp, error (if any), metadata fields (at the end).
Args:
chart_metadata_response: Last service response (should contain "metadata" key)
chart_metadata_last_update: Timestamp of last update
chart_metadata_error: Error message if service call failed
Returns:
Dict with timestamp, optional error, and metadata fields.
"""
# Build base attributes with timestamp FIRST
attributes: dict[str, object] = {
"timestamp": chart_metadata_last_update,
}
# Add error message if service call failed
if chart_metadata_error:
attributes["error"] = chart_metadata_error
if not chart_metadata_response:
# No data - only timestamp (and error if present)
return attributes
# Extract metadata from response (get_chartdata returns {"metadata": {...}})
metadata = chart_metadata_response.get("metadata", {})
# Extract the fields we care about for charts
# These are the universal chart metadata fields useful for any chart card
if metadata:
yaxis_suggested = metadata.get("yaxis_suggested", {})
# Add yaxis bounds (useful for all chart cards)
if "min" in yaxis_suggested:
attributes["yaxis_min"] = yaxis_suggested["min"]
if "max" in yaxis_suggested:
attributes["yaxis_max"] = yaxis_suggested["max"]
# Add currency info (useful for labeling)
if "currency" in metadata:
attributes["currency"] = metadata["currency"]
# Add resolution info (interval duration in minutes)
if "resolution" in metadata:
attributes["resolution"] = metadata["resolution"]
return attributes

View file

@ -9,18 +9,26 @@ from custom_components.tibber_prices.binary_sensor.attributes import (
get_price_intervals_attributes,
)
from custom_components.tibber_prices.const import (
CONF_AVERAGE_SENSOR_DISPLAY,
CONF_CURRENCY_DISPLAY_MODE,
CONF_PRICE_RATING_THRESHOLD_HIGH,
CONF_PRICE_RATING_THRESHOLD_LOW,
DEFAULT_AVERAGE_SENSOR_DISPLAY,
DEFAULT_PRICE_RATING_THRESHOLD_HIGH,
DEFAULT_PRICE_RATING_THRESHOLD_LOW,
DISPLAY_MODE_BASE,
DOMAIN,
format_price_unit_base,
get_display_unit_factor,
get_display_unit_string,
)
from custom_components.tibber_prices.coordinator import (
MINUTE_UPDATE_ENTITY_KEYS,
TIME_SENSITIVE_ENTITY_KEYS,
)
from custom_components.tibber_prices.coordinator.helpers import (
get_intervals_for_day_offsets,
)
from custom_components.tibber_prices.entity import TibberPricesEntity
from custom_components.tibber_prices.entity_utils import (
add_icon_color_attribute,
@ -32,14 +40,14 @@ from custom_components.tibber_prices.entity_utils.icons import (
get_dynamic_icon,
)
from custom_components.tibber_prices.utils.average import (
calculate_next_n_hours_mean,
)
from custom_components.tibber_prices.utils.price import (
calculate_volatility_level,
)
from homeassistant.components.sensor import (
RestoreSensor,
SensorDeviceClass,
SensorEntityDescription,
)
from homeassistant.const import EntityCategory
@ -67,6 +75,11 @@ from .chart_data import (
call_chartdata_service_async,
get_chart_data_state,
)
from .chart_metadata import (
build_chart_metadata_attributes,
call_chartdata_service_for_metadata_async,
get_chart_metadata_state,
)
from .helpers import aggregate_level_data, aggregate_rating_data
from .value_getters import get_value_getter_mapping
@ -84,8 +97,60 @@ MAX_FORECAST_INTERVALS = 8 # Show up to 8 future intervals (2 hours with 15-min
MIN_HOURS_FOR_LATER_HALF = 3  # Minimum hours needed to calculate later half average
class TibberPricesSensor(TibberPricesEntity, RestoreSensor):
"""tibber_prices Sensor class with state restoration."""
# Base attributes excluded from recorder history (shared across all sensors)
# See: https://developers.home-assistant.io/docs/core/entity/#excluding-state-attributes-from-recorder-history
_unrecorded_attributes = frozenset(
{
"timestamp",
# Descriptions/Help Text (static, large)
"description",
"usage_tips",
# Large Nested Structures
"trend_attributes",
"current_trend_attributes",
"trend_change_attributes",
"volatility_attributes",
"data", # chart_data_export large nested data
# Frequently Changing Diagnostics
"icon_color",
"cache_age",
"cache_validity",
"data_completeness",
"data_status",
# Static/Rarely Changing
"tomorrow_expected_after",
"level_value",
"rating_value",
"level_id",
"rating_id",
"currency",
"resolution",
"yaxis_min",
"yaxis_max",
# Temporary/Time-Bound
"next_api_poll",
"next_midnight_turnover",
"last_update", # Lifecycle sensor last update timestamp
"last_turnover",
"last_error",
"error",
# Relaxation Details
"relaxation_level",
"relaxation_threshold_original_%",
"relaxation_threshold_applied_%",
# Redundant/Derived (removed from attributes, kept here for safety)
"volatility",
"diff_%",
"rating_difference_%",
"period_price_diff_from_daily_min",
"period_price_diff_from_daily_min_%",
"periods_total",
"periods_remaining",
}
)
def __init__(
self,
@ -97,6 +162,8 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
self.entity_description = entity_description
self._attr_unique_id = f"{coordinator.config_entry.entry_id}_{entity_description.key}"
self._attr_has_entity_name = True
# Cached data for attributes (e.g., median values)
self.cached_data: dict[str, Any] = {}
# Instantiate calculators
self._metadata_calculator = TibberPricesMetadataCalculator(coordinator)
self._volatility_calculator = TibberPricesVolatilityCalculator(coordinator)
@ -110,19 +177,88 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
self._value_getter: Callable | None = self._get_value_getter()
self._time_sensitive_remove_listener: Callable | None = None
self._minute_update_remove_listener: Callable | None = None
# Lifecycle sensor state change detection (for recorder optimization)
# Store as Any because native_value can be str/float/datetime depending on sensor type
self._last_lifecycle_state: Any = None
# Chart data export (for chart_data_export sensor) - from binary_sensor
self._chart_data_last_update = None  # Track last service call timestamp
self._chart_data_error = None  # Track last service call error
self._chart_data_response = None  # Store service response for attributes
# Chart metadata (for chart_metadata sensor)
self._chart_metadata_last_update = None  # Track last service call timestamp
self._chart_metadata_error = None  # Track last service call error
self._chart_metadata_response = None  # Store service response for attributes
async def async_added_to_hass(self) -> None:
"""When entity is added to hass."""
await super().async_added_to_hass()
# Configure dynamic attribute exclusion for average sensors
self._configure_average_sensor_exclusions()
# Restore last state if available
await self._restore_last_state()
# Register listeners for time-sensitive updates
self._register_update_listeners()
# Trigger initial chart data loads as background tasks
self._trigger_chart_data_loads()
def _configure_average_sensor_exclusions(self) -> None:
"""Configure dynamic attribute exclusions for average sensors."""
# Dynamically exclude average attribute that matches state value
# (to avoid recording the same value twice: once as state, once as attribute)
key = self.entity_description.key
if key in (
"average_price_today",
"average_price_tomorrow",
"trailing_price_average",
"leading_price_average",
"current_hour_average_price",
"next_hour_average_price",
) or key.startswith("next_avg_"): # Future average sensors
display_mode = self.coordinator.config_entry.options.get(
CONF_AVERAGE_SENSOR_DISPLAY,
DEFAULT_AVERAGE_SENSOR_DISPLAY,
)
# Modify _state_info to add dynamic exclusion
if self._state_info is None:
self._state_info = {"unrecorded_attributes": frozenset()}
current_unrecorded = self._state_info.get("unrecorded_attributes", frozenset())
# State shows median → exclude price_median from attributes
# State shows mean → exclude price_mean from attributes
if display_mode == "median":
self._state_info["unrecorded_attributes"] = current_unrecorded | {"price_median"}
else:
self._state_info["unrecorded_attributes"] = current_unrecorded | {"price_mean"}
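The mode-dependent exclusion can be expressed as a pure set operation; a sketch covering only the set semantics, not Home Assistant's `_state_info` internals:

```python
def excluded_for_display_mode(base: frozenset, display_mode: str) -> frozenset:
    """Exclude whichever average attribute duplicates the sensor state:
    median state -> drop price_median; mean state -> drop price_mean."""
    extra = "price_median" if display_mode == "median" else "price_mean"
    return base | {extra}
```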
async def _restore_last_state(self) -> None:
"""Restore last state if available."""
if (
(last_state := await self.async_get_last_state()) is not None
and last_state.state not in (None, "unknown", "unavailable", "")
and (last_sensor_data := await self.async_get_last_sensor_data()) is not None
):
# Restore native_value from extra data (more reliable than state)
self._attr_native_value = last_sensor_data.native_value
# For chart sensors, restore response data from attributes
if self.entity_description.key == "chart_data_export":
self._chart_data_response = last_state.attributes.get("data")
self._chart_data_last_update = last_state.attributes.get("last_update")
elif self.entity_description.key == "chart_metadata":
# Restore metadata response from attributes
metadata_attrs = {}
for key in ["title", "yaxis_min", "yaxis_max", "currency", "resolution"]:
if key in last_state.attributes:
metadata_attrs[key] = last_state.attributes[key]
if metadata_attrs:
self._chart_metadata_response = metadata_attrs
self._chart_metadata_last_update = last_state.attributes.get("last_update")
def _register_update_listeners(self) -> None:
"""Register listeners for time-sensitive updates."""
# Register with coordinator for time-sensitive updates if applicable
if self.entity_description.key in TIME_SENSITIVE_ENTITY_KEYS:
self._time_sensitive_remove_listener = self.coordinator.async_add_time_sensitive_listener(
@ -135,9 +271,17 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
self._handle_minute_update
)
def _trigger_chart_data_loads(self) -> None:
"""Trigger initial chart data loads as background tasks."""
# For chart_data_export, trigger initial service call as background task
# (non-blocking to avoid delaying entity setup)
if self.entity_description.key == "chart_data_export":
self.hass.async_create_task(self._refresh_chart_data())
# For chart_metadata, trigger initial service call as background task
# (non-blocking to avoid delaying entity setup)
if self.entity_description.key == "chart_metadata":
self.hass.async_create_task(self._refresh_chart_metadata())
async def async_will_remove_from_hass(self) -> None:
"""When entity will be removed from hass."""
@ -171,7 +315,18 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
# Clear trend calculation cache for trend sensors
elif self.entity_description.key in ("current_price_trend", "next_price_trend_change"):
self._trend_calculator.clear_calculation_cache()
# For lifecycle sensor: Only write state if it actually changed (state-change filter)
# This enables precise detection at quarter-hour boundaries (23:45 turnover_pending,
# 13:00 searching_tomorrow, 00:00 turnover complete) without recorder spam
if self.entity_description.key == "data_lifecycle_status":
current_state = self.native_value
if current_state != self._last_lifecycle_state:
self._last_lifecycle_state = current_state
self.async_write_ha_state()
# If state didn't change, skip write to recorder
else:
self.async_write_ha_state()
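The write-on-change guard above is a generic deduplication pattern; a minimal sketch where a plain callback stands in for `async_write_ha_state` (names are illustrative):

```python
class ChangeFilter:
    """Invoke a writer only when the observed value actually changes,
    suppressing redundant recorder writes."""

    def __init__(self, writer):
        self._writer = writer
        self._last = object()  # sentinel so the first value always writes

    def update(self, value):
        if value != self._last:
            self._last = value
            self._writer(value)
```

This keeps quarter-hour update ticks cheap: the callback still fires every interval, but state only reaches the recorder at actual transitions.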
@callback
def _handle_minute_update(self, time_service: TibberPricesTimeService) -> None:
@ -193,13 +348,29 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
# Clear cached trend values when coordinator data changes
if self.entity_description.key.startswith("price_trend_"):
self._trend_calculator.clear_trend_cache()
# Also clear calculation cache (e.g., when threshold config changes)
self._trend_calculator.clear_calculation_cache()
# Refresh chart data when coordinator updates (new price data or user data)
if self.entity_description.key == "chart_data_export":
# Schedule async refresh as a task (we're in a callback)
self.hass.async_create_task(self._refresh_chart_data())
# Refresh chart metadata when coordinator updates (new price data or user data)
if self.entity_description.key == "chart_metadata":
# Schedule async refresh as a task (we're in a callback)
self.hass.async_create_task(self._refresh_chart_metadata())
# For lifecycle sensor: Only write state if it actually changed (event-based filter)
# Prevents excessive recorder entries while keeping quarter-hour update capability
if self.entity_description.key == "data_lifecycle_status":
current_state = self.native_value
if current_state != self._last_lifecycle_state:
self._last_lifecycle_state = current_state
super()._handle_coordinator_update()
# If state didn't change, skip write to recorder
else:
super()._handle_coordinator_update()
def _get_value_getter(self) -> Callable | None:
"""Return the appropriate value getter method based on the sensor type."""
@ -217,6 +388,7 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
get_next_avg_n_hours_value=self._get_next_avg_n_hours_value,
get_data_timestamp=self._get_data_timestamp,
get_chart_data_export_value=self._get_chart_data_export_value,
get_chart_metadata_value=self._get_chart_metadata_value,
)
return handlers.get(self.entity_description.key)
@ -249,7 +421,7 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
Returns:
Aggregated value based on type:
- "price": float (average price in subunit currency units)
- "level": str (aggregated level: "very_cheap", "cheap", etc.)
- "rating": str (aggregated rating: "low", "normal", "high")
@ -257,9 +429,8 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
if not self.coordinator.data:
return None
# Get all available price data (yesterday, today, tomorrow) via helper
all_prices = get_intervals_for_day_offsets(self.coordinator.data, [-1, 0, 1])
if not all_prices:
return None
@@ -281,7 +452,15 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if not window_data:
             return None

-        return self._rolling_hour_calculator.aggregate_window_data(window_data, value_type)
+        result = self._rolling_hour_calculator.aggregate_window_data(window_data, value_type)
+        # For price type, aggregate_window_data returns (avg, median)
+        if isinstance(result, tuple):
+            avg, median = result
+            # Cache median for attributes
+            if median is not None:
+                self.cached_data[f"{self.entity_description.key}_median"] = median
+            return avg
+        return result

     # ========================================================================
     # INTERVAL-BASED VALUE METHODS
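The tuple contract relied on above (price aggregation returns `(mean, median)`, label aggregation returns a single string) can be illustrated with a minimal stand-in for `aggregate_window_data`. The function body here is an illustrative sketch, not the integration's actual calculator:

```python
from statistics import mean, median


def aggregate_window_data(window_data: list[dict], value_type: str):
    """Return (mean, median) for "price"; a single majority label otherwise."""
    if not window_data:
        return None
    if value_type == "price":
        prices = [float(iv["total"]) for iv in window_data]
        return (round(mean(prices), 4), round(median(prices), 4))
    # Non-price types (level/rating) aggregate to the most common label
    labels = [iv[value_type] for iv in window_data]
    return max(set(labels), key=labels.count)
```

A caller then branches on `isinstance(result, tuple)` exactly as the new sensor code does, keeping the state value (mean) and caching the median for attributes.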
@@ -311,39 +490,27 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
             stat_func: Statistical function (min, max, or lambda for avg)

         Returns:
-            Price value in minor currency units (cents/øre), or None if unavailable
+            Price value in subunit currency units (cents/øre), or None if unavailable
         """
         if not self.coordinator.data:
             return None

-        price_info = self.coordinator.data.get("priceInfo", {})
-
-        # Get TimeService from coordinator
-        time = self.coordinator.time
-
-        # Get local midnight boundaries based on the requested day using TimeService
-        local_midnight, local_midnight_next_day = time.get_day_boundaries(day)
-
-        # Collect all prices and their intervals from both today and tomorrow data
-        # that fall within the target day's local date boundaries
+        # Map day key to offset: yesterday=-1, today=0, tomorrow=1
+        day_offset = {"yesterday": -1, "today": 0, "tomorrow": 1}[day]
+        day_intervals = get_intervals_for_day_offsets(self.coordinator.data, [day_offset])
+
+        # Collect all prices and their intervals
         price_intervals = []
-        for day_key in ["today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
-                starts_at = price_data.get("startsAt")  # Already datetime in local timezone
-                if not starts_at:
-                    continue
-
-                # Include price if it starts within the target day's local date boundaries
-                if local_midnight <= starts_at < local_midnight_next_day:
-                    total_price = price_data.get("total")
-                    if total_price is not None:
-                        price_intervals.append(
-                            {
-                                "price": float(total_price),
-                                "interval": price_data,
-                            }
-                        )
+        for price_data in day_intervals:
+            total_price = price_data.get("total")
+            if total_price is not None:
+                price_intervals.append(
+                    {
+                        "price": float(total_price),
+                        "interval": price_data,
+                    }
+                )

         if not price_intervals:
             return None
@@ -358,8 +525,8 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
                 self._last_extreme_interval = pi["interval"]
                 break

-        # Always return in minor currency units (cents/øre) with 2 decimals
-        result = get_price_value(value, in_euro=False)
+        # Return in configured display currency units with 2 decimals
+        result = get_price_value(value, config_entry=self.coordinator.config_entry)
         return round(result, 2)

     def _get_daily_aggregated_value(
@@ -385,24 +552,9 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if not self.coordinator.data:
             return None

-        price_info = self.coordinator.data.get("priceInfo", {})
-
-        # Get local midnight boundaries based on the requested day using TimeService
-        time = self.coordinator.time
-        local_midnight, local_midnight_next_day = time.get_day_boundaries(day)
-
-        # Collect all intervals from both today and tomorrow data
-        # that fall within the target day's local date boundaries
-        day_intervals = []
-        for day_key in ["yesterday", "today", "tomorrow"]:
-            for price_data in price_info.get(day_key, []):
-                starts_at = price_data.get("startsAt")  # Already datetime in local timezone
-                if not starts_at:
-                    continue
-
-                # Include interval if it starts within the target day's local date boundaries
-                if local_midnight <= starts_at < local_midnight_next_day:
-                    day_intervals.append(price_data)
+        # Map day key to offset: yesterday=-1, today=0, tomorrow=1
+        day_offset = {"yesterday": -1, "today": 0, "tomorrow": 1}[day]
+        day_intervals = get_intervals_for_day_offsets(self.coordinator.data, [day_offset])

         if not day_intervals:
             return None
@@ -437,10 +589,10 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
             - "leading": Next 24 hours (96 intervals after current)

         Args:
-            stat_func: Function from average_utils (e.g., calculate_current_trailing_avg)
+            stat_func: Function from average_utils (e.g., calculate_current_trailing_mean)

         Returns:
-            Price value in minor currency units (cents/øre), or None if unavailable
+            Price value in subunit currency units (cents/øre), or None if unavailable
         """
         if not self.coordinator.data:
@@ -451,8 +603,8 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if value is None:
             return None

-        # Always return in minor currency units (cents/øre) with 2 decimals
-        result = get_price_value(value, in_euro=False)
+        # Return in configured display currency units with 2 decimals
+        result = get_price_value(value, config_entry=self.coordinator.config_entry)
         return round(result, 2)

     def _translate_rating_level(self, level: str) -> str:
@@ -486,21 +638,37 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
     def _get_next_avg_n_hours_value(self, hours: int) -> float | None:
         """
-        Get average price for next N hours starting from next interval.
+        Get mean price for next N hours starting from next interval.

         Args:
             hours: Number of hours to look ahead (1, 2, 3, 4, 5, 6, 8, 12)

         Returns:
-            Average price in minor currency units (e.g., cents), or None if unavailable
+            Mean or median price (based on config) in subunit currency units (e.g., cents),
+            or None if unavailable
         """
-        avg_price = calculate_next_n_hours_avg(self.coordinator.data, hours, time=self.coordinator.time)
-        if avg_price is None:
+        mean_price, median_price = calculate_next_n_hours_mean(self.coordinator.data, hours, time=self.coordinator.time)
+        if mean_price is None:
             return None

-        # Convert from major to minor currency units (e.g., EUR to cents)
-        return round(avg_price * 100, 2)
+        # Get display unit factor (100 for minor, 1 for major)
+        factor = get_display_unit_factor(self.coordinator.config_entry)
+
+        # Get user preference for display (mean or median)
+        display_pref = self.coordinator.config_entry.options.get(
+            CONF_AVERAGE_SENSOR_DISPLAY, DEFAULT_AVERAGE_SENSOR_DISPLAY
+        )
+
+        # Store both values for attributes
+        self.cached_data[f"next_avg_{hours}h_mean"] = round(mean_price * factor, 2)
+        if median_price is not None:
+            self.cached_data[f"next_avg_{hours}h_median"] = round(median_price * factor, 2)
+
+        # Return the value chosen for state display
+        if display_pref == "median" and median_price is not None:
+            return round(median_price * factor, 2)
+        return round(mean_price * factor, 2)  # "mean"
     def _get_data_timestamp(self) -> datetime | None:
         """
@@ -516,16 +684,17 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if not self.coordinator.data:
             return None

-        price_info = self.coordinator.data.get("priceInfo", {})
+        # Use helper to get all intervals (today and tomorrow)
+        all_intervals = get_intervals_for_day_offsets(self.coordinator.data, [0, 1])

         latest_timestamp = None
-        for day in ["today", "tomorrow"]:
-            for price_data in price_info.get(day, []):
-                starts_at = price_data.get("startsAt")  # Already datetime in local timezone
-                if not starts_at:
-                    continue
-                if not latest_timestamp or starts_at > latest_timestamp:
-                    latest_timestamp = starts_at
+        # Search through intervals to find latest timestamp
+        for price_data in all_intervals:
+            starts_at = price_data.get("startsAt")  # Already datetime in local timezone
+            if not starts_at:
+                continue
+            if not latest_timestamp or starts_at > latest_timestamp:
+                latest_timestamp = starts_at

         # Return timezone-aware datetime (HA handles timezone display automatically)
         return latest_timestamp
@@ -544,8 +713,6 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if not self.coordinator.data:
             return None

-        price_info = self.coordinator.data.get("priceInfo", {})
-
         # Get volatility thresholds from config
         thresholds = {
             "threshold_moderate": self.coordinator.config_entry.options.get("volatility_threshold_moderate", 5.0),
@@ -554,30 +721,19 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         }

         # Get prices based on volatility type
-        prices_to_analyze = get_prices_for_volatility(volatility_type, price_info, time=self.coordinator.time)
+        prices_to_analyze = get_prices_for_volatility(
+            volatility_type, self.coordinator.data, time=self.coordinator.time
+        )
         if not prices_to_analyze:
             return None

-        # Calculate spread and basic statistics
-        price_min = min(prices_to_analyze)
-        price_max = max(prices_to_analyze)
-        spread = price_max - price_min
-        price_avg = sum(prices_to_analyze) / len(prices_to_analyze)
-
-        # Convert to minor currency units (ct/øre) for display
-        spread_minor = spread * 100
-
-        # Calculate volatility level with custom thresholds (pass price list, not spread)
+        # Calculate volatility level with custom thresholds
+        # Note: Volatility calculation (coefficient of variation) uses mean internally
         volatility = calculate_volatility_level(prices_to_analyze, **thresholds)

-        # Store attributes for this sensor
+        # Store minimal attributes (only unique info not available in other sensors)
         self._last_volatility_attributes = {
-            "price_spread": round(spread_minor, 2),
-            "price_volatility": volatility,
-            "price_min": round(price_min * 100, 2),
-            "price_max": round(price_max * 100, 2),
-            "price_avg": round(price_avg * 100, 2),
             "interval_count": len(prices_to_analyze),
         }
@@ -586,7 +742,11 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         # Add type-specific attributes
         add_volatility_type_attributes(
-            self._last_volatility_attributes, volatility_type, price_info, thresholds, time=self.coordinator.time
+            self._last_volatility_attributes,
+            volatility_type,
+            self.coordinator.data,
+            thresholds,
+            time=self.coordinator.time,
         )

         # Return lowercase for ENUM device class
@@ -700,7 +860,7 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         return True

     @property
-    def native_value(self) -> float | str | datetime | None:
+    def native_value(self) -> float | str | datetime | None:  # noqa: PLR0912
         """Return the native value of the sensor."""
         try:
             if not self.coordinator.data or not self._value_getter:
@@ -708,7 +868,8 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
             # For price_level, ensure we return the translated value as state
             if self.entity_description.key == "current_interval_price_level":
                 return self._interval_calculator.get_price_level_value()
-            return self._value_getter()
+
+            result = self._value_getter()
         except (KeyError, ValueError, TypeError) as ex:
             self.coordinator.logger.exception(
                 "Error getting sensor value",
@@ -718,6 +879,48 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
                 },
             )
             return None
+        else:
+            # Handle tuple results (average + median) from calculators
+            if isinstance(result, tuple):
+                avg, median = result
+
+                # Get user preference for state display
+                display_pref = self.coordinator.config_entry.options.get(
+                    CONF_AVERAGE_SENSOR_DISPLAY,
+                    DEFAULT_AVERAGE_SENSOR_DISPLAY,
+                )
+
+                # Cache BOTH values for attribute builders to use
+                key = self.entity_description.key
+                if "average_price_today" in key:
+                    self.cached_data["average_price_today_mean"] = avg
+                    self.cached_data["average_price_today_median"] = median
+                elif "average_price_tomorrow" in key:
+                    self.cached_data["average_price_tomorrow_mean"] = avg
+                    self.cached_data["average_price_tomorrow_median"] = median
+                elif "trailing_price_average" in key:
+                    self.cached_data["trailing_price_mean"] = avg
+                    self.cached_data["trailing_price_median"] = median
+                elif "leading_price_average" in key:
+                    self.cached_data["leading_price_mean"] = avg
+                    self.cached_data["leading_price_median"] = median
+                elif "current_hour_average_price" in key:
+                    self.cached_data["rolling_hour_0_mean"] = avg
+                    self.cached_data["rolling_hour_0_median"] = median
+                elif "next_hour_average_price" in key:
+                    self.cached_data["rolling_hour_1_mean"] = avg
+                    self.cached_data["rolling_hour_1_median"] = median
+                elif key.startswith("next_avg_"):
+                    # Extract hours from key (e.g., "next_avg_3h" -> "3")
+                    hours = key.split("_")[-1].replace("h", "")
+                    self.cached_data[f"next_avg_{hours}h_mean"] = avg
+                    self.cached_data[f"next_avg_{hours}h_median"] = median
+
+                # Return the value chosen for state display
+                if display_pref == "median":
+                    return median
+                return avg  # "mean"
+
+            return result

     @property
     def native_unit_of_measurement(self) -> str | None:
@@ -726,15 +929,15 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if self.entity_description.device_class == SensorDeviceClass.MONETARY:
             currency = None
             if self.coordinator.data:
-                price_info = self.coordinator.data.get("priceInfo", {})
-                currency = price_info.get("currency")
+                currency = self.coordinator.data.get("currency")

-            # Use major currency unit for Energy Dashboard sensor
-            if self.entity_description.key == "current_interval_price_major":
-                return format_price_unit_major(currency)
+            # Special case: Energy Dashboard sensor always uses base currency
+            # regardless of user display mode configuration
+            if self.entity_description.key == "current_interval_price_base":
+                return format_price_unit_base(currency)

-            # Use minor currency unit for all other price sensors
-            return format_price_unit_minor(currency)
+            # Get unit based on user configuration (major or minor)
+            return get_display_unit_string(self.coordinator.config_entry, currency)

         # For all other sensors, use unit from entity description
         return self.entity_description.native_unit_of_measurement
@@ -743,7 +946,12 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         """Check if the current time is within a best price period."""
         if not self.coordinator.data:
             return False
-        attrs = get_price_intervals_attributes(self.coordinator.data, reverse_sort=False, time=self.coordinator.time)
+        attrs = get_price_intervals_attributes(
+            self.coordinator.data,
+            reverse_sort=False,
+            time=self.coordinator.time,
+            config_entry=self.coordinator.config_entry,
+        )
         if not attrs:
             return False
         start = attrs.get("start")
@@ -758,7 +966,12 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         """Check if the current time is within a peak price period."""
         if not self.coordinator.data:
             return False
-        attrs = get_price_intervals_attributes(self.coordinator.data, reverse_sort=True, time=self.coordinator.time)
+        attrs = get_price_intervals_attributes(
+            self.coordinator.data,
+            reverse_sort=True,
+            time=self.coordinator.time,
+            config_entry=self.coordinator.config_entry,
+        )
         if not attrs:
             return False
         start = attrs.get("start")
@@ -774,11 +987,13 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         key = self.entity_description.key
         value = self.native_value

-        # Icon mapping for trend directions
+        # Icon mapping for trend directions (5-level scale)
         trend_icons = {
+            "strongly_rising": "mdi:chevron-double-up",
             "rising": "mdi:trending-up",
-            "falling": "mdi:trending-down",
             "stable": "mdi:trending-neutral",
+            "falling": "mdi:trending-down",
+            "strongly_falling": "mdi:chevron-double-down",
         }

         # Special handling for next_price_trend_change: Icon based on direction attribute
@@ -815,6 +1030,43 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         # Fall back to static icon from entity description
         return icon or self.entity_description.icon

+    @property
+    def suggested_display_precision(self) -> int | None:
+        """
+        Return suggested display precision based on currency display mode.
+
+        For MONETARY sensors:
+        - Current/Next Interval Price: Show exact price with higher precision
+          - Base currency (€/kr): 4 decimals (e.g., 0.1234 €)
+          - Subunit currency (ct/øre): 2 decimals (e.g., 12.34 ct)
+        - All other price sensors:
+          - Base currency (€/kr): 2 decimals (e.g., 0.12 €)
+          - Subunit currency (ct/øre): 1 decimal (e.g., 12.5 ct)
+
+        For non-MONETARY sensors, use static value from entity description.
+        """
+        # Only apply dynamic precision to MONETARY sensors
+        if self.entity_description.device_class != SensorDeviceClass.MONETARY:
+            return self.entity_description.suggested_display_precision
+
+        # Check display mode configuration
+        display_mode = self.coordinator.config_entry.options.get(CONF_CURRENCY_DISPLAY_MODE, DISPLAY_MODE_BASE)
+
+        # Special case: Energy Dashboard sensor always shows base currency with 4 decimals
+        # regardless of display mode (it's always in base currency by design)
+        if self.entity_description.key == "current_interval_price_base":
+            return 4
+
+        # Special case: Current and Next interval price sensors get higher precision
+        # to show exact prices as received from API
+        if self.entity_description.key in ("current_interval_price", "next_interval_price"):
+            # Major: 4 decimals (0.1234 €), Minor: 2 decimals (12.34 ct)
+            return 4 if display_mode == DISPLAY_MODE_BASE else 2
+
+        # All other sensors: Standard precision
+        # Major: 2 decimals (0.12 €), Minor: 1 decimal (12.5 ct)
+        return 2 if display_mode == DISPLAY_MODE_BASE else 1
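The precision rules added above form a small decision table, which can be mirrored in isolation. The string values "base" and "subunit" stand in for the integration's `DISPLAY_MODE_*` constants, whose actual values are an assumption here:

```python
def suggested_precision(key: str, display_mode: str) -> int:
    """Decision table for display precision: base mode shows more decimals than subunit mode."""
    if key == "current_interval_price_base":
        return 4  # Energy Dashboard sensor: always base currency, 4 decimals
    if key in ("current_interval_price", "next_interval_price"):
        # Exact API prices: base 4 decimals, subunit 2 decimals
        return 4 if display_mode == "base" else 2
    # All other price sensors: base 2 decimals, subunit 1 decimal
    return 2 if display_mode == "base" else 1
```

This also explains the companion change in the descriptions file below: static `suggested_display_precision` values only remain as fallbacks for the non-dynamic path.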
     @property
     def extra_state_attributes(self) -> dict[str, Any] | None:
         """Return additional state attributes."""
@@ -856,20 +1108,30 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
         if key == "chart_data_export":
             return self._get_chart_data_export_attributes()

+        # Special handling for chart_metadata - returns metadata in attributes
+        if key == "chart_metadata":
+            return self._get_chart_metadata_attributes()
+
         # Prepare cached data that attribute builders might need
-        cached_data = {
-            "trend_attributes": self._trend_calculator.get_trend_attributes(),
-            "current_trend_attributes": self._trend_calculator.get_current_trend_attributes(),
-            "trend_change_attributes": self._trend_calculator.get_trend_change_attributes(),
-            "volatility_attributes": self._volatility_calculator.get_volatility_attributes(),
-            "last_extreme_interval": self._daily_stat_calculator.get_last_extreme_interval(),
-            "last_price_level": self._interval_calculator.get_last_price_level(),
-            "last_rating_difference": self._interval_calculator.get_last_rating_difference(),
-            "last_rating_level": self._interval_calculator.get_last_rating_level(),
-            "data_timestamp": getattr(self, "_data_timestamp", None),
-            "rolling_hour_level": self._get_rolling_hour_level_for_cached_data(key),
-            "lifecycle_calculator": self._lifecycle_calculator,  # For lifecycle sensor attributes
-        }
+        # Start with all mean/median values from self.cached_data
+        cached_data = {k: v for k, v in self.cached_data.items() if "_mean" in k or "_median" in k}
+
+        # Add special calculator results
+        cached_data.update(
+            {
+                "trend_attributes": self._trend_calculator.get_trend_attributes(),
+                "current_trend_attributes": self._trend_calculator.get_current_trend_attributes(),
+                "trend_change_attributes": self._trend_calculator.get_trend_change_attributes(),
+                "volatility_attributes": self._volatility_calculator.get_volatility_attributes(),
+                "last_extreme_interval": self._daily_stat_calculator.get_last_extreme_interval(),
+                "last_price_level": self._interval_calculator.get_last_price_level(),
+                "last_rating_difference": self._interval_calculator.get_last_rating_difference(),
+                "last_rating_level": self._interval_calculator.get_last_rating_level(),
+                "data_timestamp": getattr(self, "_data_timestamp", None),
+                "rolling_hour_level": self._get_rolling_hour_level_for_cached_data(key),
+                "lifecycle_calculator": self._lifecycle_calculator,  # For lifecycle sensor attributes
            }
+        )

         # Use the centralized attribute builder
         return build_sensor_attributes(
@@ -877,6 +1139,7 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
             coordinator=self.coordinator,
             native_value=self.native_value,
             cached_data=cached_data,
+            config_entry=self.coordinator.config_entry,
         )

     def _get_rolling_hour_level_for_cached_data(self, key: str) -> str | None:
@@ -931,3 +1194,36 @@ class TibberPricesSensor(TibberPricesEntity, SensorEntity):
             chart_data_last_update=self._chart_data_last_update,
             chart_data_error=self._chart_data_error,
         )
+
+    def _get_chart_metadata_value(self) -> str | None:
+        """Return state for chart_metadata sensor."""
+        return get_chart_metadata_state(
+            chart_metadata_response=self._chart_metadata_response,
+            chart_metadata_error=self._chart_metadata_error,
+        )
+
+    async def _refresh_chart_metadata(self) -> None:
+        """Refresh chart metadata by calling get_chartdata service with metadata=only."""
+        response, error = await call_chartdata_service_for_metadata_async(
+            hass=self.hass,
+            coordinator=self.coordinator,
+            config_entry=self.coordinator.config_entry,
+        )
+        self._chart_metadata_response = response
+        time = self.coordinator.time
+        self._chart_metadata_last_update = time.now()
+        self._chart_metadata_error = error
+
+        # Trigger state update after refresh
+        self.async_write_ha_state()
+
+    def _get_chart_metadata_attributes(self) -> dict[str, object] | None:
+        """
+        Return chart metadata from last service call as attributes.
+
+        Delegates to chart_metadata module for attribute building.
+        """
+        return build_chart_metadata_attributes(
+            chart_metadata_response=self._chart_metadata_response,
+            chart_metadata_last_update=self._chart_metadata_last_update,
+            chart_metadata_error=self._chart_metadata_error,
+        )
@@ -68,13 +68,13 @@ INTERVAL_PRICE_SENSORS = (
         suggested_display_precision=2,
     ),
     SensorEntityDescription(
-        key="current_interval_price_major",
-        translation_key="current_interval_price_major",
+        key="current_interval_price_base",
+        translation_key="current_interval_price_base",
         name="Current Electricity Price (Energy Dashboard)",
         icon="mdi:cash",  # Dynamic: shows cash-multiple/plus/cash/minus/remove based on price level
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None for Energy Dashboard
-        suggested_display_precision=4,  # More precision for major currency (e.g., 0.2534 EUR/kWh)
+        suggested_display_precision=4,  # More precision for base currency (e.g., 0.2534 EUR/kWh)
     ),
     SensorEntityDescription(
         key="next_interval_price",
@@ -181,7 +181,7 @@ ROLLING_HOUR_PRICE_SENSORS = (
         icon="mdi:cash",  # Dynamic: shows cash-multiple/plus/cash/minus/remove based on aggregated price level
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="next_hour_average_price",
@@ -190,7 +190,7 @@ ROLLING_HOUR_PRICE_SENSORS = (
         icon="mdi:cash-fast",  # Dynamic: shows cash-multiple/plus/cash/minus/remove based on aggregated price level
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
 )
@@ -259,7 +259,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:arrow-collapse-down",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="highest_price_today",
@@ -268,7 +268,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:arrow-collapse-up",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="average_price_today",
@@ -277,7 +277,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:chart-line",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="lowest_price_tomorrow",
@@ -286,7 +286,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:arrow-collapse-down",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="highest_price_tomorrow",
@@ -295,7 +295,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:arrow-collapse-up",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="average_price_tomorrow",
@@ -304,7 +304,7 @@ DAILY_STAT_SENSORS = (
         icon="mdi:chart-line",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
 )
@@ -395,7 +395,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="leading_price_average",
@@ -405,7 +405,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,  # Advanced use case
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="trailing_price_min",
@@ -415,7 +415,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="trailing_price_max",
@@ -425,7 +425,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="leading_price_min",
@@ -435,7 +435,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,  # Advanced use case
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="leading_price_max",
@@ -445,7 +445,7 @@ WINDOW_24H_SENSORS = (
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
         entity_registry_enabled_default=False,  # Advanced use case
-        suggested_display_precision=1,
+        suggested_display_precision=2,
     ),
 )
@@ -454,7 +454,7 @@ WINDOW_24H_SENSORS = (
 # ----------------------------------------------------------------------------
 # Calculate averages and trends for upcoming time windows
-FUTURE_AVG_SENSORS = (
+FUTURE_MEAN_SENSORS = (
     # Default enabled: 1h-5h
     SensorEntityDescription(
         key="next_avg_1h",
@@ -463,7 +463,7 @@ FUTURE_AVG_SENSORS = (
         icon="mdi:chart-line",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
         entity_registry_enabled_default=True,
     ),
     SensorEntityDescription(
@@ -473,7 +473,7 @@ FUTURE_AVG_SENSORS = (
         icon="mdi:chart-line",
         device_class=SensorDeviceClass.MONETARY,
         state_class=SensorStateClass.TOTAL,  # MONETARY requires TOTAL or None
-        suggested_display_precision=1,
+        suggested_display_precision=2,
         entity_registry_enabled_default=True,
     ),
     SensorEntityDescription(
@ -483,7 +483,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -493,7 +493,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -503,7 +503,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
# Disabled by default: 6h, 8h, 12h (advanced use cases) # Disabled by default: 6h, 8h, 12h (advanced use cases)
@ -514,7 +514,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -524,7 +524,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -534,7 +534,7 @@ FUTURE_AVG_SENSORS = (
icon="mdi:chart-line", icon="mdi:chart-line",
device_class=SensorDeviceClass.MONETARY, device_class=SensorDeviceClass.MONETARY,
state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None state_class=SensorStateClass.TOTAL, # MONETARY requires TOTAL or None
suggested_display_precision=1, suggested_display_precision=2,
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
) )
@ -548,7 +548,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: trending-up/trending-down/trending-neutral based on current trend icon="mdi:trending-up", # Dynamic: trending-up/trending-down/trending-neutral based on current trend
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
# Next trend change sensor (when will trend change?) # Next trend change sensor (when will trend change?)
@ -570,7 +570,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -580,7 +580,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -590,7 +590,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -600,7 +600,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -610,7 +610,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=True, entity_registry_enabled_default=True,
), ),
# Disabled by default: 6h, 8h, 12h # Disabled by default: 6h, 8h, 12h
@ -621,7 +621,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -631,7 +631,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
SensorEntityDescription( SensorEntityDescription(
@ -641,7 +641,7 @@ FUTURE_TREND_SENSORS = (
icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value icon="mdi:trending-up", # Dynamic: shows trending-up/trending-down/trending-neutral based on trend value
device_class=SensorDeviceClass.ENUM, device_class=SensorDeviceClass.ENUM,
state_class=None, # Enum values: no statistics state_class=None, # Enum values: no statistics
options=["rising", "falling", "stable"], options=["strongly_falling", "falling", "stable", "rising", "strongly_rising"],
entity_registry_enabled_default=False, entity_registry_enabled_default=False,
), ),
) )
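The trend enum grows from three to five states, adding `strongly_falling` and `strongly_rising`. The classifier itself is not part of this diff; a hypothetical sketch of how a relative price change might map onto the five labels (function name and thresholds are assumptions for illustration, not taken from the integration):

```python
def classify_trend(change_percent: float) -> str:
    """Map a relative price change (%) over the window to one of the
    five trend states.

    Thresholds here are illustrative only; the integration's actual
    classifier and cutoffs are not shown in this change set.
    """
    if change_percent <= -10:
        return "strongly_falling"
    if change_percent <= -2:
        return "falling"
    if change_percent < 2:
        return "stable"
    if change_percent < 10:
        return "rising"
    return "strongly_rising"
```

Whatever the real cutoffs are, the returned string must be one of the values listed in `options`, otherwise Home Assistant rejects the state for an ENUM sensor.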
@@ -731,9 +731,9 @@ BEST_PRICE_TIMING_SENSORS = (
         name="Best Price Period Duration",
         icon="mdi:timer",
         device_class=SensorDeviceClass.DURATION,
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Changes with each period: no statistics
-        suggested_display_precision=0,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Duration not needed in long-term statistics
+        suggested_display_precision=2,
         entity_registry_enabled_default=False,
     ),
     SensorEntityDescription(
@@ -741,9 +741,10 @@ BEST_PRICE_TIMING_SENSORS = (
         translation_key="best_price_remaining_minutes",
         name="Best Price Remaining Time",
         icon="mdi:timer-sand",
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Countdown timer: no statistics
-        suggested_display_precision=0,
+        device_class=SensorDeviceClass.DURATION,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Countdown timers excluded from statistics
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="best_price_progress",
@@ -767,9 +768,10 @@ BEST_PRICE_TIMING_SENSORS = (
         translation_key="best_price_next_in_minutes",
         name="Best Price Starts In",
         icon="mdi:timer-outline",
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Countdown timer: no statistics
-        suggested_display_precision=0,
+        device_class=SensorDeviceClass.DURATION,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Next-start timers excluded from statistics
+        suggested_display_precision=2,
     ),
 )
@@ -788,9 +790,9 @@ PEAK_PRICE_TIMING_SENSORS = (
         name="Peak Price Period Duration",
         icon="mdi:timer",
         device_class=SensorDeviceClass.DURATION,
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Changes with each period: no statistics
-        suggested_display_precision=0,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Duration not needed in long-term statistics
+        suggested_display_precision=2,
         entity_registry_enabled_default=False,
     ),
     SensorEntityDescription(
@@ -798,9 +800,10 @@ PEAK_PRICE_TIMING_SENSORS = (
         translation_key="peak_price_remaining_minutes",
         name="Peak Price Remaining Time",
         icon="mdi:timer-sand",
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Countdown timer: no statistics
-        suggested_display_precision=0,
+        device_class=SensorDeviceClass.DURATION,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Countdown timers excluded from statistics
+        suggested_display_precision=2,
     ),
     SensorEntityDescription(
         key="peak_price_progress",
@@ -824,9 +827,10 @@ PEAK_PRICE_TIMING_SENSORS = (
         translation_key="peak_price_next_in_minutes",
         name="Peak Price Starts In",
         icon="mdi:timer-outline",
-        native_unit_of_measurement=UnitOfTime.MINUTES,
-        state_class=None,  # Countdown timer: no statistics
-        suggested_display_precision=0,
+        device_class=SensorDeviceClass.DURATION,
+        native_unit_of_measurement=UnitOfTime.HOURS,
+        state_class=None,  # Next-start timers excluded from statistics
+        suggested_display_precision=2,
     ),
 )
@@ -843,6 +847,7 @@ DIAGNOSTIC_SENSORS = (
         options=["cached", "fresh", "refreshing", "searching_tomorrow", "turnover_pending", "error"],
         state_class=None,  # Status value: no statistics
         entity_category=EntityCategory.DIAGNOSTIC,
+        entity_registry_enabled_default=True,  # Critical for debugging
     ),
     # Home metadata from user data
     SensorEntityDescription(
@@ -1003,6 +1008,16 @@ DIAGNOSTIC_SENSORS = (
         entity_category=EntityCategory.DIAGNOSTIC,
         entity_registry_enabled_default=False,  # Opt-in
     ),
+    SensorEntityDescription(
+        key="chart_metadata",
+        translation_key="chart_metadata",
+        name="Chart Metadata",
+        icon="mdi:chart-box-outline",
+        device_class=SensorDeviceClass.ENUM,
+        options=["pending", "ready", "error"],
+        entity_category=EntityCategory.DIAGNOSTIC,
+        entity_registry_enabled_default=True,  # Critical for chart features
+    ),
 )
 # ----------------------------------------------------------------------------
@@ -1020,7 +1035,7 @@ ENTITY_DESCRIPTIONS = (
     *DAILY_LEVEL_SENSORS,
     *DAILY_RATING_SENSORS,
     *WINDOW_24H_SENSORS,
-    *FUTURE_AVG_SENSORS,
+    *FUTURE_MEAN_SENSORS,
     *FUTURE_TREND_SENSORS,
     *VOLATILITY_SENSORS,
     *BEST_PRICE_TIMING_SENSORS,


@@ -23,8 +23,12 @@ from typing import TYPE_CHECKING
 if TYPE_CHECKING:
     from custom_components.tibber_prices.coordinator.time_service import TibberPricesTimeService
+    from homeassistant.config_entries import ConfigEntry
+from custom_components.tibber_prices.const import get_display_unit_factor
+from custom_components.tibber_prices.coordinator.helpers import get_intervals_for_day_offsets
 from custom_components.tibber_prices.entity_utils.helpers import get_price_value
+from custom_components.tibber_prices.utils.average import calculate_mean, calculate_median
 from custom_components.tibber_prices.utils.price import (
     aggregate_price_levels,
     aggregate_price_rating,
@@ -34,22 +38,31 @@ if TYPE_CHECKING:
     from collections.abc import Callable
-def aggregate_price_data(window_data: list[dict]) -> float | None:
+def aggregate_average_data(
+    window_data: list[dict],
+    config_entry: ConfigEntry,
+) -> tuple[float | None, float | None]:
     """
-    Calculate average price from window data.
+    Calculate average and median price from window data.
     Args:
-        window_data: List of price interval dictionaries with 'total' key
+        window_data: List of price interval dictionaries with 'total' key.
+        config_entry: Config entry to get display unit configuration.
     Returns:
-        Average price in minor currency units (cents/øre), or None if no prices
+        Tuple of (average price, median price) in display currency units,
+        or (None, None) if no prices.
     """
     prices = [float(i["total"]) for i in window_data if "total" in i]
     if not prices:
-        return None
-    # Return in minor currency units (cents/øre)
-    return round((sum(prices) / len(prices)) * 100, 2)
+        return None, None
+    # Calculate both mean and median
+    mean = calculate_mean(prices)
+    median = calculate_median(prices)
+    # Convert to display currency unit based on configuration
+    factor = get_display_unit_factor(config_entry)
+    return round(mean * factor, 2), round(median * factor, 2) if median is not None else None
 def aggregate_level_data(window_data: list[dict]) -> str | None:
@@ -100,25 +113,29 @@ def aggregate_window_data(
     value_type: str,
     threshold_low: float,
     threshold_high: float,
+    config_entry: ConfigEntry,
 ) -> str | float | None:
     """
     Aggregate data from multiple intervals based on value type.
     Unified helper that routes to appropriate aggregation function.
+    NOTE: This function is legacy code - rolling_hour calculator has its own implementation.
     Args:
-        window_data: List of price interval dictionaries
-        value_type: Type of value to aggregate ('price', 'level', or 'rating')
-        threshold_low: Low threshold for rating calculation
-        threshold_high: High threshold for rating calculation
+        window_data: List of price interval dictionaries.
+        value_type: Type of value to aggregate ('price', 'level', or 'rating').
+        threshold_low: Low threshold for rating calculation.
+        threshold_high: High threshold for rating calculation.
+        config_entry: Config entry to get display unit configuration.
     Returns:
-        Aggregated value (price as float, level/rating as str), or None if no data
+        Aggregated value (price as float, level/rating as str), or None if no data.
     """
     # Map value types to aggregation functions
     aggregators: dict[str, Callable] = {
-        "price": lambda data: aggregate_price_data(data),
+        "price": lambda data: aggregate_average_data(data, config_entry)[0],  # Use only average from tuple
         "level": lambda data: aggregate_level_data(data),
         "rating": lambda data: aggregate_rating_data(data, threshold_low, threshold_high),
     }
@@ -130,7 +147,7 @@ def aggregate_window_data(
 def get_hourly_price_value(
-    price_info: dict,
+    coordinator_data: dict,
     *,
     hour_offset: int,
     in_euro: bool,
@@ -143,9 +160,9 @@ def get_hourly_price_value(
     Kept for potential backward compatibility.
     Args:
-        price_info: Price information dict with 'today' and 'tomorrow' keys
+        coordinator_data: Coordinator data dict
         hour_offset: Hour offset from current time (positive=future, negative=past)
-        in_euro: If True, return price in major currency (EUR), else minor (cents/øre)
+        in_euro: If True, return price in base currency (EUR), else minor (cents/øre)
         time: TibberPricesTimeService instance (required)
     Returns:
@@ -161,30 +178,18 @@ def get_hourly_price_value(
     target_hour = target_datetime.hour
     target_date = target_datetime.date()
-    # Determine which day's data we need
-    day_key = "tomorrow" if target_date > now.date() else "today"
-    for price_data in price_info.get(day_key, []):
+    # Get all intervals (yesterday, today, tomorrow) via helper
+    all_intervals = get_intervals_for_day_offsets(coordinator_data, [-1, 0, 1])
+    # Search through all intervals to find the matching hour
+    for price_data in all_intervals:
         # Parse the timestamp and convert to local time
         starts_at = time.get_interval_time(price_data)
         if starts_at is None:
             continue
-        # Make sure it's in the local timezone for proper comparison
         # Compare using both hour and date for accuracy
         if starts_at.hour == target_hour and starts_at.date() == target_date:
            return get_price_value(float(price_data["total"]), in_euro=in_euro)
-    # If we didn't find the price in the expected day's data, check the other day
-    # This is a fallback for potential edge cases
-    other_day_key = "today" if day_key == "tomorrow" else "tomorrow"
-    for price_data in price_info.get(other_day_key, []):
-        starts_at = time.get_interval_time(price_data)
-        if starts_at is None:
-            continue
-        if starts_at.hour == target_hour and starts_at.date() == target_date:
-            return get_price_value(float(price_data["total"]), in_euro=in_euro)
     return None


@@ -0,0 +1,261 @@
"""
Type definitions for Tibber Prices sensor attributes.
These TypedDict definitions serve as **documentation** of the attribute structure
for each sensor type. They enable IDE autocomplete and type checking when working
with attribute dictionaries.
NOTE: In function signatures, we still use dict[str, Any] for flexibility,
but these TypedDict definitions document what keys and types are expected.
IMPORTANT: The Literal types defined here should be kept in sync with the
string constants in const.py, which are the single source of truth for runtime values.
"""
from __future__ import annotations
from typing import Literal, TypedDict
# ============================================================================
# Literal Type Definitions
# ============================================================================
# SYNC: Keep these in sync with constants in const.py
#
# const.py defines the runtime constants (single source of truth):
# - PRICE_LEVEL_VERY_CHEAP, PRICE_LEVEL_CHEAP, etc.
# - PRICE_RATING_LOW, PRICE_RATING_NORMAL, etc.
# - VOLATILITY_LOW, VOLATILITY_MODERATE, etc.
#
# These Literal types should mirror those constants for type safety.
# Price level literals (from Tibber API)
PriceLevel = Literal[
"VERY_CHEAP",
"CHEAP",
"NORMAL",
"EXPENSIVE",
"VERY_EXPENSIVE",
]
# Price rating literals (calculated values)
PriceRating = Literal[
"LOW",
"NORMAL",
"HIGH",
]
# Volatility level literals (based on coefficient of variation)
VolatilityLevel = Literal[
"LOW",
"MODERATE",
"HIGH",
"VERY_HIGH",
]
# Data completeness literals
DataCompleteness = Literal[
"complete",
"partial_yesterday",
"partial_today",
"partial_tomorrow",
"missing_yesterday",
"missing_today",
"missing_tomorrow",
]
# ============================================================================
# TypedDict Definitions
# ============================================================================
class BaseAttributes(TypedDict, total=False):
"""
Base attributes common to all sensors.
All sensor attributes include at minimum:
- timestamp: ISO 8601 string indicating when the state/attributes are valid
- error: Optional error message if something went wrong
"""
timestamp: str
error: str
class IntervalPriceAttributes(BaseAttributes, total=False):
"""
Attributes for interval price sensors (current/next/previous).
These sensors show price information for a specific 15-minute interval.
"""
level_value: int # Numeric value for price level (1-5)
level_id: PriceLevel # String identifier for price level
icon_color: str # Optional icon color based on level
class IntervalLevelAttributes(BaseAttributes, total=False):
"""
Attributes for interval level sensors.
These sensors show the price level classification for an interval.
"""
icon_color: str # Icon color based on level
class IntervalRatingAttributes(BaseAttributes, total=False):
"""
Attributes for interval rating sensors.
These sensors show the price rating (LOW/NORMAL/HIGH) for an interval.
"""
rating_value: int # Numeric value for price rating (1-3)
rating_id: PriceRating # String identifier for price rating
icon_color: str # Optional icon color based on rating
class RollingHourAttributes(BaseAttributes, total=False):
"""
Attributes for rolling hour sensors.
These sensors aggregate data across 5 intervals (2 before + current + 2 after).
"""
icon_color: str # Optional icon color based on aggregated level
class DailyStatPriceAttributes(BaseAttributes, total=False):
"""
Attributes for daily statistics price sensors (min/max/avg).
These sensors show price statistics for a full calendar day.
"""
# No additional attributes for daily price stats beyond base
class DailyStatRatingAttributes(BaseAttributes, total=False):
"""
Attributes for daily statistics rating sensors.
These sensors show rating statistics for a full calendar day.
"""
diff_percent: str # Key is actually "diff_%" - percentage difference
level_id: PriceRating # Rating level identifier
level_value: int # Numeric rating value (1-3)
class Window24hAttributes(BaseAttributes, total=False):
"""
Attributes for 24-hour window sensors (trailing/leading).
These sensors analyze price data across a 24-hour window from current time.
"""
interval_count: int # Number of intervals in the window
class VolatilityAttributes(BaseAttributes, total=False):
"""
Attributes for volatility sensors.
These sensors analyze price variation and spread across time periods.
"""
today_spread: float # Price range for today (max - min)
today_volatility: str # Volatility level for today
interval_count_today: int # Number of intervals analyzed today
tomorrow_spread: float # Price range for tomorrow (max - min)
tomorrow_volatility: str # Volatility level for tomorrow
interval_count_tomorrow: int # Number of intervals analyzed tomorrow
class TrendAttributes(BaseAttributes, total=False):
"""
Attributes for trend sensors.
These sensors analyze price trends and forecast future movements.
Trend attributes are complex and may vary based on trend type.
"""
# Trend attributes are dynamic and vary by sensor type
# Keep flexible with total=False
class TimingAttributes(BaseAttributes, total=False):
"""
Attributes for period timing sensors (best_price/peak_price timing).
These sensors track timing information for best/peak price periods.
"""
icon_color: str # Icon color based on timing status
class FutureAttributes(BaseAttributes, total=False):
"""
Attributes for future forecast sensors.
These sensors provide N-hour forecasts starting from next interval.
"""
interval_count: int # Number of intervals in forecast
hours: int # Number of hours in forecast window
class LifecycleAttributes(BaseAttributes, total=False):
"""
Attributes for lifecycle/diagnostic sensors.
These sensors provide system information and cache status.
"""
cache_age: str # Human-readable cache age
cache_age_minutes: int # Cache age in minutes
cache_validity: str # Cache validity status
last_api_fetch: str # ISO 8601 timestamp of last API fetch
last_cache_update: str # ISO 8601 timestamp of last cache update
data_completeness: DataCompleteness # Data completeness status
yesterday_available: bool # Whether yesterday data exists
today_available: bool # Whether today data exists
tomorrow_available: bool # Whether tomorrow data exists
tomorrow_expected_after: str # Time when tomorrow data expected
next_api_poll: str # ISO 8601 timestamp of next API poll
next_midnight_turnover: str # ISO 8601 timestamp of next midnight turnover
updates_today: int # Number of API updates today
last_turnover: str # ISO 8601 timestamp of last midnight turnover
last_error: str # Last error message if any
class MetadataAttributes(BaseAttributes, total=False):
"""
Attributes for metadata sensors (home info, metering point).
These sensors provide Tibber account and home metadata.
Metadata attributes vary by sensor type.
"""
# Metadata attributes are dynamic and vary by sensor type
# Keep flexible with total=False
# Union type for all sensor attributes (for documentation purposes)
# In actual code, use dict[str, Any] for flexibility
SensorAttributes = (
IntervalPriceAttributes
| IntervalLevelAttributes
| IntervalRatingAttributes
| RollingHourAttributes
| DailyStatPriceAttributes
| DailyStatRatingAttributes
| Window24hAttributes
| VolatilityAttributes
| TrendAttributes
| TimingAttributes
| FutureAttributes
| LifecycleAttributes
| MetadataAttributes
)
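These TypedDicts are documentation-first, but the pattern does pay off at type-check time even though runtime code keeps using `dict[str, Any]`. A minimal, self-contained illustration with a trimmed-down version of the interval attributes (the `build_attributes` helper is invented for the example):

```python
from typing import Literal, TypedDict

PriceLevel = Literal["VERY_CHEAP", "CHEAP", "NORMAL", "EXPENSIVE", "VERY_EXPENSIVE"]

class IntervalPriceAttributesExample(TypedDict, total=False):
    """Trimmed example of the interval price attribute shape."""

    timestamp: str
    level_value: int  # Numeric value for price level (1-5)
    level_id: PriceLevel

def build_attributes(level_value: int, level_id: PriceLevel) -> IntervalPriceAttributesExample:
    # total=False: every key is optional, so callers may omit any of them
    return {"level_value": level_value, "level_id": level_id}

attrs = build_attributes(1, "VERY_CHEAP")
# At runtime this is a plain dict; mypy/pyright flag a misspelled key
# or a level_id outside the Literal set, which is the whole benefit.
```

This is also why the SYNC note matters: if a constant in `const.py` drifts from the Literal, the type checker starts rejecting valid runtime values.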


@ -2,15 +2,17 @@
from __future__ import annotations from __future__ import annotations
from typing import TYPE_CHECKING from typing import TYPE_CHECKING, cast
from custom_components.tibber_prices.utils.average import ( from custom_components.tibber_prices.utils.average import (
calculate_current_leading_avg,
calculate_current_leading_max, calculate_current_leading_max,
calculate_current_leading_mean,
calculate_current_leading_min, calculate_current_leading_min,
calculate_current_trailing_avg,
calculate_current_trailing_max, calculate_current_trailing_max,
calculate_current_trailing_mean,
calculate_current_trailing_min, calculate_current_trailing_min,
calculate_mean,
calculate_median,
) )
if TYPE_CHECKING: if TYPE_CHECKING:
@@ -41,6 +43,7 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
     get_next_avg_n_hours_value: Callable[[int], float | None],
     get_data_timestamp: Callable[[], datetime | None],
     get_chart_data_export_value: Callable[[], str | None],
+    get_chart_metadata_value: Callable[[], str | None],
 ) -> dict[str, Callable]:
     """
     Build mapping from entity key to value getter callable.
@@ -61,11 +64,20 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         get_next_avg_n_hours_value: Method for next N-hour average forecasts
         get_data_timestamp: Method for data timestamp sensor
         get_chart_data_export_value: Method for chart data export sensor
+        get_chart_metadata_value: Method for chart metadata sensor

     Returns:
         Dictionary mapping entity keys to their value getter callables.
     """

+    def _minutes_to_hours(value: float | None) -> float | None:
+        """Convert minutes to hours for duration-oriented sensors."""
+        if value is None:
+            return None
+        return value / 60
+
     return {
         # ================================================================
         # INTERVAL-BASED SENSORS - via IntervalCalculator
@@ -82,7 +94,7 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         "current_interval_price": lambda: interval_calculator.get_interval_value(
             interval_offset=0, value_type="price", in_euro=False
         ),
-        "current_interval_price_major": lambda: interval_calculator.get_interval_value(
+        "current_interval_price_base": lambda: interval_calculator.get_interval_value(
             interval_offset=0, value_type="price", in_euro=True
         ),
         "next_interval_price": lambda: interval_calculator.get_interval_value(
@@ -128,14 +140,14 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         "highest_price_today": lambda: daily_stat_calculator.get_daily_stat_value(day="today", stat_func=max),
         "average_price_today": lambda: daily_stat_calculator.get_daily_stat_value(
             day="today",
-            stat_func=lambda prices: sum(prices) / len(prices),
+            stat_func=lambda prices: (calculate_mean(prices), calculate_median(prices)),
         ),
         # Tomorrow statistics sensors
         "lowest_price_tomorrow": lambda: daily_stat_calculator.get_daily_stat_value(day="tomorrow", stat_func=min),
         "highest_price_tomorrow": lambda: daily_stat_calculator.get_daily_stat_value(day="tomorrow", stat_func=max),
         "average_price_tomorrow": lambda: daily_stat_calculator.get_daily_stat_value(
             day="tomorrow",
-            stat_func=lambda prices: sum(prices) / len(prices),
+            stat_func=lambda prices: (calculate_mean(prices), calculate_median(prices)),
         ),
         # Daily aggregated level sensors
         "yesterday_price_level": lambda: daily_stat_calculator.get_daily_aggregated_value(
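The `stat_func` change above replaces a bare arithmetic average with a `(mean, median)` pair. Assuming `calculate_mean` and `calculate_median` behave like their stdlib counterparts (an assumption; their real implementations live in `utils/average.py`), the before/after difference is roughly:

```python
from statistics import mean, median

prices = [0.18, 0.22, 0.20, 0.35]

# Old stat_func: single arithmetic average
old_value = sum(prices) / len(prices)

# New stat_func: (mean, median) pair, sketched here with stdlib
# stand-ins for calculate_mean / calculate_median
new_value = (mean(prices), median(prices))

print(round(old_value, 4))                       # 0.2375
print(tuple(round(v, 4) for v in new_value))     # (0.2375, 0.21)
```

The median is less sensitive to a single price spike (the 0.35 outlier here), which is presumably why both statistics are now carried.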
@@ -160,10 +172,10 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         # ================================================================
         # Trailing and leading average sensors
         "trailing_price_average": lambda: window_24h_calculator.get_24h_window_value(
-            stat_func=calculate_current_trailing_avg,
+            stat_func=calculate_current_trailing_mean,
         ),
         "leading_price_average": lambda: window_24h_calculator.get_24h_window_value(
-            stat_func=calculate_current_leading_avg,
+            stat_func=calculate_current_leading_mean,
         ),
         # Trailing and leading min/max sensors
         "trailing_price_min": lambda: window_24h_calculator.get_24h_window_value(
@@ -239,11 +251,17 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         "best_price_end_time": lambda: timing_calculator.get_period_timing_value(
             period_type="best_price", value_type="end_time"
         ),
-        "best_price_period_duration": lambda: timing_calculator.get_period_timing_value(
-            period_type="best_price", value_type="period_duration"
+        "best_price_period_duration": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="best_price", value_type="period_duration"),
+            )
         ),
-        "best_price_remaining_minutes": lambda: timing_calculator.get_period_timing_value(
-            period_type="best_price", value_type="remaining_minutes"
+        "best_price_remaining_minutes": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="best_price", value_type="remaining_minutes"),
+            )
         ),
         "best_price_progress": lambda: timing_calculator.get_period_timing_value(
             period_type="best_price", value_type="progress"
@@ -251,18 +269,27 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         "best_price_next_start_time": lambda: timing_calculator.get_period_timing_value(
             period_type="best_price", value_type="next_start_time"
         ),
-        "best_price_next_in_minutes": lambda: timing_calculator.get_period_timing_value(
-            period_type="best_price", value_type="next_in_minutes"
+        "best_price_next_in_minutes": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="best_price", value_type="next_in_minutes"),
+            )
         ),
         # Peak Price timing sensors
         "peak_price_end_time": lambda: timing_calculator.get_period_timing_value(
             period_type="peak_price", value_type="end_time"
         ),
-        "peak_price_period_duration": lambda: timing_calculator.get_period_timing_value(
-            period_type="peak_price", value_type="period_duration"
+        "peak_price_period_duration": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="peak_price", value_type="period_duration"),
+            )
         ),
-        "peak_price_remaining_minutes": lambda: timing_calculator.get_period_timing_value(
-            period_type="peak_price", value_type="remaining_minutes"
+        "peak_price_remaining_minutes": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="peak_price", value_type="remaining_minutes"),
+            )
         ),
         "peak_price_progress": lambda: timing_calculator.get_period_timing_value(
             period_type="peak_price", value_type="progress"
@@ -270,9 +297,14 @@ def get_value_getter_mapping(  # noqa: PLR0913 - needs all calculators as parame
         "peak_price_next_start_time": lambda: timing_calculator.get_period_timing_value(
             period_type="peak_price", value_type="next_start_time"
         ),
-        "peak_price_next_in_minutes": lambda: timing_calculator.get_period_timing_value(
-            period_type="peak_price", value_type="next_in_minutes"
+        "peak_price_next_in_minutes": lambda: _minutes_to_hours(
+            cast(
+                "float | None",
+                timing_calculator.get_period_timing_value(period_type="peak_price", value_type="next_in_minutes"),
+            )
         ),
         # Chart data export sensor
         "chart_data_export": get_chart_data_export_value,
+        # Chart metadata sensor
+        "chart_metadata": get_chart_metadata_value,
     }
