chore: Remove outdated documentation for sensors and troubleshooting in version v0.25.0b0; update versioning logic to skip documentation versioning for beta releases.

This commit is contained in:
Julian Pawlowski 2025-12-25 23:06:27 +00:00
parent 3624f1c9a8
commit c6b34984fa
36 changed files with 25 additions and 8682 deletions


@@ -33,6 +33,17 @@ jobs:
with:
fetch-depth: 0 # Needed for version timestamps
- name: Detect prerelease tag (beta/rc)
id: taginfo
run: |
if [[ "${GITHUB_REF}" =~ ^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(b[0-9]+|rc[0-9]+)$ ]]; then
echo "is_prerelease=true" >> "$GITHUB_OUTPUT"
echo "Detected prerelease tag: ${GITHUB_REF}"
else
echo "is_prerelease=false" >> "$GITHUB_OUTPUT"
echo "Stable tag or branch: ${GITHUB_REF}"
fi
- uses: actions/setup-node@v6
with:
node-version: 24
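The bash regex in the new `taginfo` step can be sanity-checked with an equivalent Python pattern (the helper name here is illustrative, not part of the workflow):

```python
import re

# Mirrors the workflow's bash regex, anchored to the full ref
PRERELEASE_TAG = re.compile(r"^refs/tags/v[0-9]+\.[0-9]+\.[0-9]+(b[0-9]+|rc[0-9]+)$")

def is_prerelease(ref: str) -> bool:
    """True for beta/rc tags like v0.25.0b0 or v1.2.3rc1, False otherwise."""
    return PRERELEASE_TAG.match(ref) is not None
```

Stable tags such as `refs/tags/v0.25.0` fall through to the snapshot and cleanup steps; beta and rc tags skip them.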
@@ -47,7 +58,7 @@ jobs:
run: npm ci
- name: Create user docs version snapshot on tag
if: startsWith(github.ref, 'refs/tags/v')
if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
working-directory: docs/user
run: |
TAG_VERSION=${GITHUB_REF#refs/tags/}
@@ -61,7 +72,7 @@ jobs:
fi
- name: Cleanup old user docs versions
if: startsWith(github.ref, 'refs/tags/v')
if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
working-directory: docs/user
run: |
chmod +x ../cleanup-old-versions.sh
@@ -80,7 +91,7 @@ jobs:
run: npm ci
- name: Create developer docs version snapshot on tag
if: startsWith(github.ref, 'refs/tags/v')
if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
working-directory: docs/developer
run: |
TAG_VERSION=${GITHUB_REF#refs/tags/}
@@ -94,7 +105,7 @@ jobs:
fi
- name: Cleanup old developer docs versions
if: startsWith(github.ref, 'refs/tags/v')
if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
working-directory: docs/developer
run: |
chmod +x ../cleanup-old-versions.sh
@@ -118,7 +129,7 @@ jobs:
# COMMIT VERSION SNAPSHOTS
- name: Commit version snapshots back to repository
if: startsWith(github.ref, 'refs/tags/v')
if: startsWith(github.ref, 'refs/tags/v') && steps.taginfo.outputs.is_prerelease != 'true'
run: |
TAG_VERSION=${GITHUB_REF#refs/tags/}


@@ -1,186 +0,0 @@
---
comments: false
---
# API Reference
Documentation of the Tibber GraphQL API used by this integration.
## GraphQL Endpoint
```
https://api.tibber.com/v1-beta/gql
```
**Authentication:** Bearer token in `Authorization` header
## Queries Used
### User Data Query
Fetches home information and metadata:
```graphql
query {
  viewer {
    homes {
      id
      appNickname
      address {
        address1
        postalCode
        city
        country
      }
      timeZone
      currentSubscription {
        priceInfo {
          current {
            currency
          }
        }
      }
      meteringPointData {
        consumptionEan
        gridAreaCode
      }
    }
  }
}
```
**Cached for:** 24 hours
### Price Data Query
Fetches quarter-hourly prices:
```graphql
query($homeId: ID!) {
  viewer {
    home(id: $homeId) {
      currentSubscription {
        priceInfo {
          range(resolution: QUARTER_HOURLY, first: 384) {
            nodes {
              total
              startsAt
              level
            }
          }
        }
      }
    }
  }
}
```
**Parameters:**
- `homeId`: Tibber home identifier
- `resolution`: Always `QUARTER_HOURLY`
- `first`: 384 intervals (4 days of data)
**Cached until:** Midnight local time
## Rate Limits
Tibber API rate limits (as of 2024):
- **5000 requests per hour** per token
- **Burst limit:** 100 requests per minute
Integration stays well below these limits:
- Polls every 15 minutes = 96 requests/day
- User data cached for 24h = 1 request/day
- **Total:** ~100 requests/day per home
## Response Format
### Price Node Structure
```json
{
  "total": 0.2456,
  "startsAt": "2024-12-06T14:00:00.000+01:00",
  "level": "NORMAL"
}
```
**Fields:**
- `total`: Price including VAT and fees (currency's major unit, e.g., EUR)
- `startsAt`: ISO 8601 timestamp with timezone
- `level`: Tibber's own classification (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE)
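As an illustration, a price node can be parsed with the standard library alone (the ct/kWh conversion assumes a EUR price):

```python
from datetime import datetime

# Sample node mirroring the structure above
node = {"total": 0.2456, "startsAt": "2024-12-06T14:00:00.000+01:00", "level": "NORMAL"}

starts = datetime.fromisoformat(node["startsAt"])  # timezone-aware datetime
price_subunit = node["total"] * 100                # EUR -> ct/kWh
```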
### Currency Information
```json
{
  "currency": "EUR"
}
```
Supported currencies:
- `EUR` (Euro) - displayed as ct/kWh
- `NOK` (Norwegian Krone) - displayed as øre/kWh
- `SEK` (Swedish Krona) - displayed as öre/kWh
## Error Handling
### Common Error Responses
**Invalid Token:**
```json
{
  "errors": [{
    "message": "Unauthorized",
    "extensions": {
      "code": "UNAUTHENTICATED"
    }
  }]
}
```
**Rate Limit Exceeded:**
```json
{
  "errors": [{
    "message": "Too Many Requests",
    "extensions": {
      "code": "RATE_LIMIT_EXCEEDED"
    }
  }]
}
```
**Home Not Found:**
```json
{
  "errors": [{
    "message": "Home not found",
    "extensions": {
      "code": "NOT_FOUND"
    }
  }]
}
```
Integration handles these with:
- Exponential backoff retry (3 attempts)
- ConfigEntryAuthFailed for auth errors
- ConfigEntryNotReady for temporary failures
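A sketch of this handling (the error-code mapping and delays are illustrative; the real integration raises Home Assistant's `ConfigEntryAuthFailed`/`ConfigEntryNotReady`):

```python
import time

def classify_error(code: str) -> str:
    # Rough mapping of GraphQL error codes to integration behavior
    if code == "UNAUTHENTICATED":
        return "auth_failed"   # -> ConfigEntryAuthFailed (no retry)
    return "retry_later"       # -> ConfigEntryNotReady (temporary failure)

def fetch_with_backoff(fetch, attempts: int = 3, base_delay: float = 1.0):
    """Retry transient failures with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```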
## Data Transformation
Raw API data is enriched with:
- **Trailing 24h average** - Calculated from previous intervals
- **Leading 24h average** - Calculated from future intervals
- **Price difference %** - Deviation from average
- **Custom rating** - Based on user thresholds (different from Tibber's `level`)
See `utils/price.py` for enrichment logic.
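A minimal sketch of that enrichment (the 96-interval window and ±10% rating thresholds are illustrative, not the integration's defaults):

```python
def enrich(intervals, low=-10.0, high=10.0):
    """Add trailing average, % difference, and a threshold-based rating."""
    out = []
    for i, interval in enumerate(intervals):
        prior = intervals[max(0, i - 96):i]  # up to 96 quarter-hours = 24h
        avg = sum(p["total"] for p in prior) / len(prior) if prior else interval["total"]
        diff = (interval["total"] - avg) / avg * 100 if avg else 0.0
        rating = "LOW" if diff < low else "HIGH" if diff > high else "NORMAL"
        out.append({**interval, "trailing_avg_24h": avg,
                    "difference": diff, "rating_level": rating})
    return out
```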
---
💡 **External Resources:**
- [Tibber API Documentation](https://developer.tibber.com/docs/overview)
- [GraphQL Explorer](https://developer.tibber.com/explorer)
- [Get API Token](https://developer.tibber.com/settings/access-token)


@@ -1,358 +0,0 @@
---
comments: false
---
# Architecture
This document provides a visual overview of the integration's architecture, focusing on end-to-end data flow and caching layers.
For detailed implementation patterns, see [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md).
---
## End-to-End Data Flow
```mermaid
flowchart TB
%% External Systems
TIBBER[("🌐 Tibber GraphQL API<br/>api.tibber.com")]
HA[("🏠 Home Assistant<br/>Core")]
%% Entry Point
SETUP["__init__.py<br/>async_setup_entry()"]
%% Core Components
API["api.py<br/>TibberPricesApiClient<br/><br/>GraphQL queries"]
COORD["coordinator.py<br/>TibberPricesDataUpdateCoordinator<br/><br/>Orchestrates updates every 15min"]
%% Caching Layers
CACHE_API["💾 API Cache<br/>coordinator/cache.py<br/><br/>HA Storage (persistent)<br/>User: 24h | Prices: until midnight"]
CACHE_TRANS["💾 Transformation Cache<br/>coordinator/data_transformation.py<br/><br/>Memory (enriched prices)<br/>Until config change or midnight"]
CACHE_PERIOD["💾 Period Cache<br/>coordinator/periods.py<br/><br/>Memory (calculated periods)<br/>Hash-based invalidation"]
CACHE_CONFIG["💾 Config Cache<br/>coordinator/*<br/><br/>Memory (parsed options)<br/>Until config change"]
CACHE_TRANS_TEXT["💾 Translation Cache<br/>const.py<br/><br/>Memory (UI strings)<br/>Until HA restart"]
%% Processing Components
TRANSFORM["coordinator/data_transformation.py<br/>DataTransformer<br/><br/>Enrich prices with statistics"]
PERIODS["coordinator/periods.py<br/>PeriodCalculator<br/><br/>Calculate best/peak periods"]
ENRICH["price_utils.py + average_utils.py<br/><br/>Calculate trailing/leading averages<br/>rating_level, differences"]
%% Output Components
SENSORS["sensor/<br/>TibberPricesSensor<br/><br/>120+ price/level/rating sensors"]
BINARY["binary_sensor/<br/>TibberPricesBinarySensor<br/><br/>Period indicators"]
SERVICES["services/<br/><br/>Custom service endpoints<br/>(get_chartdata, ApexCharts)"]
%% Flow Connections
TIBBER -->|"Query user data<br/>Query prices<br/>(yesterday/today/tomorrow)"| API
API -->|"Raw GraphQL response"| COORD
COORD -->|"Check cache first"| CACHE_API
CACHE_API -.->|"Cache hit:<br/>Return cached"| COORD
CACHE_API -.->|"Cache miss:<br/>Fetch from API"| API
COORD -->|"Raw price data"| TRANSFORM
TRANSFORM -->|"Check cache"| CACHE_TRANS
CACHE_TRANS -.->|"Cache hit"| TRANSFORM
CACHE_TRANS -.->|"Cache miss"| ENRICH
ENRICH -->|"Enriched data"| TRANSFORM
TRANSFORM -->|"Enriched price data"| COORD
COORD -->|"Enriched data"| PERIODS
PERIODS -->|"Check cache"| CACHE_PERIOD
CACHE_PERIOD -.->|"Hash match:<br/>Return cached"| PERIODS
CACHE_PERIOD -.->|"Hash mismatch:<br/>Recalculate"| PERIODS
PERIODS -->|"Calculated periods"| COORD
COORD -->|"Complete data<br/>(prices + periods)"| SENSORS
COORD -->|"Complete data"| BINARY
COORD -->|"Data access"| SERVICES
SENSORS -->|"Entity states"| HA
BINARY -->|"Entity states"| HA
SERVICES -->|"Service responses"| HA
%% Config access
CACHE_CONFIG -.->|"Parsed options"| TRANSFORM
CACHE_CONFIG -.->|"Parsed options"| PERIODS
CACHE_TRANS_TEXT -.->|"UI strings"| SENSORS
CACHE_TRANS_TEXT -.->|"UI strings"| BINARY
SETUP -->|"Initialize"| COORD
SETUP -->|"Register"| SENSORS
SETUP -->|"Register"| BINARY
SETUP -->|"Register"| SERVICES
%% Styling
classDef external fill:#e1f5ff,stroke:#0288d1,stroke-width:3px
classDef cache fill:#fff3e0,stroke:#f57c00,stroke-width:2px
classDef processing fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
classDef output fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
class TIBBER,HA external
class CACHE_API,CACHE_TRANS,CACHE_PERIOD,CACHE_CONFIG,CACHE_TRANS_TEXT cache
class TRANSFORM,PERIODS,ENRICH processing
class SENSORS,BINARY,SERVICES output
```
### Flow Description
1. **Setup** (`__init__.py`)
- Integration loads, creates coordinator instance
- Registers entity platforms (sensor, binary_sensor)
- Sets up custom services
2. **Data Fetch** (every 15 minutes)
- Coordinator triggers update via `api.py`
- API client checks **persistent cache** first (`coordinator/cache.py`)
- If cache valid → return cached data
- If cache stale → query Tibber GraphQL API
- Store fresh data in persistent cache (survives HA restart)
3. **Price Enrichment**
- Coordinator passes raw prices to `DataTransformer`
- Transformer checks **transformation cache** (memory)
- If cache valid → return enriched data
- If cache invalid → enrich via `price_utils.py` + `average_utils.py`
- Calculate 24h trailing/leading averages
- Calculate price differences (% from average)
- Assign rating levels (LOW/NORMAL/HIGH)
- Store enriched data in transformation cache
4. **Period Calculation**
- Coordinator passes enriched data to `PeriodCalculator`
- Calculator computes **hash** from prices + config
- If hash matches cache → return cached periods
- If hash differs → recalculate best/peak price periods
- Store periods with new hash
5. **Entity Updates**
- Coordinator provides complete data (prices + periods)
- Sensors read values via unified handlers
- Binary sensors evaluate period states
- Entities update on quarter-hour boundaries (00/15/30/45)
6. **Service Calls**
- Custom services access coordinator data directly
- Return formatted responses (JSON, ApexCharts format)
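The six steps above can be condensed into a heavily stubbed sketch (class and method names are illustrative; the real caches, transformer, and calculator are separate components):

```python
class Coordinator:
    """Illustrative update cycle: cache-first fetch, enrich, calculate, assemble."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._raw = None  # stands in for the persistent API cache

    def update(self):
        if self._raw is None:                  # step 2: cache first, API only on miss
            self._raw = self._fetch()
        # step 3 (stubbed): enrichment would add averages, differences, ratings
        enriched = [{**iv, "rating_level": "NORMAL"} for iv in self._raw]
        # step 4 (stubbed): period calculation filters enriched intervals
        periods = [iv for iv in enriched if iv["rating_level"] == "LOW"]
        return {"prices": enriched, "periods": periods}  # step 5: entities read this
```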
---
## Caching Architecture
### Overview
The integration uses **5 independent caching layers** for optimal performance:
| Layer | Location | Lifetime | Invalidation | Memory |
|-------|----------|----------|--------------|--------|
| **API Cache** | `coordinator/cache.py` | 24h (user)<br/>Until midnight (prices) | Automatic | 50KB |
| **Translation Cache** | `const.py` | Until HA restart | Never | 5KB |
| **Config Cache** | `coordinator/*` | Until config change | Explicit | 1KB |
| **Period Cache** | `coordinator/periods.py` | Until data/config change | Hash-based | 10KB |
| **Transformation Cache** | `coordinator/data_transformation.py` | Until midnight/config | Automatic | 60KB |
**Total cache overhead:** ~126KB per coordinator instance (main entry + subentries)
### Cache Coordination
```mermaid
flowchart LR
USER[("User changes options")]
MIDNIGHT[("Midnight turnover")]
NEWDATA[("Tomorrow data arrives")]
USER -->|"Explicit invalidation"| CONFIG["Config Cache<br/>❌ Clear"]
USER -->|"Explicit invalidation"| PERIOD["Period Cache<br/>❌ Clear"]
USER -->|"Explicit invalidation"| TRANS["Transformation Cache<br/>❌ Clear"]
MIDNIGHT -->|"Date validation"| API["API Cache<br/>❌ Clear prices"]
MIDNIGHT -->|"Date check"| TRANS
NEWDATA -->|"Hash mismatch"| PERIOD
CONFIG -.->|"Next access"| CONFIG_NEW["Reparse options"]
PERIOD -.->|"Next access"| PERIOD_NEW["Recalculate"]
TRANS -.->|"Next access"| TRANS_NEW["Re-enrich"]
API -.->|"Next access"| API_NEW["Fetch from API"]
classDef invalid fill:#ffebee,stroke:#c62828,stroke-width:2px
classDef rebuild fill:#e8f5e9,stroke:#388e3c,stroke-width:2px
class CONFIG,PERIOD,TRANS,API invalid
class CONFIG_NEW,PERIOD_NEW,TRANS_NEW,API_NEW rebuild
```
**Key insight:** No cascading invalidations - each cache is independent and rebuilds on-demand.
For detailed cache behavior, see [Caching Strategy](./caching-strategy.md).
---
## Component Responsibilities
### Core Components
| Component | File | Responsibility |
|-----------|------|----------------|
| **API Client** | `api.py` | GraphQL queries to Tibber, retry logic, error handling |
| **Coordinator** | `coordinator.py` | Update orchestration, cache management, absolute-time scheduling with boundary tolerance |
| **Data Transformer** | `coordinator/data_transformation.py` | Price enrichment (averages, ratings, differences) |
| **Period Calculator** | `coordinator/periods.py` | Best/peak price period calculation with relaxation |
| **Sensors** | `sensor/` | 80+ entities for prices, levels, ratings, statistics |
| **Binary Sensors** | `binary_sensor/` | Period indicators (best/peak price active) |
| **Services** | `services/` | Custom service endpoints (get_chartdata, get_apexcharts_yaml, refresh_user_data) |
### Sensor Architecture (Calculator Pattern)
The sensor platform uses the **Calculator Pattern** for a clean separation of concerns (refactored Nov 2025):
| Component | Files | Lines | Responsibility |
|-----------|-------|-------|----------------|
| **Entity Class** | `sensor/core.py` | 909 | Entity lifecycle, coordinator, delegates to calculators |
| **Calculators** | `sensor/calculators/` | 1,838 | Business logic (8 specialized calculators) |
| **Attributes** | `sensor/attributes/` | 1,209 | State presentation (8 specialized modules) |
| **Routing** | `sensor/value_getters.py` | 276 | Centralized sensor → calculator mapping |
| **Chart Export** | `sensor/chart_data.py` | 144 | Service call handling, YAML parsing |
| **Helpers** | `sensor/helpers.py` | 188 | Aggregation functions, utilities |
**Calculator Package** (`sensor/calculators/`):
- `base.py` - Abstract BaseCalculator with coordinator access
- `interval.py` - Single interval calculations (current/next/previous)
- `rolling_hour.py` - 5-interval rolling windows
- `daily_stat.py` - Calendar day min/max/avg statistics
- `window_24h.py` - Trailing/leading 24h windows
- `volatility.py` - Price volatility analysis
- `trend.py` - Complex trend analysis with caching
- `timing.py` - Best/peak price period timing
- `metadata.py` - Home/metering metadata
**Benefits:**
- 58% reduction in core.py (2,170 → 909 lines)
- Clear separation: Calculators (logic) vs Attributes (presentation)
- Independent testability for each calculator
- Easy to add sensors: Choose calculation pattern, add to routing
### Helper Utilities
| Utility | File | Purpose |
|---------|------|---------|
| **Price Utils** | `utils/price.py` | Rating calculation, enrichment, level aggregation |
| **Average Utils** | `utils/average.py` | Trailing/leading 24h average calculations |
| **Entity Utils** | `entity_utils/` | Shared icon/color/attribute logic |
| **Translations** | `const.py` | Translation loading and caching |
---
## Key Patterns
### 1. Dual Translation System
- **Standard translations** (`/translations/*.json`): HA-compliant schema for entity names
- **Custom translations** (`/custom_translations/*.json`): Extended descriptions, usage tips
- Both loaded at integration setup, cached in memory
- Access via `get_translation()` helper function
### 2. Price Data Enrichment
All quarter-hourly price intervals get augmented via `utils/price.py`:
```python
# Original from Tibber API
{
    "startsAt": "2025-11-03T14:00:00+01:00",
    "total": 0.2534,
    "level": "NORMAL"
}

# After enrichment (utils/price.py)
{
    "startsAt": "2025-11-03T14:00:00+01:00",
    "total": 0.2534,
    "level": "NORMAL",
    "trailing_avg_24h": 0.2312,  # ← Added: 24h trailing average
    "difference": 9.6,           # ← Added: % diff from trailing avg
    "rating_level": "NORMAL"     # ← Added: LOW/NORMAL/HIGH based on thresholds
}
```
### 3. Quarter-Hour Precision
- **API polling**: Every 15 minutes (coordinator fetch cycle)
- **Entity updates**: On 00/15/30/45-minute boundaries via `coordinator/listeners.py`
- **Timer scheduling**: Uses `async_track_utc_time_change(minute=[0, 15, 30, 45], second=0)`
- HA may trigger ±few milliseconds before/after exact boundary
- Smart boundary tolerance (±2 seconds) handles scheduling jitter in `sensor/helpers.py`
- If HA schedules at 14:59:58 → rounds to 15:00:00 (shows new interval data)
- If HA restarts at 14:59:30 → stays at 14:45:00 (shows current interval data)
- **Absolute time tracking**: Timer plans for **all future boundaries** (not relative delays)
- Prevents double-updates (if triggered at 14:59:58, next trigger is 15:15:00, not 15:00:00)
- **Result**: Current price sensors update without waiting for next API poll
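The boundary rounding described above can be sketched as follows (the ±2 s tolerance follows the text; the helper name is illustrative):

```python
from datetime import datetime, timedelta

TOLERANCE = timedelta(seconds=2)  # boundary tolerance for scheduling jitter

def effective_interval_start(now: datetime) -> datetime:
    """Round up to the next quarter-hour if within tolerance, else floor."""
    floored = now.replace(minute=(now.minute // 15) * 15, second=0, microsecond=0)
    next_boundary = floored + timedelta(minutes=15)
    if next_boundary - now <= TOLERANCE:
        return next_boundary  # 14:59:58 -> 15:00:00 (new interval)
    return floored            # 14:59:30 -> 14:45:00 (current interval)
```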
### 4. Calculator Pattern (Sensor Platform)
Sensors are organized by **calculation method** (refactored Nov 2025):
**Unified Handler Methods** (`sensor/core.py`):
- `_get_interval_value(offset, type)` - current/next/previous intervals
- `_get_rolling_hour_value(offset, type)` - 5-interval rolling windows
- `_get_daily_stat_value(day, stat_func)` - calendar day min/max/avg
- `_get_24h_window_value(stat_func)` - trailing/leading statistics
**Routing** (`sensor/value_getters.py`):
- Single source of truth mapping 80+ entity keys to calculator methods
- Organized by calculation type (Interval, Rolling Hour, Daily Stats, etc.)
**Calculators** (`sensor/calculators/`):
- Each calculator inherits from `BaseCalculator` with coordinator access
- Focused responsibility: `IntervalCalculator`, `TrendCalculator`, etc.
- Complex logic isolated (e.g., `TrendCalculator` has internal caching)
**Attributes** (`sensor/attributes/`):
- Separate from business logic, handles state presentation
- Builds extra_state_attributes dicts for entity classes
- Unified builders: `build_sensor_attributes()`, `build_extra_state_attributes()`
**Benefits:**
- Minimal code duplication across 80+ sensors
- Clear separation of concerns (calculation vs presentation)
- Easy to extend: Add sensor → choose pattern → add to routing
- Independent testability for each component
---
## Performance Characteristics
### API Call Reduction
- **Without caching:** 96 API calls/day (every 15 min)
- **With caching:** ~1-2 API calls/day (only when cache expires)
- **Reduction:** ~98%
### CPU Optimization
| Optimization | Location | Savings |
|--------------|----------|---------|
| Config caching | `coordinator/*` | ~50% on config checks |
| Period caching | `coordinator/periods.py` | ~70% on period recalculation |
| Lazy logging | Throughout | ~15% on log-heavy operations |
| Import optimization | Module structure | ~20% faster loading |
### Memory Usage
- **Per coordinator instance:** ~126KB cache overhead
- **Typical setup:** 1 main + 2 subentries = ~378KB total
- **Redundancy eliminated:** 14% reduction (10KB saved per coordinator)
---
## Related Documentation
- **[Timer Architecture](./timer-architecture.md)** - Timer system, scheduling, coordination (3 independent timers)
- **[Caching Strategy](./caching-strategy.md)** - Detailed cache behavior, invalidation, debugging
- **[Setup Guide](./setup.md)** - Development environment setup
- **[Testing Guide](./testing.md)** - How to test changes
- **[Release Management](./release-management.md)** - Release workflow and versioning
- **[AGENTS.md](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md)** - Complete reference for AI development


@@ -1,447 +0,0 @@
---
comments: false
---
# Caching Strategy
This document explains all caching mechanisms in the Tibber Prices integration, their purpose, invalidation logic, and lifetime.
For timer coordination and scheduling details, see [Timer Architecture](./timer-architecture.md).
## Overview
The integration uses **5 distinct caching layers** with different purposes and lifetimes:
1. **Persistent API Data Cache** (HA Storage) - Hours to days
2. **Translation Cache** (Memory) - Forever (until HA restart)
3. **Config Dictionary Cache** (Memory) - Until config changes
4. **Period Calculation Cache** (Memory) - Until price data or config changes
5. **Transformation Cache** (Memory) - Until midnight turnover or config changes
## 1. Persistent API Data Cache
**Location:** `coordinator/cache.py` → HA Storage (`.storage/tibber_prices.<entry_id>`)
**Purpose:** Reduce API calls to Tibber by caching user data and price data between HA restarts.
**What is cached:**
- **Price data** (`price_data`): Day before yesterday/yesterday/today/tomorrow price intervals with enriched fields (384 intervals total)
- **User data** (`user_data`): Homes, subscriptions, features from Tibber GraphQL `viewer` query
- **Timestamps**: Last update times for validation
**Lifetime:**
- **Price data**: Until midnight turnover (cleared daily at 00:00 local time)
- **User data**: 24 hours (refreshed daily)
- **Survives**: HA restarts via persistent Storage
**Invalidation triggers:**
1. **Midnight turnover** (Timer #2 in coordinator):
```python
# coordinator/day_transitions.py
async def _handle_midnight_turnover(self) -> None:
    self._cached_price_data = None  # Force fresh fetch for new day
    self._last_price_update = None
    await self.store_cache()
```
2. **Cache validation on load**:
```python
# coordinator/cache.py
def is_cache_valid(cache_data: CacheData) -> bool:
    # Checks if price data is from a previous day
    if today_date < local_now.date():  # Yesterday's data
        return False
```
3. **Tomorrow data check** (after 13:00):
```python
# coordinator/data_fetching.py
if tomorrow_missing or tomorrow_invalid:
    return "tomorrow_check"  # Update needed
```
**Why this cache matters:** Reduces API load on Tibber (~192 intervals per fetch), speeds up HA restarts, enables offline operation until cache expires.
---
## 2. Translation Cache
**Location:** `const.py` → `_TRANSLATIONS_CACHE` and `_STANDARD_TRANSLATIONS_CACHE` (in-memory dicts)
**Purpose:** Avoid repeated file I/O when accessing entity descriptions, UI strings, etc.
**What is cached:**
- **Standard translations** (`/translations/*.json`): Config flow, selector options, entity names
- **Custom translations** (`/custom_translations/*.json`): Entity descriptions, usage tips, long descriptions
**Lifetime:**
- **Forever** (until HA restart)
- No invalidation during runtime
**When populated:**
- At integration setup: `async_load_translations(hass, "en")` in `__init__.py`
- Lazy loading: If translation missing, attempts file load once
**Access pattern:**
```python
# Non-blocking synchronous access from cached data
description = get_translation("binary_sensor.best_price_period.description", "en")
```
**Why this cache matters:** Entity attributes are accessed on every state update (~15 times per hour per entity). File I/O would block the event loop. Cache enables synchronous, non-blocking attribute generation.
---
## 3. Config Dictionary Cache
**Location:** `coordinator/data_transformation.py` and `coordinator/periods.py` (per-instance fields)
**Purpose:** Avoid ~30-40 `options.get()` calls on every coordinator update (every 15 minutes).
**What is cached:**
### DataTransformer Config Cache
```python
{
    "thresholds": {"low": 15, "high": 35},
    "volatility_thresholds": {"moderate": 15.0, "high": 25.0, "very_high": 40.0},
    # ... 20+ more config fields
}
```
### PeriodCalculator Config Cache
```python
{
    "best": {"flex": 0.15, "min_distance_from_avg": 5.0, "min_period_length": 60},
    "peak": {"flex": 0.15, "min_distance_from_avg": 5.0, "min_period_length": 60}
}
```
**Lifetime:**
- Until `invalidate_config_cache()` is called
- Built once on first use per coordinator update cycle
**Invalidation trigger:**
- **Options change** (user reconfigures integration):
```python
# coordinator/core.py
async def _handle_options_update(...) -> None:
    self._data_transformer.invalidate_config_cache()
    self._period_calculator.invalidate_config_cache()
    await self.async_request_refresh()
```
**Performance impact:**
- **Before:** ~30 dict lookups + type conversions per update = ~50μs
- **After:** 1 cache check = ~1μs
- **Savings:** ~98% (50μs → 1μs per update)
**Why this cache matters:** Config is read multiple times per update (transformation + period calculation + validation). Caching eliminates redundant lookups without changing behavior.
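The pattern can be sketched as a lazily rebuilt dict (the option fields, defaults, and counter are illustrative):

```python
class ConfigCache:
    """Lazily parsed options, rebuilt only after explicit invalidation."""

    def __init__(self, options: dict):
        self._options = options
        self._cache: dict | None = None
        self.parse_count = 0  # instrumentation for this example only

    def get(self) -> dict:
        if self._cache is None:  # cache miss: parse once, reuse afterwards
            self.parse_count += 1
            self._cache = {
                "thresholds": {
                    "low": self._options.get("low_threshold", 15),
                    "high": self._options.get("high_threshold", 35),
                },
            }
        return self._cache

    def invalidate_config_cache(self) -> None:
        self._cache = None
```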
---
## 4. Period Calculation Cache
**Location:** `coordinator/periods.py` → `PeriodCalculator._cached_periods`
**Purpose:** Avoid expensive period calculations (~100-500ms) when price data and config haven't changed.
**What is cached:**
```python
{
    "best_price": {
        "periods": [...],    # Calculated period objects
        "intervals": [...],  # All intervals in periods
        "metadata": {...}    # Config snapshot
    },
    "best_price_relaxation": {"relaxation_active": bool, ...},
    "peak_price": {...},
    "peak_price_relaxation": {...}
}
```
**Cache key:** Hash of relevant inputs
```python
hash_data = (
    today_signature,             # (startsAt, rating_level) for each interval
    tuple(best_config.items()),  # Best price config
    tuple(peak_config.items()),  # Peak price config
    best_level_filter,           # Level filter overrides
    peak_level_filter,
)
```
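A self-contained sketch of such a hash function (field names follow the snippet above; the real implementation may differ):

```python
def compute_periods_hash(intervals, best_config, peak_config) -> int:
    """Hash exactly the inputs that influence period calculation."""
    signature = tuple((iv["startsAt"], iv["rating_level"]) for iv in intervals)
    return hash((signature,
                 tuple(sorted(best_config.items())),
                 tuple(sorted(peak_config.items()))))
```

If the hash matches the stored one, the cached periods are reused; any change to an interval's rating or to the config produces a different hash and forces recalculation.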
**Lifetime:**
- Until price data changes (today's intervals modified)
- Until config changes (flex, thresholds, filters)
- Recalculated at midnight (new today data)
**Invalidation triggers:**
1. **Config change** (explicit):
```python
def invalidate_config_cache(self) -> None:
    self._cached_periods = None
    self._last_periods_hash = None
```
2. **Price data change** (automatic via hash mismatch):
```python
current_hash = self._compute_periods_hash(price_info)
if self._last_periods_hash != current_hash:
    ...  # Cache miss - recalculate
```
**Cache hit rate:**
- **High:** During normal operation (coordinator updates every 15min, price data unchanged)
- **Low:** After midnight (new today data) or when tomorrow data arrives (~13:00-14:00)
**Performance impact:**
- **Period calculation:** ~100-500ms (depends on interval count, relaxation attempts)
- **Cache hit:** <1ms (hash comparison + dict lookup)
- **Savings:** ~70% of calculation time (most updates hit cache)
**Why this cache matters:** Period calculation is CPU-intensive (filtering, gap tolerance, relaxation). Caching avoids recalculating unchanged periods 3-4 times per hour.
---
## 5. Transformation Cache (Price Enrichment Only)
**Location:** `coordinator/data_transformation.py` → `_cached_transformed_data`
**Status:** ✅ **Clean separation** - enrichment only, no redundancy
**What is cached:**
```python
{
    "timestamp": ...,
    "homes": {...},
    "priceInfo": {...},  # Enriched price data (trailing_avg_24h, difference, rating_level)
    # NO periods - periods are exclusively managed by PeriodCalculator
}
```
**Purpose:** Avoid re-enriching price data when config unchanged between midnight checks.
**Current behavior:**
- Caches **only enriched price data** (price + statistics)
- **Does NOT cache periods** (handled by Period Calculation Cache)
- Invalidated when:
- Config changes (thresholds affect enrichment)
- Midnight turnover detected
- New update cycle begins
**Architecture:**
- DataTransformer: Handles price enrichment only
- PeriodCalculator: Handles period calculation only (with hash-based cache)
- Coordinator: Assembles final data on-demand from both caches
**Memory savings:** Eliminating redundant period storage saves ~10KB per coordinator (14% reduction).
---
## Cache Invalidation Flow
### User Changes Options (Config Flow)
```
User saves options
config_entry.add_update_listener() triggers
coordinator._handle_options_update()
├─> DataTransformer.invalidate_config_cache()
│ └─> _config_cache = None
│ _config_cache_valid = False
│ _cached_transformed_data = None
└─> PeriodCalculator.invalidate_config_cache()
└─> _config_cache = None
_config_cache_valid = False
_cached_periods = None
_last_periods_hash = None
coordinator.async_request_refresh()
Fresh data fetch with new config
```
### Midnight Turnover (Day Transition)
```
Timer #2 fires at 00:00
coordinator._handle_midnight_turnover()
├─> Clear persistent cache
│ └─> _cached_price_data = None
│ _last_price_update = None
└─> Clear transformation cache
└─> _cached_transformed_data = None
_last_transformation_config = None
Period cache auto-invalidates (hash mismatch on new "today")
Fresh API fetch for new day
```
### Tomorrow Data Arrives (~13:00)
```
Coordinator update cycle
should_update_price_data() checks tomorrow
Tomorrow data missing/invalid
API fetch with new tomorrow data
Price data hash changes (new intervals)
Period cache auto-invalidates (hash mismatch)
Periods recalculated with tomorrow included
```
---
## Cache Coordination
**All caches work together:**
```
Persistent Storage (HA restart)
API Data Cache (price_data, user_data)
├─> Enrichment (add rating_level, difference, etc.)
│ ↓
│ Transformation Cache (_cached_transformed_data)
└─> Period Calculation
Period Cache (_cached_periods)
Config Cache (avoid re-reading options)
Translation Cache (entity descriptions)
```
**No cache invalidation cascades:**
- Config cache invalidation is **explicit** (on options update)
- Period cache invalidation is **automatic** (via hash mismatch)
- Transformation cache invalidation is **automatic** (on midnight/config change)
- Translation cache is **never invalidated** (read-only after load)
**Thread safety:**
- All caches are accessed from `MainThread` only (Home Assistant event loop)
- No locking needed (single-threaded execution model)
---
## Performance Characteristics
### Typical Operation (No Changes)
```
Coordinator Update (every 15 min)
├─> API fetch: SKIP (cache valid)
├─> Config dict build: ~1μs (cached)
├─> Period calculation: ~1ms (cached, hash match)
├─> Transformation: ~10ms (enrichment only, periods cached)
└─> Entity updates: ~5ms (translation cache hit)
Total: ~16ms (down from ~600ms without caching)
```
### After Midnight Turnover
```
Coordinator Update (00:00)
├─> API fetch: ~500ms (cache cleared, fetch new day)
├─> Config dict build: ~50μs (rebuild, no cache)
├─> Period calculation: ~200ms (cache miss, recalculate)
├─> Transformation: ~50ms (re-enrich, rebuild)
└─> Entity updates: ~5ms (translation cache still valid)
Total: ~755ms (expected once per day)
```
### After Config Change
```
Options Update
├─> Cache invalidation: <1ms
├─> Coordinator refresh: ~600ms
│ ├─> API fetch: SKIP (data unchanged)
│ ├─> Config rebuild: ~50μs
│ ├─> Period recalculation: ~200ms (new thresholds)
│ ├─> Re-enrichment: ~50ms
│ └─> Entity updates: ~5ms
└─> Total: ~600ms (expected on manual reconfiguration)
```
---
## Summary Table
| Cache Type | Lifetime | Size | Invalidation | Purpose |
|------------|----------|------|--------------|---------|
| **API Data** | Hours to 1 day | ~50KB | Midnight, validation | Reduce API calls |
| **Translations** | Forever (until HA restart) | ~5KB | Never | Avoid file I/O |
| **Config Dicts** | Until options change | <1KB | Explicit (options update) | Avoid dict lookups |
| **Period Calculation** | Until data/config change | ~10KB | Auto (hash mismatch) | Avoid CPU-intensive calculation |
| **Transformation** | Until midnight/config change | ~50KB | Auto (midnight/config) | Avoid re-enrichment |
**Total memory overhead:** ~116KB per coordinator instance (main + subentries)
**Benefits:**
- 97% reduction in API calls (from every 15min to once per day)
- 70% reduction in period calculation time (cache hits during normal operation)
- 98% reduction in config access time (30+ lookups → 1 cache check)
- Zero file I/O during runtime (translations cached at startup)
**Trade-offs:**
- Memory usage: ~116KB per home (negligible for modern systems)
- Code complexity: 5 cache invalidation points (well-tested, documented)
- Debugging: Must understand cache lifetime when investigating stale data issues
---
## Debugging Cache Issues
### Symptom: Stale data after config change
**Check:**
1. Is `_handle_options_update()` called? (should see "Options updated" log)
2. Are `invalidate_config_cache()` methods executed?
3. Does `async_request_refresh()` trigger?
**Fix:** Ensure `config_entry.add_update_listener()` is registered in coordinator init.
### Symptom: Period calculation not updating
**Check:**
1. Verify hash changes when data changes: `_compute_periods_hash()`
2. Check `_last_periods_hash` vs `current_hash`
3. Look for "Using cached period calculation" vs "Calculating periods" logs
**Fix:** Hash function may not include all relevant data. Review `_compute_periods_hash()` inputs.
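The fix usually amounts to making the hash cover *every* input the calculation reads. A minimal sketch of the idea (the real `_compute_periods_hash()` may differ; the field names here are assumptions):

```python
import json
from hashlib import sha256


def compute_periods_hash(price_info: list[dict], config: dict) -> str:
    """Any input omitted here will fail to invalidate the cache when it changes."""
    payload = json.dumps(
        {"prices": price_info, "config": config},
        sort_keys=True,  # stable key ordering -> stable hash
        default=str,     # tolerate datetimes and Decimals
    )
    return sha256(payload.encode()).hexdigest()


h1 = compute_periods_hash([{"total": 0.25}], {"flex": 15.0})
h2 = compute_periods_hash([{"total": 0.25}], {"flex": 20.0})
print(h1 != h2)  # config change produces a new hash → True
```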
### Symptom: Yesterday's prices shown as today
**Check:**
1. `is_cache_valid()` logic in `coordinator/cache.py`
2. Midnight turnover execution (Timer #2)
3. Cache clear confirmation in logs
**Fix:** Timer may not be firing. Check `_schedule_midnight_turnover()` registration.
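A calendar-date comparison is the simplest correct validity check; the real `is_cache_valid()` in `coordinator/cache.py` may carry additional conditions, so treat this as a sketch:

```python
from datetime import datetime


def is_cache_valid(fetched_at: datetime, now: datetime) -> bool:
    """Price data is only valid on the calendar day it was fetched."""
    return fetched_at.date() == now.date()


# Data fetched just before midnight is stale five minutes later:
print(is_cache_valid(datetime(2024, 12, 5, 23, 50),
                     datetime(2024, 12, 6, 0, 5)))  # → False
```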
### Symptom: Missing translations
**Check:**
1. `async_load_translations()` called at startup?
2. Translation files exist in `/translations/` and `/custom_translations/`?
3. Cache population: `_TRANSLATIONS_CACHE` keys
**Fix:** Re-install integration or restart HA to reload translation files.
---
## Related Documentation
- **[Timer Architecture](./timer-architecture.md)** - Timer system, scheduling, midnight coordination
- **[Architecture](./architecture.md)** - Overall system design, data flow
- **[AGENTS.md](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md)** - Complete reference for AI development

---
comments: false
---
# Coding Guidelines
> **Note:** For complete coding standards, see [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md).
## Code Style
- **Formatter/Linter**: Ruff (replaces Black, Flake8, isort)
- **Max line length**: 120 characters
- **Max complexity**: 25 (McCabe)
- **Target**: Python 3.13
Run before committing:
```bash
./scripts/lint # Auto-fix issues
./scripts/release/hassfest # Validate integration structure
```
## Naming Conventions
### Class Names
**All public classes MUST use the integration name as prefix.**
This is a Home Assistant standard to avoid naming conflicts between integrations.
```python
# ✅ CORRECT
class TibberPricesApiClient:
class TibberPricesDataUpdateCoordinator:
class TibberPricesSensor:
# ❌ WRONG - Missing prefix
class ApiClient:
class DataFetcher:
class TimeService:
```
**When prefix is required:**
- Public classes used across multiple modules
- All exception classes
- All coordinator and entity classes
- Data classes (dataclasses, NamedTuples) used as public APIs
**When prefix can be omitted:**
- Private helper classes within a single module (prefix with `_` underscore)
- Type aliases and callbacks (e.g., `TimeServiceCallback`)
- Small internal NamedTuples for function returns
**Private Classes:**
If a helper class is ONLY used within a single module file, prefix it with underscore:
```python
# ✅ Private class - used only in this file
class _InternalHelper:
"""Helper used only within this module."""
pass
# ❌ Wrong - no prefix but used across modules
class DataFetcher: # Should be TibberPricesDataFetcher
pass
```
**Note:** Currently (Nov 2025), this project has **NO private classes** - all classes are used across module boundaries.
**Current Technical Debt:**
Many existing classes lack the `TibberPrices` prefix. Before refactoring:
1. Document the plan in `/planning/class-naming-refactoring.md`
2. Use `multi_replace_string_in_file` for bulk renames
3. Test thoroughly after each module
See [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) for complete list of classes needing rename.
## Import Order
1. Python stdlib (specific types only)
2. Third-party (`homeassistant.*`, `aiohttp`)
3. Local (`.api`, `.const`)
## Critical Patterns
### Time Handling
Always use `dt_util` from `homeassistant.util`:
```python
from homeassistant.util import dt as dt_util
price_time = dt_util.parse_datetime(starts_at)
price_time = dt_util.as_local(price_time) # Convert to HA timezone
now = dt_util.now()
```
### Translation Loading
```python
# In __init__.py async_setup_entry:
await async_load_translations(hass, "en")
await async_load_standard_translations(hass, "en")
```
### Price Data Enrichment
Always enrich raw API data:
```python
from .price_utils import enrich_price_info_with_differences
enriched = enrich_price_info_with_differences(
price_info_data,
thresholds,
)
```
See [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) for complete guidelines.

# Contributing Guide
Welcome! This guide helps you contribute to the Tibber Prices integration.
## Getting Started
### Prerequisites
- Git
- VS Code with Remote Containers extension
- Docker Desktop
### Fork and Clone
1. Fork the repository on GitHub
2. Clone your fork:
```bash
git clone https://github.com/YOUR_USERNAME/hass.tibber_prices.git
cd hass.tibber_prices
```
3. Open in VS Code
4. Click "Reopen in Container" when prompted
The DevContainer will set up everything automatically.
## Development Workflow
### 1. Create a Branch
```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/issue-123-description
```
**Branch naming:**
- `feature/` - New features
- `fix/` - Bug fixes
- `docs/` - Documentation only
- `refactor/` - Code restructuring
- `test/` - Test improvements
### 2. Make Changes
Edit code, following [Coding Guidelines](coding-guidelines.md).
**Run checks frequently:**
```bash
./scripts/type-check # Pyright type checking
./scripts/lint # Ruff linting (auto-fix)
./scripts/test # Run tests
```
### 3. Test Locally
```bash
./scripts/develop # Start HA with integration loaded
```
Access at http://localhost:8123
### 4. Write Tests
Add tests in `/tests/` for new features:
```python
@pytest.mark.unit
async def test_your_feature(hass, coordinator):
"""Test your new feature."""
# Arrange
coordinator.data = {...}
# Act
result = your_function(coordinator.data)
# Assert
assert result == expected_value
```
Run your test:
```bash
./scripts/test tests/test_your_feature.py -v
```
### 5. Commit Changes
Follow [Conventional Commits](https://www.conventionalcommits.org/):
```bash
git add .
git commit -m "feat(sensors): add volatility trend sensor
Add new sensor showing 3-hour volatility trend direction.
Includes attributes with historical volatility data.
Impact: Users can predict when prices will stabilize or continue fluctuating."
```
**Commit types:**
- `feat:` - New feature
- `fix:` - Bug fix
- `docs:` - Documentation
- `refactor:` - Code restructuring
- `test:` - Test changes
- `chore:` - Maintenance
**Add scope when relevant:**
- `feat(sensors):` - Sensor platform
- `fix(coordinator):` - Data coordinator
- `docs(user):` - User documentation
### 6. Push and Create PR
```bash
git push origin your-branch-name
```
Then open Pull Request on GitHub.
## Pull Request Guidelines
### PR Template
Title: Short, descriptive (50 chars max)
Description should include:
```markdown
## What
Brief description of changes
## Why
Problem being solved or feature rationale
## How
Implementation approach
## Testing
- [ ] Manual testing in Home Assistant
- [ ] Unit tests added/updated
- [ ] Type checking passes
- [ ] Linting passes
## Breaking Changes
(If any - describe migration path)
## Related Issues
Closes #123
```
### PR Checklist
Before submitting:
- [ ] Code follows [Coding Guidelines](coding-guidelines.md)
- [ ] All tests pass (`./scripts/test`)
- [ ] Type checking passes (`./scripts/type-check`)
- [ ] Linting passes (`./scripts/lint-check`)
- [ ] Documentation updated (if needed)
- [ ] AGENTS.md updated (if patterns changed)
- [ ] Commit messages follow Conventional Commits
### Review Process
1. **Automated checks** run (CI/CD)
2. **Maintainer review** (usually within 3 days)
3. **Address feedback** if requested
4. **Approval** → Maintainer merges
## Code Review Tips
### What Reviewers Look For
✅ **Good:**
- Clear, self-explanatory code
- Appropriate comments for complex logic
- Tests covering edge cases
- Type hints on all functions
- Follows existing patterns
❌ **Avoid:**
- Large PRs (>500 lines) - split into smaller ones
- Mixing unrelated changes
- Missing tests for new features
- Breaking changes without migration path
- Copy-pasted code (refactor into shared functions)
### Responding to Feedback
- Don't take it personally - we're improving code together
- Ask questions if feedback is unclear
- Push additional commits to address comments
- Mark conversations as resolved when fixed
## Finding Issues to Work On
Good first issues are labeled:
- `good first issue` - Beginner-friendly
- `help wanted` - Maintainers welcome contributions
- `documentation` - Docs improvements
Comment on issue before starting work to avoid duplicates.
## Communication
- **GitHub Issues** - Bug reports, feature requests
- **Pull Requests** - Code discussion
- **Discussions** - General questions, ideas
Be respectful, constructive, and patient. We're all volunteers! 🙏
---
💡 **Related:**
- [Setup Guide](setup.md) - DevContainer setup
- [Coding Guidelines](coding-guidelines.md) - Style guide
- [Testing](testing.md) - Writing tests
- [Release Management](release-management.md) - How releases work

---
comments: false
---
# Critical Behavior Patterns - Testing Guide
**Purpose:** This documentation lists essential behavior patterns that must be tested to ensure production-quality code and prevent resource leaks.
**Last Updated:** 2025-11-22
**Test Coverage:** 41 tests implemented (100% of critical patterns)
## 🎯 Why Are These Tests Critical?
Home Assistant integrations run **continuously** in the background. Resource leaks lead to:
- **Memory Leaks**: RAM usage grows over days/weeks until HA becomes unstable
- **Callback Leaks**: Listeners remain registered after entity removal → CPU load increases
- **Timer Leaks**: Timers continue running after unload → unnecessary background tasks
- **File Handle Leaks**: Storage files remain open → system resources exhausted
## ✅ Test Categories
### 1. Resource Cleanup (Memory Leak Prevention)
**File:** `tests/test_resource_cleanup.py`
#### 1.1 Listener Cleanup ✅
**What is tested:**
- Time-sensitive listeners are correctly removed (`async_add_time_sensitive_listener()`)
- Minute-update listeners are correctly removed (`async_add_minute_update_listener()`)
- Lifecycle callbacks are correctly unregistered (`register_lifecycle_callback()`)
- Sensor cleanup removes ALL registered listeners
- Binary sensor cleanup removes ALL registered listeners
**Why critical:**
- Each registered listener holds references to Entity + Coordinator
- Without cleanup: Entities are not freed by GC → Memory Leak
- 80+ sensors × 3 listener types means 240+ callbacks that must be cleanly removed
**Code Locations:**
- `coordinator/listeners.py``async_add_time_sensitive_listener()`, `async_add_minute_update_listener()`
- `coordinator/core.py``register_lifecycle_callback()`
- `sensor/core.py``async_will_remove_from_hass()`
- `binary_sensor/core.py``async_will_remove_from_hass()`
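The underlying pattern is small: store every unsubscribe callback at registration time and run them all on removal. This sketch uses invented names (the real entities do this in `async_will_remove_from_hass()`):

```python
from collections.abc import Callable


class ListenerCleanupSketch:
    """Illustrative pattern: track unsubscribe callbacks, run all on removal."""

    def __init__(self) -> None:
        self._unsubscribers: list[Callable[[], None]] = []

    def track(self, unsubscribe: Callable[[], None]) -> None:
        self._unsubscribers.append(unsubscribe)

    def remove_all(self) -> None:
        # Pop so a repeated call is a no-op and references are released for GC.
        while self._unsubscribers:
            self._unsubscribers.pop()()


calls = []
entity = ListenerCleanupSketch()
entity.track(lambda: calls.append("time_sensitive"))
entity.track(lambda: calls.append("minute_update"))
entity.remove_all()
print(calls)  # → ['minute_update', 'time_sensitive']
```

Because the list is drained, the entity no longer holds references to the coordinator after removal, which is what allows garbage collection to free it.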
#### 1.2 Timer Cleanup ✅
**What is tested:**
- Quarter-hour timer is cancelled and reference cleared
- Minute timer is cancelled and reference cleared
- Both timers are cancelled together
- Cleanup works even when timers are `None`
**Why critical:**
- Uncancelled timers continue running after integration unload
- HA's `async_track_utc_time_change()` creates persistent callbacks
- Without cleanup: Timers keep firing → CPU load + unnecessary coordinator updates
**Code Locations:**
- `coordinator/listeners.py``cancel_timers()`
- `coordinator/core.py``async_shutdown()`
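HA's timer helpers return a cancel callable; the cleanup contract is to call it, clear the stored reference, and tolerate `None`. A simplified sketch of `cancel_timers()` (attribute names are assumptions):

```python
class TimerCleanupSketch:
    """Illustrative: cancel both timers, clear references, stay None-safe."""

    def __init__(self) -> None:
        self._quarter_hour_timer = None
        self._minute_timer = None

    def cancel_timers(self) -> None:
        for name in ("_quarter_hour_timer", "_minute_timer"):
            cancel = getattr(self, name)
            if cancel is not None:
                cancel()              # stop the HA-scheduled callback
                setattr(self, name, None)  # drop the reference


cancelled = []
holder = TimerCleanupSketch()
holder._quarter_hour_timer = lambda: cancelled.append("quarter")
holder.cancel_timers()  # minute timer is None -> safely skipped
holder.cancel_timers()  # idempotent: nothing left to cancel
print(cancelled)  # → ['quarter']
```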
#### 1.3 Config Entry Cleanup ✅
**What is tested:**
- Options update listener is registered via `async_on_unload()`
- Cleanup function is correctly passed to `async_on_unload()`
**Why critical:**
- `entry.add_update_listener()` registers permanent callback
- Without `async_on_unload()`: Listener remains active after reload → duplicate updates
- Pattern: `entry.async_on_unload(entry.add_update_listener(handler))`
**Code Locations:**
- `coordinator/core.py``__init__()` (listener registration)
- `__init__.py``async_unload_entry()`
### 2. Cache Invalidation ✅
**File:** `tests/test_resource_cleanup.py`
#### 2.1 Config Cache Invalidation
**What is tested:**
- DataTransformer config cache is invalidated on options change
- PeriodCalculator config + period cache is invalidated
- Trend calculator cache is cleared on coordinator update
**Why critical:**
- Stale config → Sensors use old user settings
- Stale period cache → Incorrect best/peak price periods
- Stale trend cache → Outdated trend analysis
**Code Locations:**
- `coordinator/data_transformation.py``invalidate_config_cache()`
- `coordinator/periods.py``invalidate_config_cache()`
- `sensor/calculators/trend.py``clear_trend_cache()`
### 3. Storage Cleanup ✅
**File:** `tests/test_resource_cleanup.py` + `tests/test_coordinator_shutdown.py`
#### 3.1 Persistent Storage Removal
**What is tested:**
- Storage file is deleted on config entry removal
- Cache is saved on shutdown (no data loss)
**Why critical:**
- Without storage removal: Old files remain after uninstallation
- Without cache save on shutdown: Data loss on HA restart
- Storage path: `.storage/tibber_prices.{entry_id}`
**Code Locations:**
- `__init__.py``async_remove_entry()`
- `coordinator/core.py``async_shutdown()`
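The real implementation goes through Home Assistant's `Store` helper; stripped of HA specifics, the save-on-shutdown and delete-on-removal pair looks roughly like this:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory


def save_cache(path: Path, data: dict) -> None:
    """async_shutdown(): persist the cache so a restart loses no data."""
    path.write_text(json.dumps(data))


def remove_storage(path: Path) -> None:
    """async_remove_entry(): delete the file so uninstall leaves no residue."""
    path.unlink(missing_ok=True)


with TemporaryDirectory() as tmp:
    storage = Path(tmp) / "tibber_prices.entry_id"
    save_cache(storage, {"priceInfo": []})
    existed = storage.exists()
    remove_storage(storage)
    remove_storage(storage)  # safe to call twice
    print(existed, storage.exists())  # → True False
```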
### 4. Timer Scheduling ✅
**File:** `tests/test_timer_scheduling.py`
**What is tested:**
- Quarter-hour timer is registered with correct parameters
- Minute timer is registered with correct parameters
- Timers can be re-scheduled (override old timer)
- Midnight turnover detection works correctly
**Why critical:**
- Wrong timer parameters → Entities update at wrong times
- Without timer override on re-schedule → Multiple parallel timers → Performance problem
### 5. Sensor-to-Timer Assignment ✅
**File:** `tests/test_sensor_timer_assignment.py`
**What is tested:**
- All `TIME_SENSITIVE_ENTITY_KEYS` are valid entity keys
- All `MINUTE_UPDATE_ENTITY_KEYS` are valid entity keys
- Both lists are disjoint (no overlap)
- Sensor and binary sensor platforms are checked
**Why critical:**
- Wrong timer assignment → Sensors update at wrong times
- Overlap → Duplicate updates → Performance problem
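The disjointness requirement is cheap to assert directly, which is essentially what the 17 assignment tests do. The key names below are placeholders, not the integration's actual lists:

```python
# Placeholder keys -- the real lists live in the sensor/binary_sensor platforms.
TIME_SENSITIVE_ENTITY_KEYS = {"current_interval_price", "price_level"}
MINUTE_UPDATE_ENTITY_KEYS = {"time_until_best_price"}


def check_timer_assignment() -> None:
    """Fail loudly if any entity key is driven by both timers."""
    overlap = TIME_SENSITIVE_ENTITY_KEYS & MINUTE_UPDATE_ENTITY_KEYS
    if overlap:
        raise ValueError(f"Keys assigned to both timers: {sorted(overlap)}")


check_timer_assignment()  # passes: the sets are disjoint
```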
## 🚨 Additional Analysis (Nice-to-Have Patterns)
These patterns were analyzed and classified as **not critical**:
### 6. Async Task Management
**Current Status:** Fire-and-forget pattern for short tasks
- `sensor/core.py` → Chart data refresh (short-lived, max 1-2 seconds)
- `coordinator/core.py` → Cache storage (short-lived, max 100ms)
**Why no tests needed:**
- No long-running tasks (all < 2 seconds)
- HA's event loop handles short tasks automatically
- Task exceptions are already logged
**If needed:** `_chart_refresh_task` tracking + cancel in `async_will_remove_from_hass()`
### 7. API Session Cleanup
**Current Status:** ✅ Correctly implemented
- `async_get_clientsession(hass)` is used (shared session)
- No new sessions are created
- HA manages session lifecycle automatically
**Code:** `api/client.py` + `__init__.py`
### 8. Translation Cache Memory
**Current Status:** ✅ Bounded cache
- Max ~5-10 languages × 5KB = 50KB total
- Module-level cache without re-loading
- Practically no memory issue
**Code:** `const.py``_TRANSLATIONS_CACHE`, `_STANDARD_TRANSLATIONS_CACHE`
### 9. Coordinator Data Structure Integrity
**Current Status:** Manually tested via `./scripts/develop`
- Midnight turnover works correctly (observed over several days)
- Missing keys are handled via `.get()` with defaults
- 80+ sensors access `coordinator.data` without errors
**Structure:**
```python
coordinator.data = {
"user_data": {...},
"priceInfo": [...], # Flat list of all enriched intervals
"currency": "EUR" # Top-level for easy access
}
```
### 10. Service Response Memory
**Current Status:** HA's response lifecycle
- HA automatically frees service responses after return
- ApexCharts ~20KB response is one-time per call
- No response accumulation in integration code
**Code:** `services/apexcharts.py`
## 📊 Test Coverage Status
### ✅ Implemented Tests (41 total)
| Category | Status | Tests | File | Coverage |
|----------|--------|-------|------|----------|
| Listener Cleanup | ✅ | 5 | `test_resource_cleanup.py` | 100% |
| Timer Cleanup | ✅ | 4 | `test_resource_cleanup.py` | 100% |
| Config Entry Cleanup | ✅ | 1 | `test_resource_cleanup.py` | 100% |
| Cache Invalidation | ✅ | 3 | `test_resource_cleanup.py` | 100% |
| Storage Cleanup | ✅ | 1 | `test_resource_cleanup.py` | 100% |
| Storage Persistence | ✅ | 2 | `test_coordinator_shutdown.py` | 100% |
| Timer Scheduling | ✅ | 8 | `test_timer_scheduling.py` | 100% |
| Sensor-Timer Assignment | ✅ | 17 | `test_sensor_timer_assignment.py` | 100% |
| **TOTAL** | **✅** | **41** | | **100% (critical)** |
### 📋 Analyzed but Not Implemented (Nice-to-Have)
| Category | Status | Rationale |
|----------|--------|-----------|
| Async Task Management | 📋 | Fire-and-forget pattern used (no long-running tasks) |
| API Session Cleanup | ✅ | Pattern correct (`async_get_clientsession` used) |
| Translation Cache | ✅ | Cache size bounded (~50KB max for 10 languages) |
| Data Structure Integrity | 📋 | Would add test time without finding real issues |
| Service Response Memory | 📋 | HA automatically frees service responses |
**Legend:**
- ✅ = Fully tested or pattern verified correct
- 📋 = Analyzed, low priority for testing (no known issues)
## 🎯 Development Status
### ✅ All Critical Patterns Tested
All essential memory leak prevention patterns are covered by 41 tests:
- ✅ Listeners are correctly removed (no callback leaks)
- ✅ Timers are cancelled (no background task leaks)
- ✅ Config entry cleanup works (no dangling listeners)
- ✅ Caches are invalidated (no stale data issues)
- ✅ Storage is saved and cleaned up (no data loss)
- ✅ Timer scheduling works correctly (no update issues)
- ✅ Sensor-timer assignment is correct (no wrong updates)
### 📋 Nice-to-Have Tests (Optional)
If problems arise in the future, these tests can be added:
1. **Async Task Management** - Pattern analyzed (fire-and-forget for short tasks)
2. **Data Structure Integrity** - Midnight rotation manually tested
3. **Service Response Memory** - HA's response lifecycle automatic
**Conclusion:** The integration has production-quality test coverage for all critical resource leak patterns.
## 🔍 How to Run Tests
```bash
# Run all resource cleanup tests (14 tests)
./scripts/test tests/test_resource_cleanup.py -v
# Run all critical pattern tests (41 tests)
./scripts/test tests/test_resource_cleanup.py tests/test_coordinator_shutdown.py \
tests/test_timer_scheduling.py tests/test_sensor_timer_assignment.py -v
# Run all tests with coverage
./scripts/test --cov=custom_components.tibber_prices --cov-report=html
# Type checking and linting
./scripts/check
# Manual memory leak test
# 1. Start HA: ./scripts/develop
# 2. Monitor RAM: watch -n 1 'ps aux | grep home-assistant'
# 3. Reload integration multiple times (HA UI: Settings → Devices → Tibber Prices → Reload)
# 4. RAM should stabilize (not grow continuously)
```
## 📚 References
- **Home Assistant Cleanup Patterns**: https://developers.home-assistant.io/docs/integration_setup_failures/#cleanup
- **Async Best Practices**: https://developers.home-assistant.io/docs/asyncio_101/
- **Memory Profiling**: https://docs.python.org/3/library/tracemalloc.html

# Debugging Guide
Tips and techniques for debugging the Tibber Prices integration during development.
## Logging
### Enable Debug Logging
Add to `configuration.yaml`:
```yaml
logger:
default: info
logs:
custom_components.tibber_prices: debug
```
Restart Home Assistant to apply.
### Key Log Messages
**Coordinator Updates:**
```
[custom_components.tibber_prices.coordinator] Successfully fetched price data
[custom_components.tibber_prices.coordinator] Cache valid, using cached data
[custom_components.tibber_prices.coordinator] Midnight turnover detected, clearing cache
```
**Period Calculation:**
```
[custom_components.tibber_prices.coordinator.periods] Calculating BEST PRICE periods: flex=15.0%
[custom_components.tibber_prices.coordinator.periods] Day 2024-12-06: Found 2 periods
[custom_components.tibber_prices.coordinator.periods] Period 1: 02:00-05:00 (12 intervals)
```
**API Errors:**
```
[custom_components.tibber_prices.api] API request failed: Unauthorized
[custom_components.tibber_prices.api] Retrying (attempt 2/3) after 2.0s
```
## VS Code Debugging
### Launch Configuration
`.vscode/launch.json`:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Home Assistant",
"type": "debugpy",
"request": "launch",
"module": "homeassistant",
"args": ["-c", "config", "--debug"],
"justMyCode": false,
"env": {
"PYTHONPATH": "${workspaceFolder}/.venv/lib/python3.13/site-packages"
}
}
]
}
```
### Set Breakpoints
**Coordinator update:**
```python
# coordinator/core.py
async def _async_update_data(self) -> dict:
"""Fetch data from API."""
breakpoint() # Or set VS Code breakpoint
```
**Period calculation:**
```python
# coordinator/period_handlers/core.py
def calculate_periods(...) -> list[dict]:
"""Calculate best/peak price periods."""
breakpoint()
```
## pytest Debugging
### Run Single Test with Output
```bash
.venv/bin/python -m pytest tests/test_period_calculation.py::test_midnight_crossing -v -s
```
**Flags:**
- `-v` - Verbose output
- `-s` - Show print statements
- `-k pattern` - Run tests matching pattern
### Debug Test in VS Code
Set breakpoint in test file, use "Debug Test" CodeLens.
### Useful Test Patterns
**Print coordinator data:**
```python
def test_something(coordinator):
print(f"Coordinator data: {coordinator.data}")
print(f"Price info count: {len(coordinator.data['priceInfo'])}")
```
**Inspect period attributes:**
```python
def test_periods(hass, coordinator):
periods = coordinator.data.get('best_price_periods', [])
for period in periods:
print(f"Period: {period['start']} to {period['end']}")
print(f" Intervals: {len(period['intervals'])}")
```
## Common Issues
### Integration Not Loading
**Check:**
```bash
grep "tibber_prices" config/home-assistant.log
```
**Common causes:**
- Syntax error in Python code → Check logs for traceback
- Missing dependency → Run `uv sync`
- Wrong file permissions → `chmod +x scripts/*`
### Sensors Not Updating
**Check coordinator state:**
```python
# In Developer Tools > Template
{{ states.sensor.tibber_home_current_interval_price.last_updated }}
```
**Debug in code:**
```python
# Add logging in sensor/core.py
_LOGGER.debug("Updating sensor %s: old=%s new=%s",
self.entity_id, self._attr_native_value, new_value)
```
### Period Calculation Wrong
**Enable detailed period logs:**
```python
# coordinator/period_handlers/period_building.py
_LOGGER.debug("Candidate intervals: %s",
[(i['startsAt'], i['total']) for i in candidates])
```
**Check filter statistics:**
```
[period_building] Flex filter blocked: 45 intervals
[period_building] Min distance blocked: 12 intervals
[period_building] Level filter blocked: 8 intervals
```
## Performance Profiling
### Time Execution
```python
import time
start = time.perf_counter()
result = expensive_function()
duration = time.perf_counter() - start
_LOGGER.debug("Function took %.3fs", duration)
```
### Memory Usage
```python
import tracemalloc
tracemalloc.start()
# ... your code ...
current, peak = tracemalloc.get_traced_memory()
_LOGGER.debug("Memory: current=%d peak=%d", current, peak)
tracemalloc.stop()
```
### Profile with cProfile
```bash
python -m cProfile -o profile.stats -m homeassistant -c config
python -m pstats profile.stats
# Then: sort cumtime, stats 20
```
## Live Debugging in Running HA
### Remote Debugging with debugpy
Add to coordinator code:
```python
import debugpy
debugpy.listen(5678)
_LOGGER.info("Waiting for debugger attach on port 5678")
debugpy.wait_for_client()
```
Connect from VS Code with remote attach configuration.
### IPython REPL
Install in container:
```bash
uv pip install ipython
```
Add breakpoint:
```python
from IPython import embed
embed() # Drops into interactive shell
```
---
💡 **Related:**
- [Testing Guide](testing.md) - Writing and running tests
- [Setup Guide](setup.md) - Development environment
- [Architecture](architecture.md) - Code structure

# Developer Documentation
This section contains documentation for contributors and maintainers of the **Tibber Prices custom integration**.
:::info Community Project
This is an independent, community-maintained custom integration for Home Assistant. It is **not** an official Tibber product and is **not** affiliated with Tibber AS.
:::
## 📚 Developer Guides
- **[Setup](setup.md)** - DevContainer, environment setup, and dependencies
- **[Architecture](architecture.md)** - Code structure, patterns, and conventions
- **[Period Calculation Theory](period-calculation-theory.md)** - Mathematical foundations, Flex/Distance interaction, Relaxation strategy
- **[Timer Architecture](timer-architecture.md)** - Timer system, scheduling, coordination (3 independent timers)
- **[Caching Strategy](caching-strategy.md)** - Cache layers, invalidation, debugging
- **[Testing](testing.md)** - How to run tests and write new test cases
- **[Release Management](release-management.md)** - Release workflow and versioning process
- **[Coding Guidelines](coding-guidelines.md)** - Style guide, linting, and best practices
- **[Refactoring Guide](refactoring-guide.md)** - How to plan and execute major refactorings
## 🤖 AI Documentation
The main AI/Copilot documentation is in [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md). This file serves as long-term memory for AI assistants and contains:
- Detailed architectural patterns
- Code quality rules and conventions
- Development workflow guidance
- Common pitfalls and anti-patterns
- Project-specific patterns and utilities
**Important:** When proposing changes to patterns or conventions, always update [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) to keep AI guidance consistent.
### AI-Assisted Development
This integration is developed with extensive AI assistance (GitHub Copilot, Claude, and other AI tools). The AI handles:
- **Pattern Recognition**: Understanding and applying Home Assistant best practices
- **Code Generation**: Implementing features with proper type hints, error handling, and documentation
- **Refactoring**: Maintaining consistency across the codebase during structural changes
- **Translation Management**: Keeping 5 language files synchronized
- **Documentation**: Generating and maintaining comprehensive documentation
**Quality Assurance:**
- Automated linting with Ruff (120-char line length, max complexity 25)
- Home Assistant's type checking and validation
- Real-world testing in development environment
- Code review by maintainer before merging
**Benefits:**
- Rapid feature development while maintaining quality
- Consistent code patterns across all modules
- Comprehensive documentation maintained alongside code
- Quick bug fixes with proper understanding of context
**Limitations:**
- AI may occasionally miss edge cases or subtle bugs
- Some complex Home Assistant patterns may need human review
- Translation quality depends on AI's understanding of target language
- User feedback is crucial for discovering real-world issues
If you're working with AI tools on this project, the [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) file provides the context and patterns that ensure consistency.
## 🚀 Quick Start for Contributors
1. **Fork and clone** the repository
2. **Open in DevContainer** (VS Code: "Reopen in Container")
3. **Run setup**: `./scripts/setup/setup` (happens automatically via `postCreateCommand`)
4. **Start development environment**: `./scripts/develop`
5. **Make your changes** following the [Coding Guidelines](coding-guidelines.md)
6. **Run linting**: `./scripts/lint`
7. **Validate integration**: `./scripts/release/hassfest`
8. **Test your changes** in the running Home Assistant instance
9. **Commit using Conventional Commits** format
10. **Open a Pull Request** with clear description
## 🛠️ Development Tools
The project includes several helper scripts in `./scripts/`:
- `bootstrap` - Initial setup of dependencies
- `develop` - Start Home Assistant in debug mode (auto-cleans .egg-info)
- `clean` - Remove build artifacts and caches
- `lint` - Auto-fix code issues with ruff
- `lint-check` - Check code without modifications (CI mode)
- `hassfest` - Validate integration structure (JSON, Python syntax, required files)
- `setup` - Install development tools (git-cliff, @github/copilot)
- `prepare-release` - Prepare a new release (bump version, create tag)
- `generate-release-notes` - Generate release notes from commits
## 📦 Project Structure
```
custom_components/tibber_prices/
├── __init__.py # Integration setup
├── coordinator.py # Data update coordinator with caching
├── api.py # Tibber GraphQL API client
├── price_utils.py # Price enrichment functions
├── average_utils.py # Average calculation utilities
├── sensor/ # Sensor platform (package)
│ ├── __init__.py # Platform setup
│ ├── core.py # TibberPricesSensor class
│ ├── definitions.py # Entity descriptions
│ ├── helpers.py # Pure helper functions
│ └── attributes.py # Attribute builders
├── binary_sensor.py # Binary sensor platform
├── entity_utils/ # Shared entity helpers
│ ├── icons.py # Icon mapping logic
│ ├── colors.py # Color mapping logic
│ └── attributes.py # Common attribute builders
├── services.py # Custom services
├── config_flow.py # UI configuration flow
├── const.py # Constants and helpers
├── translations/ # Standard HA translations
└── custom_translations/ # Extended translations (descriptions)
```
## 🔍 Key Concepts
**DataUpdateCoordinator Pattern:**
- Centralized data fetching and caching
- Automatic entity updates on data changes
- Persistent storage via `Store`
- Quarter-hour boundary refresh scheduling
**Price Data Enrichment:**
- Raw API data is enriched with statistical analysis
- Trailing/leading 24h averages calculated per interval
- Price differences and ratings added
- All via pure functions in `price_utils.py`
**Translation System:**
- Dual system: `/translations/` (HA schema) + `/custom_translations/` (extended)
- Both must stay in sync across all languages (de, en, nb, nl, sv)
- Async loading at integration setup
## 🧪 Testing
```bash
# Validate integration structure
./scripts/release/hassfest
# Run all tests
pytest tests/
# Run specific test file
pytest tests/test_coordinator.py
# Run with coverage
pytest --cov=custom_components.tibber_prices tests/
```
## 📝 Documentation Standards
Documentation is organized in two Docusaurus sites:
- **User docs** (`docs/user/`): Installation, configuration, usage guides
- Markdown files in `docs/user/docs/*.md`
- Navigation managed via `docs/user/sidebars.ts`
- **Developer docs** (`docs/developer/`): Architecture, patterns, contribution guides
- Markdown files in `docs/developer/docs/*.md`
- Navigation managed via `docs/developer/sidebars.ts`
- **AI guidance**: `AGENTS.md` (patterns, conventions, long-term memory)
**Best practices:**
- Use clear examples and code snippets
- Keep docs up-to-date with code changes
- Add new pages to appropriate `sidebars.ts` for navigation
## 🤝 Contributing
See [CONTRIBUTING.md](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/CONTRIBUTING.md) for detailed contribution guidelines, code of conduct, and pull request process.
## 📄 License
This project is licensed under the [MIT License](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/LICENSE).
---
**Note:** This documentation is for developers. End users should refer to the [User Documentation](https://jpawlowski.github.io/hass.tibber_prices/user/).

# Performance Optimization
Guidelines for maintaining and improving integration performance.
## Performance Goals
Target metrics:
- **Coordinator update**: <500ms (typical: 200-300ms)
- **Sensor update**: <10ms per sensor
- **Period calculation**: <100ms (typical: 20-50ms)
- **Memory footprint**: <10MB per home
- **API calls**: <100 per day per home
## Profiling
### Timing Decorator
Use for performance-critical functions:
```python
import functools
import logging
import time

_LOGGER = logging.getLogger(__name__)

def timing(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        duration = time.perf_counter() - start
        _LOGGER.debug("%s took %.3fms", func.__name__, duration * 1000)
        return result
    return wrapper

@timing
def expensive_calculation():
    ...  # Your code here
```
### Memory Profiling
```python
import tracemalloc
tracemalloc.start()
# Run your code
current, peak = tracemalloc.get_traced_memory()
_LOGGER.info("Memory: current=%.2fMB peak=%.2fMB",
current / 1024**2, peak / 1024**2)
tracemalloc.stop()
```
### Async Profiling
```bash
# Install aioprof
uv pip install aioprof
# Run with profiling
python -m aioprof homeassistant -c config
```
## Optimization Patterns
### Caching
**1. Persistent Cache** (API data):
```python
# Already implemented in coordinator/cache.py
store = Store(hass, STORAGE_VERSION, STORAGE_KEY)
data = await store.async_load()
```
**2. Translation Cache** (in-memory):
```python
# Already implemented in const.py
_TRANSLATION_CACHE: dict[str, dict] = {}
def get_translation(path: str, language: str) -> dict:
cache_key = f"{path}_{language}"
if cache_key not in _TRANSLATION_CACHE:
_TRANSLATION_CACHE[cache_key] = load_translation(path, language)
return _TRANSLATION_CACHE[cache_key]
```
**3. Config Cache** (invalidated on options change):
```python
class DataTransformer:
def __init__(self):
self._config_cache: dict | None = None
def get_config(self) -> dict:
if self._config_cache is None:
self._config_cache = self._build_config()
return self._config_cache
def invalidate_config_cache(self):
self._config_cache = None
```
### Lazy Loading
**Load data only when needed:**
```python
@property
def extra_state_attributes(self) -> dict | None:
"""Return attributes."""
# Calculate only when accessed
if self.entity_description.key == "complex_sensor":
return self._calculate_complex_attributes()
return None
```
### Bulk Operations
**Process multiple items at once:**
```python
# ❌ Slow - loop with individual operations
for interval in intervals:
enriched = enrich_single_interval(interval)
results.append(enriched)
# ✅ Fast - bulk processing
results = enrich_intervals_bulk(intervals)
```
### Async Best Practices
**1. Concurrent API calls:**
```python
# ❌ Sequential (slow)
user_data = await fetch_user_data()
price_data = await fetch_price_data()
# ✅ Concurrent (fast)
user_data, price_data = await asyncio.gather(
fetch_user_data(),
fetch_price_data()
)
```
**2. Don't block event loop:**
```python
# ❌ Blocking
result = heavy_computation() # Blocks for seconds
# ✅ Non-blocking
result = await hass.async_add_executor_job(heavy_computation)
```
## Memory Management
### Avoid Memory Leaks
**1. Clear references:**
```python
class Coordinator:
async def async_shutdown(self):
"""Clean up resources."""
self._listeners.clear()
self._data = None
self._cache = None
```
**2. Use weak references for callbacks:**
```python
import weakref

class Manager:
    def __init__(self):
        self._callbacks: list[weakref.ref] = []

    def register(self, callback):
        # Bound methods need WeakMethod; a plain ref() to a bound
        # method would be garbage-collected immediately
        self._callbacks.append(weakref.WeakMethod(callback))
```
### Efficient Data Structures
**Use appropriate types:**
```python
# ❌ List for lookups (O(n))
if timestamp in timestamp_list:
...
# ✅ Set for lookups (O(1))
if timestamp in timestamp_set:
...
# ❌ List comprehension with filter
results = [x for x in items if condition(x)]
# ✅ Generator for large datasets
results = (x for x in items if condition(x))
```
## Coordinator Optimization
### Minimize API Calls
**Already implemented:**
- Cache valid until midnight
- User data cached for 24h
- Only poll when tomorrow data expected
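A minimal sketch of such a validity check (the helper name and rule here are illustrative; the real logic lives in `coordinator/cache.py`):

```python
from datetime import datetime, time, timedelta


def is_cache_valid(fetched_at: datetime, now: datetime) -> bool:
    """Illustrative rule: cached prices stay valid until the midnight after the fetch."""
    midnight_after_fetch = datetime.combine(
        fetched_at.date() + timedelta(days=1),
        time.min,
        tzinfo=fetched_at.tzinfo,
    )
    return now < midnight_after_fetch
```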
**Monitor API usage:**
```python
_LOGGER.debug("API call: %s (cache_age=%s)",
endpoint, cache_age)
```
### Smart Updates
**Only update when needed:**
```python
async def _async_update_data(self) -> dict:
"""Fetch data from API."""
if self._is_cache_valid():
_LOGGER.debug("Using cached data")
return self.data
# Fetch new data
return await self._fetch_data()
```
## Database Impact
### State Class Selection
**Affects long-term statistics storage:**
```python
# ❌ MEASUREMENT for prices (stores every change)
state_class=SensorStateClass.MEASUREMENT # ~35K records/year
# ✅ None for prices (no long-term stats)
state_class=None # Only current state
# ✅ TOTAL for counters only
state_class=SensorStateClass.TOTAL # For cumulative values
```
### Attribute Size
**Keep attributes minimal:**
```python
# ❌ Large nested structures (KB per update)
attributes = {
"all_intervals": [...], # 384 intervals
"full_history": [...], # Days of data
}
# ✅ Essential data only (bytes per update)
attributes = {
"timestamp": "...",
"rating_level": "...",
"next_interval": "...",
}
```
## Testing Performance
### Benchmark Tests
```python
import pytest
import time
@pytest.mark.benchmark
def test_period_calculation_performance(coordinator):
"""Period calculation should complete in &lt;100ms."""
start = time.perf_counter()
periods = calculate_periods(coordinator.data)
duration = time.perf_counter() - start
assert duration < 0.1, f"Too slow: {duration:.3f}s"
```
### Load Testing
```python
@pytest.mark.integration
async def test_multiple_homes_performance(hass):
    """Test with 10 homes."""
    coordinators = []
    start = time.perf_counter()
    for i in range(10):
        coordinator = create_coordinator(hass, home_id=f"home_{i}")
        await coordinator.async_refresh()
        coordinators.append(coordinator)
    # Verify memory usage and total update time stay within budget
    assert time.perf_counter() - start < 5.0
```
## Monitoring in Production
### Log Performance Metrics
```python
@timing
async def _async_update_data(self) -> dict:
    """Fetch data; duration is logged by the @timing decorator."""
    return await self._fetch_data()
```
### Memory Tracking
```python
import psutil
import os
process = psutil.Process(os.getpid())
memory_mb = process.memory_info().rss / 1024**2
_LOGGER.debug("Current memory usage: %.2f MB", memory_mb)
```
---
💡 **Related:**
- [Caching Strategy](caching-strategy.md) - Cache layers
- [Architecture](architecture.md) - System design
- [Debugging](debugging.md) - Profiling tools

# Recorder History Optimization
**Status**: ✅ IMPLEMENTED
**Last Updated**: 2025-12-07
## Overview
This document describes the implementation of `_unrecorded_attributes` for Tibber Prices entities to prevent Home Assistant Recorder database bloat by excluding non-essential attributes from historical data storage.
**Reference**: [HA Developer Docs - Excluding State Attributes](https://developers.home-assistant.io/docs/core/entity/#excluding-state-attributes-from-recorder-history)
## Implementation
Both `TibberPricesSensor` and `TibberPricesBinarySensor` implement `_unrecorded_attributes` as a class-level `frozenset` to exclude attributes that don't provide value in historical data analysis.
### Pattern
```python
class TibberPricesSensor(TibberPricesEntity, SensorEntity):
"""tibber_prices Sensor class."""
_unrecorded_attributes = frozenset(
{
"description",
"usage_tips",
# ... more attributes
}
)
```
**Key Points:**
- Must be a **class attribute** (not instance attribute)
- Use `frozenset` for immutability and performance
- Applied automatically by Home Assistant's Recorder component
## Categories of Excluded Attributes
### 1. Descriptions/Help Text
**Attributes:** `description`, `usage_tips`
**Reason:** Static, large text strings (100-500 chars each) that:
- Never change or change very rarely
- Don't provide analytical value in history
- Consume significant database space when recorded every state change
- Can be retrieved from translation files when needed
**Impact:** ~500-1000 bytes saved per state change
### 2. Large Nested Structures
**Attributes:**
- `periods` (binary_sensor) - Array of all period summaries
- `data` (chart_data_export) - Complete price data arrays
- `trend_attributes` - Detailed trend analysis
- `current_trend_attributes` - Current trend details
- `trend_change_attributes` - Trend change analysis
- `volatility_attributes` - Detailed volatility breakdown
**Reason:** Complex nested data structures that are:
- Serialized to JSON for storage (expensive)
- Create large database rows (2-20 KB each)
- Slow down history queries
- Provide limited value in historical analysis (current state usually sufficient)
**Impact:** ~10-30 KB saved per state change for affected sensors
**Example - periods array:**
```json
{
"periods": [
{
"start": "2025-12-07T06:00:00+01:00",
"end": "2025-12-07T08:00:00+01:00",
"duration_minutes": 120,
"price_mean": 18.5,
"price_median": 18.3,
"price_min": 17.2,
"price_max": 19.8,
// ... 10+ more attributes × 10-20 periods
}
]
}
```
### 3. Frequently Changing Diagnostics
**Attributes:** `icon_color`, `cache_age`, `cache_validity`, `data_completeness`, `data_status`
**Reason:**
- Change every update cycle (every 15 minutes or more frequently)
- Don't provide long-term analytical value
- Create state changes even when core values haven't changed
- Clutter history with cosmetic changes
- Can be reconstructed from other attributes if needed
**Impact:** Prevents unnecessary state writes when only cosmetic attributes change
**Example:** `icon_color` changes from `#00ff00` to `#ffff00` but price hasn't changed → No state write needed
### 4. Static/Rarely Changing Configuration
**Attributes:** `tomorrow_expected_after`, `level_value`, `rating_value`, `level_id`, `rating_id`, `currency`, `resolution`, `yaxis_min`, `yaxis_max`
**Reason:**
- Configuration values that rarely change
- Wastes space when recorded repeatedly
- Can be derived from other attributes or from entity state
**Impact:** ~100-200 bytes saved per state change
### 5. Temporary/Time-Bound Data
**Attributes:** `next_api_poll`, `next_midnight_turnover`, `last_api_fetch`, `last_cache_update`, `last_turnover`, `last_error`, `error`
**Reason:**
- Only relevant at moment of reading
- Won't be valid after some time
- Similar to `entity_picture` in HA core image entities
- Superseded by next update
**Impact:** ~200-400 bytes saved per state change
**Example:** `next_api_poll: "2025-12-07T14:30:00"` stored at 14:15 is useless when viewing history at 15:00
### 6. Relaxation Details
**Attributes:** `relaxation_level`, `relaxation_threshold_original_%`, `relaxation_threshold_applied_%`
**Reason:**
- Detailed technical information not needed for historical analysis
- Only useful for debugging during active development
- Boolean `relaxation_active` is kept for high-level analysis
**Impact:** ~50-100 bytes saved per state change
### 7. Redundant/Derived Data
**Attributes:** `price_spread`, `volatility`, `diff_%`, `rating_difference_%`, `period_price_diff_from_daily_min`, `period_price_diff_from_daily_min_%`, `periods_total`, `periods_remaining`
**Reason:**
- Can be calculated from other attributes
- Redundant information
- Doesn't add analytical value to history
**Impact:** ~100-200 bytes saved per state change
**Example:** `price_spread = price_max - price_min` (both are recorded, so spread can be calculated)
## Attributes That ARE Recorded
These attributes **remain in history** because they provide essential analytical value:
### Time-Series Core
- `timestamp` - Critical for time-series analysis (ALWAYS FIRST)
- All price values - Core sensor states
### Diagnostics & Tracking
- `cache_age_minutes` - Numeric value for diagnostics tracking over time
- `updates_today` - Tracking API usage patterns
### Data Completeness
- `interval_count`, `intervals_available` - Data completeness metrics
- `yesterday_available`, `today_available`, `tomorrow_available` - Boolean status
### Period Data
- `start`, `end`, `duration_minutes` - Core period timing
- `price_mean`, `price_median`, `price_min`, `price_max` - Core price statistics
### High-Level Status
- `relaxation_active` - Whether relaxation was used (boolean, useful for analyzing when periods needed relaxation)
## Expected Database Impact
### Space Savings
**Per state change:**
- Before: ~3-8 KB average
- After: ~0.5-1.5 KB average
- **Reduction: 60-85%**
**Daily per sensor:**
| Sensor Type | Updates/Day | Before | After | Savings |
|------------|-------------|--------|-------|---------|
| High-frequency (15min) | 96 | ~290 KB | ~140 KB | 50% |
| Low-frequency (6h) | 4 | ~32 KB | ~6 KB | 80% |
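The per-day figures above follow directly from updates/day × size per state write; a quick sanity check:

```python
def daily_kb(updates_per_day: int, kb_per_state: float) -> float:
    """Approximate Recorder growth per sensor per day."""
    return updates_per_day * kb_per_state

# 96 updates/day at ~3 KB each -> 288 KB/day (the "~290 KB" row)
# 4 updates/day at ~8 KB each -> 32 KB/day (the "~32 KB" row)
```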
### Most Impactful Exclusions
1. **`periods` array** (binary_sensor) - Saves 2-5 KB per state
2. **`data`** (chart_data_export) - Saves 5-20 KB per state
3. **`trend_attributes`** - Saves 1-2 KB per state
4. **`description`/`usage_tips`** - Saves 500-1000 bytes per state
5. **`icon_color`** - Prevents unnecessary state changes
### Real-World Impact
For a typical installation with:
- 80+ sensors
- Updates every 15 minutes
- ~10 sensors updating every minute
**Before:** ~1.5 GB per month
**After:** ~400-500 MB per month
**Savings:** ~1 GB per month (~66% reduction)
## Implementation Files
- **Sensor Platform**: `custom_components/tibber_prices/sensor/core.py`
- Class: `TibberPricesSensor`
- 47 attributes excluded
- **Binary Sensor Platform**: `custom_components/tibber_prices/binary_sensor/core.py`
- Class: `TibberPricesBinarySensor`
- 30 attributes excluded
## When to Update _unrecorded_attributes
### Add to Exclusion List When:
✅ Adding new **description/help text** attributes
✅ Adding **large nested structures** (arrays, complex objects)
✅ Adding **frequently changing diagnostic info** (colors, formatted strings)
✅ Adding **temporary/time-bound data** (timestamps that become stale)
✅ Adding **redundant/derived calculations**
### Keep in History When:
✅ **Core price/timing data** needed for analysis
✅ **Boolean status flags** that show state transitions
✅ **Numeric counters** useful for tracking patterns
✅ **Data that helps understand system behavior** over time
## Decision Framework
When adding a new attribute, ask:
1. **Will this be useful in history queries 1 week from now?**
- No → Exclude
- Yes → Keep
2. **Can this be calculated from other recorded attributes?**
- Yes → Exclude
- No → Keep
3. **Is this primarily for current UI display?**
- Yes → Exclude
- No → Keep
4. **Does this change frequently without indicating state change?**
- Yes → Exclude
- No → Keep
5. **Is this larger than 100 bytes and not essential for analysis?**
- Yes → Exclude
- No → Keep
## Testing
After modifying `_unrecorded_attributes`:
1. **Restart Home Assistant** to apply changes
2. **Check Recorder database size** before/after
3. **Verify essential attributes** still appear in history
4. **Confirm excluded attributes** don't appear in new state writes
**SQL Query to check attribute presence:**
```sql
SELECT
state_id,
attributes
FROM states
WHERE entity_id = 'sensor.tibber_home_current_interval_price'
ORDER BY last_updated DESC
LIMIT 5;
```
## Maintenance Notes
- ✅ Must be a **class attribute** (instance attributes are ignored)
- ✅ Use `frozenset` for immutability
- ✅ Only affects **new** state writes (doesn't purge existing history)
- ✅ Attributes still available via `entity.attributes` in templates/automations
- ✅ Only prevents **storage** in Recorder, not runtime availability
## References
- [HA Developer Docs - Excluding State Attributes](https://developers.home-assistant.io/docs/core/entity/#excluding-state-attributes-from-recorder-history)
- Implementation PR: [Link when merged]
- Related Issue: [Link if applicable]

# Refactoring Guide
This guide explains how to plan and execute major refactorings in this project.
## When to Plan a Refactoring
Not every code change needs a detailed plan. Create a refactoring plan when:
🔴 **Major changes requiring planning:**
- Splitting modules into packages (>5 files affected, >500 lines moved)
- Architectural changes (new packages, module restructuring)
- Breaking changes (API changes, config format migrations)
🟡 **Medium changes that might benefit from planning:**
- Complex features with multiple moving parts
- Changes affecting many files (>3 files, unclear best approach)
- Refactorings with unclear scope
🟢 **Small changes - no planning needed:**
- Bug fixes (straightforward, <100 lines)
- Small features (<3 files, clear approach)
- Documentation updates
- Cosmetic changes (formatting, renaming)
## The Planning Process
### 1. Create a Planning Document
Create a file in the `planning/` directory (git-ignored for free iteration):
```bash
# Example:
touch planning/my-feature-refactoring-plan.md
```
**Note:** The `planning/` directory is git-ignored, so you can iterate freely without polluting git history.
### 2. Use the Planning Template
Every planning document should include:
```markdown
# <Feature> Refactoring Plan
**Status**: 🔄 PLANNING | 🚧 IN PROGRESS | ✅ COMPLETED | ❌ CANCELLED
**Created**: YYYY-MM-DD
**Last Updated**: YYYY-MM-DD
## Problem Statement
- What's the issue?
- Why does it need fixing?
- Current pain points
## Proposed Solution
- High-level approach
- File structure (before/after)
- Module responsibilities
## Migration Strategy
- Phase-by-phase breakdown
- File lifecycle (CREATE/MODIFY/DELETE/RENAME)
- Dependencies between phases
- Testing checkpoints
## Risks & Mitigation
- What could go wrong?
- How to prevent it?
- Rollback strategy
## Success Criteria
- Measurable improvements
- Testing requirements
- Verification steps
```
See `planning/README.md` for detailed template explanation.
### 3. Iterate Freely
Since `planning/` is git-ignored:
- Draft multiple versions
- Get AI assistance without commit pressure
- Refine until the plan is solid
- No need to clean up intermediate versions
### 4. Implementation Phase
Once plan is approved:
- Follow the phases defined in the plan
- Test after each phase (don't skip!)
- Update plan if issues discovered
- Track progress through phase status
### 5. After Completion
**Option A: Archive in docs/development/**
If the plan has lasting value (successful pattern, reusable approach):
```bash
mv planning/my-feature-refactoring-plan.md docs/development/
git add docs/development/my-feature-refactoring-plan.md
git commit -m "docs: archive successful refactoring plan"
```
**Option B: Delete**
If the plan served its purpose and code is the source of truth:
```bash
rm planning/my-feature-refactoring-plan.md
```
**Option C: Keep locally (not committed)**
For "why we didn't do X" reference:
```bash
mkdir -p planning/archive
mv planning/my-feature-refactoring-plan.md planning/archive/
# Still git-ignored, just organized
```
## Real-World Example
The **sensor/ package refactoring** (Nov 2025) is a successful example:
**Before:**
- `sensor.py` - 2,574 lines, hard to navigate
**After:**
- `sensor/` package with 5 focused modules
- Each module <800 lines
- Clear separation of concerns
**Process:**
1. Created `planning/module-splitting-plan.md` (now in `docs/development/`)
2. Defined 6 phases with clear file lifecycle
3. Implemented phase by phase
4. Tested after each phase
5. Documented in AGENTS.md
6. Moved plan to `docs/development/` as reference
**Key learnings:**
- Temporary `_impl.py` files avoid Python package conflicts
- Test after EVERY phase (don't accumulate changes)
- Clear file lifecycle (CREATE/MODIFY/DELETE/RENAME)
- Phase-by-phase approach enables safe rollback
**Note:** The complete module splitting plan was documented during implementation but has been superseded by the actual code structure.
## Phase-by-Phase Implementation
### Why Phases Matter
Breaking refactorings into phases:
- ✅ Enables testing after each change (catch bugs early)
- ✅ Allows rollback to last good state
- ✅ Makes progress visible
- ✅ Reduces cognitive load (focus on one thing)
- ❌ Takes more time (but worth it!)
### Phase Structure
Each phase should:
1. **Have clear goal** - What's being changed?
2. **Document file lifecycle** - CREATE/MODIFY/DELETE/RENAME
3. **Define success criteria** - How to verify it worked?
4. **Include testing steps** - What to test?
5. **Estimate time** - Realistic time budget
### Example Phase Documentation
```markdown
### Phase 3: Extract Helper Functions (Session 3)
**Goal**: Move pure utility functions to helpers.py
**File Lifecycle**:
- ✨ CREATE `sensor/helpers.py` (utility functions)
- ✏️ MODIFY `sensor/core.py` (import from helpers.py)
**Steps**:
1. Create sensor/helpers.py
2. Move pure functions (no state, no self)
3. Add comprehensive docstrings
4. Update imports in core.py
**Estimated time**: 45 minutes
**Success criteria**:
- ✅ All pure functions moved
- ✅ `./scripts/lint-check` passes
- ✅ HA starts successfully
- ✅ All entities work correctly
```
## Testing Strategy
### After Each Phase
Minimum testing checklist:
```bash
# 1. Linting passes
./scripts/lint-check
# 2. Home Assistant starts
./scripts/develop
# Watch for startup errors in logs
# 3. Integration loads
# Check: Settings → Devices & Services → Tibber Prices
# Verify: All entities appear
# 4. Basic functionality
# Test: Data updates without errors
# Check: Entity states update correctly
```
### Comprehensive Testing (Final Phase)
After completing all phases:
- Test all entities (sensors, binary sensors)
- Test configuration flow (add/modify/remove)
- Test options flow (change settings)
- Test services (custom service calls)
- Test error handling (disconnect API, invalid data)
- Test caching (restart HA, verify cache loads)
- Test time-based updates (quarter-hour refresh)
## Common Pitfalls
### ❌ Skip Planning for Large Changes
**Problem:** "This seems straightforward, I'll just start coding..."
**Result:** Halfway through, realize the approach doesn't work. Wasted time.
**Solution:** If unsure, spend 30 minutes on a rough plan. Better to plan and discard than get stuck.
### ❌ Implement All Phases at Once
**Problem:** "I'll do all phases, then test everything..."
**Result:** 10+ files changed, 2000+ lines modified, hard to debug if something breaks.
**Solution:** Test after EVERY phase. Commit after each successful phase.
### ❌ Forget to Update Documentation
**Problem:** Code is refactored, but AGENTS.md and docs/ still reference old structure.
**Result:** AI/humans get confused by outdated documentation.
**Solution:** Include "Documentation Phase" at the end of every refactoring plan.
### ❌ Ignore the Planning Directory
**Problem:** "I'll just create the plan in docs/ directly..."
**Result:** Git history polluted with draft iterations, or pressure to "commit something" too early.
**Solution:** Always use `planning/` for work-in-progress. Move to `docs/` only when done.
## Integration with AI Development
This project uses AI heavily (GitHub Copilot, Claude). The planning process supports AI development:
**AI reads from:**
- `AGENTS.md` - Long-term memory, patterns, conventions (AI-focused)
- `docs/development/` - Human-readable guides (human-focused)
- `planning/` - Active refactoring plans (shared context)
**AI updates:**
- `AGENTS.md` - When patterns change
- `planning/*.md` - During refactoring implementation
- `docs/development/` - After successful completion
**Why separate AGENTS.md and docs/development/?**
- `AGENTS.md`: Technical, comprehensive, AI-optimized
- `docs/development/`: Practical, focused, human-optimized
- Both stay in sync but serve different audiences
See [AGENTS.md](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) section "Planning Major Refactorings" for AI-specific guidance.
## Tools and Resources
### Planning Directory
- `planning/` - Git-ignored workspace for drafts
- `planning/README.md` - Detailed planning documentation
- `planning/*.md` - Active refactoring plans
### Example Plans
- `docs/development/module-splitting-plan.md` - ✅ Completed, archived
- `planning/config-flow-refactoring-plan.md` - 🔄 Planned (1013 lines → 4 modules)
- `planning/binary-sensor-refactoring-plan.md` - 🔄 Planned (644 lines → 4 modules)
- `planning/coordinator-refactoring-plan.md` - 🔄 Planned (1446 lines, high complexity)
### Helper Scripts
```bash
./scripts/lint-check # Verify code quality
./scripts/develop # Start HA for testing
./scripts/lint # Auto-fix issues
```
## FAQ
### Q: When should I create a plan vs. just start coding?
**A:** If you're asking this question, you probably need a plan. 😊
Simple rule: If you can't describe the entire change in 3 sentences, create a plan.
### Q: How detailed should the plan be?
**A:** Detailed enough to execute without major surprises, but not a line-by-line script.
Good plan level:
- Lists all files affected (CREATE/MODIFY/DELETE)
- Defines phases with clear boundaries
- Includes testing strategy
- Estimates time per phase
Too detailed:
- Exact code snippets for every change
- Line-by-line instructions
Too vague:
- "Refactor sensor.py to be better"
- No phase breakdown
- No testing strategy
### Q: What if the plan changes during implementation?
**A:** Update the plan! Planning documents are living documents.
If you discover:
- Better approach → Update "Proposed Solution"
- More phases needed → Add to "Migration Strategy"
- New risks → Update "Risks & Mitigation"
Document WHY the plan changed (helps future refactorings).
### Q: Should every refactoring follow this process?
**A:** No! Use judgment:
- **Small changes (<100 lines, clear approach)**: Just do it, no plan needed
- **Medium changes (unclear scope)**: Write rough outline, refine if needed
- **Large changes (>500 lines, >5 files)**: Full planning process
### Q: How do I know when a refactoring is successful?
**A:** Check the "Success Criteria" from your plan:
Typical criteria:
- ✅ All linting checks pass
- ✅ HA starts without errors
- ✅ All entities functional
- ✅ No regressions (existing features work)
- ✅ Code easier to understand/modify
- ✅ Documentation updated
If you can't tick all boxes, the refactoring isn't done.
## Summary
**Key takeaways:**
1. **Plan when scope is unclear** (>500 lines, >5 files, breaking changes)
2. **Use planning/ directory** for free iteration (git-ignored)
3. **Work in phases** and test after each phase
4. **Document file lifecycle** (CREATE/MODIFY/DELETE/RENAME)
5. **Update documentation** after completion (AGENTS.md, docs/)
6. **Archive or delete** plan after implementation
**Remember:** Good planning prevents half-finished refactorings and makes rollback easier when things go wrong.
---
**Next steps:**
- Read `planning/README.md` for detailed template
- Check `docs/development/module-splitting-plan.md` for real example
- Browse `planning/` for active refactoring plans

---
comments: false
---
# Release Notes Generation
This project supports **three ways** to generate release notes from conventional commits, plus **automatic version management**.
## 🚀 Quick Start: Preparing a Release
**Recommended workflow (automatic & foolproof):**
```bash
# 1. Use the helper script to prepare release
./scripts/release/prepare 0.3.0
# This will:
# - Update manifest.json version to 0.3.0
# - Create commit: "chore(release): bump version to 0.3.0"
# - Create tag: v0.3.0
# - Show you what will be pushed
# 2. Review and push when ready
git push origin main v0.3.0
# 3. CI/CD automatically:
# - Detects the new tag
# - Generates release notes (excluding version bump commit)
# - Creates GitHub release
```
**If you forget to bump manifest.json:**
```bash
# Just edit manifest.json manually and commit
vim custom_components/tibber_prices/manifest.json # "version": "0.3.0"
git commit -am "chore(release): bump version to 0.3.0"
git push
# Auto-Tag workflow detects manifest.json change and creates tag automatically!
# Then Release workflow kicks in and creates the GitHub release
```
---
## 📋 Release Options
### 1. GitHub UI Button (Easiest)
Use GitHub's built-in release notes generator:
1. Go to [Releases](https://github.com/jpawlowski/hass.tibber_prices/releases)
2. Click "Draft a new release"
3. Select your tag
4. Click "Generate release notes" button
5. Edit if needed and publish
**Uses:** `.github/release.yml` configuration
**Best for:** Quick releases, works with PRs that have labels
**Note:** Direct commits appear in "Other Changes" category
---
### 2. Local Script (Intelligent)
Run `./scripts/release/generate-notes` to parse conventional commits locally.
**Automatic backend detection:**
```bash
# Generate from latest tag to HEAD
./scripts/release/generate-notes
# Generate between specific tags
./scripts/release/generate-notes v1.0.0 v1.1.0
# Generate from tag to HEAD
./scripts/release/generate-notes v1.0.0 HEAD
```
**Force specific backend:**
```bash
# Use AI (GitHub Copilot CLI)
RELEASE_NOTES_BACKEND=copilot ./scripts/release/generate-notes
# Use git-cliff (template-based)
RELEASE_NOTES_BACKEND=git-cliff ./scripts/release/generate-notes
# Use manual parsing (grep/awk fallback)
RELEASE_NOTES_BACKEND=manual ./scripts/release/generate-notes
```
**Disable AI** (useful for CI/CD):
```bash
USE_AI=false ./scripts/release/generate-notes
```
#### Backend Priority
The script automatically selects the best available backend:
1. **GitHub Copilot CLI** - AI-powered, context-aware (best quality)
2. **git-cliff** - Fast Rust tool with templates (reliable)
3. **Manual** - Simple grep/awk parsing (always works)
In CI/CD (`$CI` or `$GITHUB_ACTIONS`), AI is automatically disabled.
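A simplified sketch of this selection order (the actual script may differ in detail):

```shell
#!/usr/bin/env bash
# Sketch of the backend auto-detection described above
detect_backend() {
  if [ "${USE_AI:-true}" = "true" ] && [ -z "${CI:-}" ] && [ -z "${GITHUB_ACTIONS:-}" ] \
     && command -v copilot >/dev/null 2>&1; then
    echo "copilot"      # AI-powered, best quality
  elif command -v git-cliff >/dev/null 2>&1; then
    echo "git-cliff"    # template-based, reliable
  else
    echo "manual"       # grep/awk fallback, always works
  fi
}
```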
#### Installing Optional Backends
**In DevContainer (automatic):**
git-cliff is automatically installed when the DevContainer is built:
- **Rust toolchain**: Installed via `ghcr.io/devcontainers/features/rust:1` (minimal profile)
- **git-cliff**: Installed via cargo in `scripts/setup/setup`
Simply rebuild the container (VS Code: "Dev Containers: Rebuild Container") and git-cliff will be available.
**Manual installation (outside DevContainer):**
**git-cliff** (template-based):
```bash
# See: https://git-cliff.org/docs/installation
# macOS
brew install git-cliff
# Cargo (all platforms)
cargo install git-cliff
# Manual binary download
wget https://github.com/orhun/git-cliff/releases/latest/download/git-cliff-x86_64-unknown-linux-gnu.tar.gz
tar -xzf git-cliff-*.tar.gz
sudo mv git-cliff-*/git-cliff /usr/local/bin/
```
---
### 3. CI/CD Automation
Automatic release notes on tag push.
**Workflow:** `.github/workflows/release.yml`
**Triggers:** Version tags (`v1.0.0`, `v2.1.3`, etc.)
```bash
# Create and push a tag to trigger automatic release
git tag v1.0.0
git push origin v1.0.0
# GitHub Actions will:
# 1. Detect the new tag
# 2. Generate release notes using git-cliff
# 3. Create a GitHub release automatically
```
**Backend:** Uses `git-cliff` (AI disabled in CI for reliability)
---
## 📝 Output Format
All methods produce GitHub-flavored Markdown with emoji categories:
```markdown
## 🎉 New Features
- **scope**: Description ([abc1234](link-to-commit))
## 🐛 Bug Fixes
- **scope**: Description ([def5678](link-to-commit))
## 📚 Documentation
- **scope**: Description ([ghi9012](link-to-commit))
## 🔧 Maintenance & Refactoring
- **scope**: Description ([jkl3456](link-to-commit))
## 🧪 Testing
- **scope**: Description ([mno7890](link-to-commit))
```
---
## 🎯 When to Use Which
| Method | Use Case | Pros | Cons |
|--------|----------|------|------|
| **Helper Script** | Normal releases | Foolproof, automatic | Requires script |
| **Auto-Tag Workflow** | Forgot script | Safety net, automatic tagging | Still need manifest bump |
| **GitHub Button** | Manual quick release | Easy, no script | Limited categorization |
| **Local Script** | Testing release notes | Preview before release | Manual process |
| **CI/CD** | After tag push | Fully automatic | Needs tag first |
---
## 🔄 Complete Release Workflows
### Workflow A: Using Helper Script (Recommended)
```bash
# Step 1: Prepare release (all-in-one)
./scripts/release/prepare 0.3.0
# Step 2: Review changes
git log -1 --stat
git show v0.3.0
# Step 3: Push when ready
git push origin main v0.3.0
# Done! CI/CD creates the release automatically
```
**What happens:**
1. Script bumps manifest.json → commits → creates tag locally
2. You push commit + tag together
3. Release workflow sees tag → generates notes → creates release
---
### Workflow B: Manual (with Auto-Tag Safety Net)
```bash
# Step 1: Bump version manually
vim custom_components/tibber_prices/manifest.json
# Change: "version": "0.3.0"
# Step 2: Commit
git commit -am "chore(release): bump version to 0.3.0"
git push
# Step 3: Wait for Auto-Tag workflow
# GitHub Actions automatically creates v0.3.0 tag
# Then Release workflow creates the release
```
**What happens:**
1. You push manifest.json change
2. Auto-Tag workflow detects change → creates tag automatically
3. Release workflow sees new tag → creates release
---
### Workflow C: Manual Tag (Old Way)
```bash
# Step 1: Bump version
vim custom_components/tibber_prices/manifest.json
git commit -am "chore(release): bump version to 0.3.0"
# Step 2: Create tag manually
git tag v0.3.0
git push origin main v0.3.0
# Release workflow creates release
```
**What happens:**
1. You create and push tag manually
2. Release workflow creates release
3. Auto-Tag workflow skips (tag already exists)
---
## ⚙️ Configuration Files
- `scripts/release/prepare` - Helper script to bump version + create tag
- `.github/workflows/auto-tag.yml` - Automatic tag creation on manifest.json change
- `.github/workflows/release.yml` - Automatic release on tag push
- `.github/release.yml` - GitHub UI button configuration
- `cliff.toml` - git-cliff template (filters out version bumps)
---
## 🛡️ Safety Features
### 1. **Version Validation**
Both helper script and auto-tag workflow validate version format (X.Y.Z).
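The format check boils down to a single pattern match. A simplified sketch — the real checks live in the helper script and workflow and may accept additional forms:

```python
import re

# Simplified sketch of the X.Y.Z validation; the actual scripts may
# accept additional forms (e.g. pre-release suffixes).
SEMVER_PATTERN = re.compile(r"^\d+\.\d+\.\d+$")

def is_valid_version(version: str) -> bool:
    """Accept plain X.Y.Z only (no leading "v", no suffix)."""
    return SEMVER_PATTERN.fullmatch(version) is not None
```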
### 2. **No Duplicate Tags**
- Helper script checks if tag exists (local + remote)
- Auto-tag workflow checks if tag exists before creating
### 3. **Atomic Operations**
Helper script creates commit + tag locally. You decide when to push.
### 4. **Version Bumps Filtered**
Release notes automatically exclude `chore(release): bump version` commits.
### 5. **Rollback Instructions**
Helper script shows how to undo if you change your mind.
---
## 🐛 Troubleshooting
**"Tag already exists" error:**
```bash
# Local tag
git tag -d v0.3.0
# Remote tag (only if you need to recreate)
git push origin :refs/tags/v0.3.0
```
**Manifest version doesn't match tag:**
This shouldn't happen with the new workflows, but if it does:
```bash
# 1. Fix manifest.json
vim custom_components/tibber_prices/manifest.json
# 2. Amend the commit
git commit --amend -am "chore(release): bump version to 0.3.0"
# 3. Move the tag
git tag -f v0.3.0
git push -f origin main v0.3.0
```
**Auto-tag didn't create tag:**
Check workflow runs in GitHub Actions. Common causes:
- Tag already exists remotely
- Invalid version format in manifest.json
- manifest.json not in the commit that was pushed
---
## 🔍 Format Requirements
**HACS:** No specific format required, uses GitHub releases as-is
**Home Assistant:** No specific format required for custom integrations
**Markdown:** Standard GitHub-flavored Markdown supported
**HTML:** Can include `<ha-alert>` tags if needed
---
## 💡 Tips
1. **Conventional Commits:** Use proper commit format for best results:
```
feat(scope): Add new feature
Detailed description of what changed.
Impact: Users can now do X and Y.
```
2. **Impact Section:** Add `Impact:` in commit body for user-friendly descriptions
3. **Test Locally:** Run `./scripts/release/generate-notes` before creating release
4. **AI vs Template:** GitHub Copilot CLI provides better descriptions; git-cliff is faster and more reliable
5. **CI/CD:** Tag push triggers automatic release - no manual intervention needed

# Repairs System
The Tibber Prices integration includes a proactive repair notification system that alerts users to important issues requiring attention. This system leverages Home Assistant's built-in `issue_registry` to create user-facing notifications in the UI.
## Overview
The repairs system is implemented in `coordinator/repairs.py` via the `TibberPricesRepairManager` class, which is instantiated in the coordinator and integrated into the update cycle.
**Design Principles:**
- **Proactive**: Detect issues before they become critical
- **User-friendly**: Clear explanations with actionable guidance
- **Auto-clearing**: Repairs automatically disappear when conditions resolve
- **Non-blocking**: Integration continues to work even with active repairs
## Implemented Repair Types
### 1. Tomorrow Data Missing
**Issue ID:** `tomorrow_data_missing_{entry_id}`
**When triggered:**
- Current time is after 18:00 (configurable via `TOMORROW_DATA_WARNING_HOUR`)
- Tomorrow's electricity price data is still not available
**When cleared:**
- Tomorrow's data becomes available
- Automatically checks on every successful API update
**User impact:**
Users cannot plan ahead for tomorrow's electricity usage optimization. Automations relying on tomorrow's prices will not work.
**Implementation:**
```python
# In coordinator update cycle
has_tomorrow_data = self._data_fetcher.has_tomorrow_data(result["priceInfo"])
await self._repair_manager.check_tomorrow_data_availability(
has_tomorrow_data=has_tomorrow_data,
current_time=current_time,
)
```
**Translation placeholders:**
- `home_name`: Name of the affected home
- `warning_hour`: Hour after which warning appears (default: 18)
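The trigger condition reduces to a single comparison. A sketch of the condition only — the real `check_tomorrow_data_availability()` also creates and clears the repair issue:

```python
from datetime import datetime

TOMORROW_DATA_WARNING_HOUR = 18  # from coordinator/constants.py

# Sketch of the trigger condition only; the real method also creates or
# clears the repair issue in the issue registry.
def tomorrow_data_warning_needed(*, has_tomorrow_data: bool, current_time: datetime) -> bool:
    """Warn at or after the warning hour if tomorrow's prices are still missing."""
    return current_time.hour >= TOMORROW_DATA_WARNING_HOUR and not has_tomorrow_data
```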
### 2. Rate Limit Exceeded
**Issue ID:** `rate_limit_exceeded_{entry_id}`
**When triggered:**
- Integration encounters 3 or more consecutive rate limit errors (HTTP 429)
- Threshold configurable via `RATE_LIMIT_WARNING_THRESHOLD`
**When cleared:**
- Successful API call completes (no rate limit error)
- Error counter resets to 0
**User impact:**
API requests are being throttled, causing stale data. Updates may be delayed until rate limit expires.
**Implementation:**
```python
# In error handler
is_rate_limit = (
"429" in error_str
or "rate limit" in error_str
or "too many requests" in error_str
)
if is_rate_limit:
await self._repair_manager.track_rate_limit_error()
# On successful update
await self._repair_manager.clear_rate_limit_tracking()
```
**Translation placeholders:**
- `home_name`: Name of the affected home
- `error_count`: Number of consecutive rate limit errors
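The counter behavior can be sketched as a small state machine. Class and method names here are illustrative, not the repair manager's actual API:

```python
RATE_LIMIT_WARNING_THRESHOLD = 3  # from coordinator/constants.py

# Sketch of the consecutive-error counter behind the repair; class and
# method names are illustrative, not the repair manager's actual API.
class RateLimitTracker:
    def __init__(self) -> None:
        self.error_count = 0
        self.repair_active = False

    def track_error(self) -> bool:
        """Count an error; return True when the repair should be created."""
        self.error_count += 1
        should_create = (
            self.error_count >= RATE_LIMIT_WARNING_THRESHOLD
            and not self.repair_active
        )
        if should_create:
            self.repair_active = True
        return should_create

    def clear(self) -> bool:
        """Reset on success; return True when an active repair should be cleared."""
        self.error_count = 0
        was_active = self.repair_active
        self.repair_active = False
        return was_active
```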
### 3. Home Not Found
**Issue ID:** `home_not_found_{entry_id}`
**When triggered:**
- Home configured in this integration is no longer present in Tibber account
- Detected during user data refresh (daily check)
**When cleared:**
- Home reappears in Tibber account (unlikely - manual cleanup expected)
- Integration entry is removed (shutdown cleanup)
**User impact:**
Integration cannot fetch data for a non-existent home. User must remove the config entry and re-add if needed.
**Implementation:**
```python
# After user data update
home_exists = self._data_fetcher._check_home_exists(home_id)
if not home_exists:
await self._repair_manager.create_home_not_found_repair()
else:
await self._repair_manager.clear_home_not_found_repair()
```
**Translation placeholders:**
- `home_name`: Name of the missing home
- `entry_id`: Config entry ID for reference
## Configuration Constants
Defined in `coordinator/constants.py`:
```python
TOMORROW_DATA_WARNING_HOUR = 18 # Hour after which to warn about missing tomorrow data
RATE_LIMIT_WARNING_THRESHOLD = 3 # Number of consecutive errors before creating repair
```
## Architecture
### Class Structure
```python
class TibberPricesRepairManager:
"""Manages repair issues for a single Tibber home."""
def __init__(
self,
hass: HomeAssistant,
entry_id: str,
home_name: str,
) -> None:
"""Initialize repair manager."""
self._hass = hass
self._entry_id = entry_id
self._home_name = home_name
# State tracking
self._tomorrow_data_repair_active = False
self._rate_limit_error_count = 0
self._rate_limit_repair_active = False
self._home_not_found_repair_active = False
```
### State Tracking
Each repair type maintains internal state to avoid redundant operations:
- **`_tomorrow_data_repair_active`**: Boolean flag, prevents creating duplicate repairs
- **`_rate_limit_error_count`**: Integer counter, tracks consecutive errors
- **`_rate_limit_repair_active`**: Boolean flag, tracks repair status
- **`_home_not_found_repair_active`**: Boolean flag, one-time repair (manual cleanup)
### Lifecycle Integration
**Coordinator Initialization:**
```python
self._repair_manager = TibberPricesRepairManager(
hass=hass,
entry_id=self.config_entry.entry_id,
home_name=self._home_name,
)
```
**Update Cycle Integration:**
```python
# Success path - check conditions
if result and "priceInfo" in result:
has_tomorrow_data = self._data_fetcher.has_tomorrow_data(result["priceInfo"])
await self._repair_manager.check_tomorrow_data_availability(
has_tomorrow_data=has_tomorrow_data,
current_time=current_time,
)
await self._repair_manager.clear_rate_limit_tracking()
# Error path - track rate limits
if is_rate_limit:
await self._repair_manager.track_rate_limit_error()
```
**Shutdown Cleanup:**
```python
async def async_shutdown(self) -> None:
"""Shut down coordinator and clean up."""
await self._repair_manager.clear_all_repairs()
# ... other cleanup ...
```
## Translation System
Repairs use Home Assistant's standard translation system. Translations are defined in:
- `/translations/en.json`
- `/translations/de.json`
- `/translations/nb.json`
- `/translations/nl.json`
- `/translations/sv.json`
**Structure:**
```json
{
"issues": {
"tomorrow_data_missing": {
"title": "Tomorrow's price data missing for {home_name}",
"description": "Detailed explanation with multiple paragraphs...\n\nPossible causes:\n- Cause 1\n- Cause 2"
}
}
}
```
## Home Assistant Integration
Repairs appear in:
- **Settings → System → Repairs** (main repairs panel)
- **Notifications** (bell icon in UI shows repair count)
Repair properties:
- **`is_fixable=False`**: No automated fix available (user action required)
- **`severity=IssueSeverity.WARNING`**: Yellow warning level (not critical)
- **`translation_key`**: References `issues.{key}` in translation files
## Testing Repairs
### Tomorrow Data Missing
1. Wait until after 18:00 local time
2. Ensure integration has no tomorrow price data
3. Repair should appear in UI
4. When tomorrow data arrives (next API fetch), repair clears
**Manual trigger:**
```python
# Temporarily set warning hour to current hour for testing
TOMORROW_DATA_WARNING_HOUR = datetime.now().hour
```
### Rate Limit Exceeded
1. Simulate 3+ consecutive rate limit errors
2. Repair should appear after 3rd error
3. Successful API call clears the repair
**Manual test:**
- Reduce API polling interval to trigger rate limiting
- Or temporarily return HTTP 429 in API client
### Home Not Found
1. Remove home from Tibber account via app/web
2. Wait for user data refresh (daily check)
3. Repair appears indicating home is missing
4. Remove integration entry to clear repair
## Adding New Repair Types
To add a new repair type:
1. **Add constants** (if needed) in `coordinator/constants.py`
2. **Add state tracking** in `TibberPricesRepairManager.__init__`
3. **Implement check method** with create/clear logic
4. **Add translations** to all 5 language files
5. **Integrate into coordinator** update cycle or error handlers
6. **Add cleanup** to `clear_all_repairs()` method
7. **Document** in this file
**Example template:**
```python
async def check_new_condition(self, *, param: bool) -> None:
"""Check new condition and create/clear repair."""
should_warn = param # Your condition logic
if should_warn and not self._new_repair_active:
await self._create_new_repair()
elif not should_warn and self._new_repair_active:
await self._clear_new_repair()
async def _create_new_repair(self) -> None:
"""Create new repair issue."""
_LOGGER.warning("New issue detected - creating repair")
ir.async_create_issue(
self._hass,
DOMAIN,
f"new_issue_{self._entry_id}",
is_fixable=False,
severity=ir.IssueSeverity.WARNING,
translation_key="new_issue",
translation_placeholders={
"home_name": self._home_name,
},
)
self._new_repair_active = True
async def _clear_new_repair(self) -> None:
"""Clear new repair issue."""
_LOGGER.debug("New issue resolved - clearing repair")
ir.async_delete_issue(
self._hass,
DOMAIN,
f"new_issue_{self._entry_id}",
)
self._new_repair_active = False
```
## Best Practices
1. **Always use state tracking** - Prevents duplicate repair creation
2. **Auto-clear when resolved** - Improves user experience
3. **Clear on shutdown** - Prevents orphaned repairs
4. **Use descriptive issue IDs** - Include entry_id for multi-home setups
5. **Provide actionable guidance** - Tell users what they can do
6. **Use appropriate severity** - WARNING for most cases, ERROR only for critical
7. **Test all language translations** - Ensure placeholders work correctly
8. **Document expected behavior** - What triggers, what clears, what user should do
## Future Enhancements
Potential additions to the repairs system:
- **Stale data warning**: Alert when cache is >24 hours old with no API updates
- **Missing permissions**: Detect insufficient API token scopes
- **Config migration needed**: Notify users of breaking changes requiring reconfiguration
- **Extreme price alert**: Warn when prices exceed historical thresholds (optional, user-configurable)
## References
- Home Assistant Repairs Documentation: https://developers.home-assistant.io/docs/core/platform/repairs
- Issue Registry API: `homeassistant.helpers.issue_registry`
- Integration Constants: `custom_components/tibber_prices/const.py`
- Repair Manager Implementation: `custom_components/tibber_prices/coordinator/repairs.py`

# Development Setup
> **Note:** This guide is under construction. For now, please refer to [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) for detailed setup information.
## Prerequisites
- VS Code with Dev Container support
- Docker installed and running
- GitHub account (for Tibber API token)
## Quick Setup
```bash
# Clone the repository
git clone https://github.com/jpawlowski/hass.tibber_prices.git
cd hass.tibber_prices
# Open in VS Code
code .
# Reopen in DevContainer (VS Code will prompt)
# Or manually: Ctrl+Shift+P → "Dev Containers: Reopen in Container"
```
## Development Environment
The DevContainer includes:
- Python 3.13 with `.venv` at `/home/vscode/.venv/`
- `uv` package manager (fast, modern Python tooling)
- Home Assistant development dependencies
- Ruff linter/formatter
- Git, GitHub CLI, Node.js, Rust toolchain
## Running the Integration
```bash
# Start Home Assistant in debug mode
./scripts/develop
```
Visit http://localhost:8123
## Making Changes
```bash
# Lint and format code
./scripts/lint
# Check-only (CI mode)
./scripts/lint-check
# Validate integration structure
./scripts/release/hassfest
```
See [`AGENTS.md`](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md) for detailed patterns and conventions.

# Testing
> **Note:** This guide is under construction.
## Integration Validation
Before running tests or committing changes, validate the integration structure:
```bash
# Run local validation (JSON syntax, Python syntax, required files)
./scripts/release/hassfest
```
This lightweight script checks:
- ✓ `config_flow.py` exists
- ✓ `manifest.json` is valid JSON with required fields
- ✓ Translation files have valid JSON syntax
- ✓ All Python files compile without syntax errors
**Note:** Full hassfest validation runs in GitHub Actions on push.
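The manifest portion of the local check can be sketched like this. The required-field list is an assumption based on Home Assistant's manifest requirements, not necessarily the script's exact set:

```python
import json
from pathlib import Path

# Sketch of the manifest.json portion of the local validation; the required
# field list is an assumption, not necessarily the script's exact set.
REQUIRED_FIELDS = ("domain", "name", "version")

def check_manifest(path: Path) -> list[str]:
    """Return a list of problems found (an empty list means the manifest passes)."""
    try:
        manifest = json.loads(path.read_text())
    except json.JSONDecodeError as err:
        return [f"invalid JSON: {err}"]
    return [f"missing field: {field}" for field in REQUIRED_FIELDS if field not in manifest]
```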
## Running Tests
```bash
# Run all tests
pytest tests/
# Run specific test file
pytest tests/test_coordinator.py
# Run with coverage
pytest --cov=custom_components.tibber_prices tests/
```
## Manual Testing
```bash
# Start development environment
./scripts/develop
```
Then test in Home Assistant UI:
- Configuration flow
- Sensor states and attributes
- Services
- Translation strings
## Test Guidelines
Coming soon...

---
comments: false
---
# Timer Architecture
This document explains the timer/scheduler system in the Tibber Prices integration - what runs when, why, and how they coordinate.
## Overview
The integration uses **three independent timer mechanisms** for different purposes:
| Timer | Type | Interval | Purpose | Trigger Method |
|-------|------|----------|---------|----------------|
| **Timer #1** | HA built-in | 15 minutes | API data updates | `DataUpdateCoordinator` |
| **Timer #2** | Custom | :00, :15, :30, :45 | Entity state refresh | `async_track_utc_time_change()` |
| **Timer #3** | Custom | Every minute | Countdown/progress | `async_track_utc_time_change()` |
**Key principle:** Timer #1 (HA) controls **data fetching**, Timer #2 controls **entity updates**, Timer #3 controls **timing displays**.
---
## Timer #1: DataUpdateCoordinator (HA Built-in)
**File:** `coordinator/core.py``TibberPricesDataUpdateCoordinator`
**Type:** Home Assistant's built-in `DataUpdateCoordinator` with `UPDATE_INTERVAL = 15 minutes`
**What it is:**
- HA provides this timer system automatically when you inherit from `DataUpdateCoordinator`
- Triggers `_async_update_data()` method every 15 minutes
- **Not** synchronized to clock boundaries (each installation has different start time)
**Purpose:** Check if fresh API data is needed, fetch if necessary
**What it does:**
```python
async def _async_update_data(self) -> TibberPricesData:
# Step 1: Check midnight turnover FIRST (prevents race with Timer #2)
if self._check_midnight_turnover_needed(dt_util.now()):
await self._perform_midnight_data_rotation(dt_util.now())
# Notify ALL entities after midnight turnover
return self.data # Early return
# Step 2: Check if we need tomorrow data (after 13:00)
if self._should_update_price_data() == "tomorrow_check":
await self._fetch_and_update_data() # Fetch from API
return self.data
# Step 3: Use cached data (fast path - most common)
return self.data
```
**Load Distribution:**
- Each HA installation starts Timer #1 at different times → natural distribution
- Tomorrow data check adds 0-30s random delay → prevents "thundering herd" on Tibber API
- Result: API load spread over ~30 minutes instead of all at once
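The jitter itself is simple. A sketch — the coordinator's actual delay handling may differ:

```python
import asyncio
import random

# Sketch of the 0-30 s jitter applied before the tomorrow-data fetch; the
# coordinator's actual delay handling may differ.
async def fetch_with_jitter(fetch, max_delay: float = 30.0):
    """Sleep a random 0..max_delay seconds, then run the fetch coroutine."""
    await asyncio.sleep(random.uniform(0.0, max_delay))
    return await fetch()
```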
**Midnight Coordination:**
- Atomic check: `_check_midnight_turnover_needed(now)` compares dates only (no side effects)
- If midnight turnover needed → performs it and returns early
- Timer #2 will see turnover already done and skip gracefully
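The coordination relies on nothing more than a shared, side-effect-free date comparison. A sketch with illustrative attribute and method names:

```python
from datetime import date, datetime

# Sketch of the shared midnight check both timers use; attribute and method
# names are illustrative, not the coordinator's exact API.
class MidnightTurnover:
    def __init__(self) -> None:
        self._last_midnight_check: date | None = None

    def needed(self, now: datetime) -> bool:
        """Pure date comparison -- safe to call from both timers."""
        return self._last_midnight_check != now.date()

    def mark_done(self, now: datetime) -> None:
        """Record the turnover so the other timer skips it."""
        self._last_midnight_check = now.date()
```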
**Why we use HA's timer:**
- Automatic restart after HA restart
- Built-in retry logic for temporary failures
- Standard HA integration pattern
- Handles backpressure (won't queue up if previous update still running)
---
## Timer #2: Quarter-Hour Refresh (Custom)
**File:** `coordinator/listeners.py``ListenerManager.schedule_quarter_hour_refresh()`
**Type:** Custom timer using `async_track_utc_time_change(minute=[0, 15, 30, 45], second=0)`
**Purpose:** Update time-sensitive entity states at interval boundaries **without waiting for API poll**
**Problem it solves:**
- Timer #1 runs every 15 minutes but NOT synchronized to clock (:03, :18, :33, :48)
- Current price changes at :00, :15, :30, :45 → entities would show stale data for up to 15 minutes
- Example: 14:00 new price, but Timer #1 ran at 13:58 → next update at 14:13 → users see old price until 14:13
**What it does:**
```python
async def _handle_quarter_hour_refresh(self, now: datetime) -> None:
# Step 1: Check midnight turnover (coordinates with Timer #1)
if self._check_midnight_turnover_needed(now):
# Timer #1 might have already done this → atomic check handles it
await self._perform_midnight_data_rotation(now)
# Notify ALL entities after midnight turnover
return
# Step 2: Normal quarter-hour refresh (most common path)
# Only notify time-sensitive entities (current_interval_price, etc.)
self._listener_manager.async_update_time_sensitive_listeners()
```
**Smart Boundary Tolerance:**
- Uses `round_to_nearest_quarter_hour()` with ±2 second tolerance
- HA may schedule timer at 14:59:58 → rounds to 15:00:00 (shows new interval)
- HA restart at 14:59:30 → stays at 14:45:00 (shows current interval)
- See [Architecture](./architecture.md#3-quarter-hour-precision) for details
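A minimal sketch of the rounding logic, assuming the ±2 second tolerance described above (the real helper may differ in signature and edge-case handling):

```python
from datetime import datetime, timedelta

BOUNDARY_TOLERANCE = timedelta(seconds=2)

# Minimal sketch of round_to_nearest_quarter_hour(); the real helper may
# differ in signature and edge-case handling.
def round_to_nearest_quarter_hour(now: datetime) -> datetime:
    """Snap forward only within the tolerance window, otherwise floor."""
    floored = now.replace(minute=(now.minute // 15) * 15, second=0, microsecond=0)
    next_boundary = floored + timedelta(minutes=15)
    if next_boundary - now <= BOUNDARY_TOLERANCE:
        return next_boundary  # e.g. 14:59:58 -> 15:00:00
    return floored            # e.g. 14:59:30 -> 14:45:00
```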
**Absolute Time Scheduling:**
- `async_track_utc_time_change()` plans for **all future boundaries** (15:00, 15:15, 15:30, ...)
- NOT relative delays ("in 15 minutes")
- If triggered at 14:59:58 → next trigger is 15:15:00, NOT 15:00:00 (prevents double updates)
**Which entities listen:**
- All sensors that depend on "current interval" (e.g., `current_interval_price`, `next_interval_price`)
- Binary sensors that check "is now in period?" (e.g., `best_price_period_active`)
- ~50-60 entities out of 120+ total
**Why custom timer:**
- HA's built-in coordinator doesn't support exact boundary timing
- We need **absolute time** triggers, not periodic intervals
- Allows fast entity updates without expensive data transformation
---
## Timer #3: Minute Refresh (Custom)
**File:** `coordinator/listeners.py``ListenerManager.schedule_minute_refresh()`
**Type:** Custom timer using `async_track_utc_time_change(second=0)` (every minute)
**Purpose:** Update countdown and progress sensors for smooth UX
**What it does:**
```python
async def _handle_minute_refresh(self, now: datetime) -> None:
# Only notify minute-update entities
# No data fetching, no transformation, no midnight handling
self._listener_manager.async_update_minute_listeners()
```
**Which entities listen:**
- `best_price_remaining_minutes` - Countdown timer
- `peak_price_remaining_minutes` - Countdown timer
- `best_price_progress` - Progress bar (0-100%)
- `peak_price_progress` - Progress bar (0-100%)
- ~10 entities total
**Why custom timer:**
- Users want smooth countdowns (not jumping 15 minutes at a time)
- Progress bars need minute-by-minute updates
- Very lightweight (no data processing, just state recalculation)
**Why NOT every second:**
- Minute precision sufficient for countdown UX
- Reduces CPU load (60× fewer updates than seconds)
- Home Assistant best practice (avoid sub-minute updates)
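The per-minute work really is just arithmetic. A sketch with illustrative function names (not the integration's actual API):

```python
from datetime import datetime

# Illustrative countdown/progress math behind the *_remaining_minutes and
# *_progress sensors; function names are not the integration's actual API.
def remaining_minutes(period_end: datetime, now: datetime) -> int:
    """Whole minutes left in the period, never negative."""
    return max(0, int((period_end - now).total_seconds() // 60))

def progress_percent(start: datetime, end: datetime, now: datetime) -> int:
    """Progress through the period, clamped to 0-100."""
    total = (end - start).total_seconds()
    elapsed = min(max((now - start).total_seconds(), 0.0), total)
    return round(100 * elapsed / total)
```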
---
## Listener Pattern (Python/HA Terminology)
**A common question:** are these timers actually "listeners"?
**Answer:** In Home Assistant terminology:
- **Timer** = The mechanism that triggers at specific times (`async_track_utc_time_change`)
- **Listener** = A callback function that gets called when timer triggers
- **Observer Pattern** = Entities register callbacks, coordinator notifies them
**How it works:**
```python
# Entity registers a listener callback
class TibberPricesSensor(CoordinatorEntity):
async def async_added_to_hass(self):
# Register this entity's update callback
self._remove_listener = self.coordinator.async_add_time_sensitive_listener(
self._handle_coordinator_update
)
# Coordinator maintains list of listeners
class ListenerManager:
def __init__(self):
self._time_sensitive_listeners = [] # List of callbacks
def async_add_time_sensitive_listener(self, callback):
self._time_sensitive_listeners.append(callback)
def async_update_time_sensitive_listeners(self):
# Timer triggered → notify all listeners
for callback in self._time_sensitive_listeners:
callback() # Entity updates itself
```
**Why this pattern:**
- Decouples timer logic from entity logic
- One timer can notify many entities efficiently
- Entities can unregister when removed (cleanup)
- Standard HA pattern for coordinator-based integrations
---
## Timer Coordination Scenarios
### Scenario 1: Normal Operation (No Midnight)
```
14:00:00 → Timer #2 triggers
→ Update time-sensitive entities (current price changed)
→ 60 entities updated (~5ms)
14:03:12 → Timer #1 triggers (HA's 15-min cycle)
→ Check if tomorrow data needed (no, still cached)
→ Return cached data (fast path, ~2ms)
14:15:00 → Timer #2 triggers
→ Update time-sensitive entities
→ 60 entities updated (~5ms)
14:16:00 → Timer #3 triggers
→ Update countdown/progress entities
→ 10 entities updated (~1ms)
```
**Key observation:** Timer #1 and Timer #2 run **independently**, no conflicts.
### Scenario 2: Midnight Turnover
```
23:48:12 → Timer #1 triggers
→ Check midnight: current_date=2025-11-17, last_check=2025-11-17
→ No turnover needed
→ Return cached data
00:00:00 → Timer #2 triggers FIRST (synchronized to midnight)
→ Check midnight: current_date=2025-11-18, last_check=2025-11-17
→ Turnover needed! Perform rotation, save cache
→ _last_midnight_check = 2025-11-18
→ Notify ALL entities
00:03:12 → Timer #1 triggers (its regular cycle)
→ Check midnight: current_date=2025-11-18, last_check=2025-11-18
→ Turnover already done → skip
→ Return existing data (fast path)
```
**Key observation:** Atomic date comparison prevents double-turnover, whoever runs first wins.
### Scenario 3: Tomorrow Data Check (After 13:00)
```
13:00:00 → Timer #2 triggers
→ Normal quarter-hour refresh
→ Update time-sensitive entities
13:03:12 → Timer #1 triggers
→ Check tomorrow data: missing or invalid
→ Fetch from Tibber API (~300ms)
→ Transform data (~200ms)
→ Calculate periods (~100ms)
→ Notify ALL entities (new data available)
13:15:00 → Timer #2 triggers
→ Normal quarter-hour refresh (uses newly fetched data)
→ Update time-sensitive entities
```
**Key observation:** Timer #1 does expensive work (API + transform), Timer #2 does cheap work (entity notify).
---
## Why We Keep HA's Timer (Timer #1)
**A common question:** why keep using HA's timer at all, given that it triggers updates at times we do not control?
**Answer:** It is indeed unsynchronized, but that is actually **intentional**:
### Reason 1: Load Distribution on Tibber API
If all installations used synchronized timers:
- ❌ Everyone fetches at 13:00:00 → Tibber API overload
- ❌ Everyone fetches at 14:00:00 → Tibber API overload
- ❌ "Thundering herd" problem
With HA's unsynchronized timer:
- ✅ Installation A: 13:03:12, 13:18:12, 13:33:12, ...
- ✅ Installation B: 13:07:45, 13:22:45, 13:37:45, ...
- ✅ Installation C: 13:11:28, 13:26:28, 13:41:28, ...
- ✅ Natural distribution over ~30 minutes
- ✅ Plus: Random 0-30s delay on tomorrow checks
**Result:** API load spread evenly, no spikes.
### Reason 2: What Timer #1 Actually Checks
Timer #1 does NOT blindly update. It checks:
```python
def _should_update_price_data(self) -> str:
# Check 1: Do we have tomorrow data? (only relevant after ~13:00)
if tomorrow_missing or tomorrow_invalid:
return "tomorrow_check" # Fetch needed
# Check 2: Is cache still valid?
if cache_valid:
return "cached" # No fetch needed (most common!)
# Check 3: Has enough time passed?
if time_since_last_update < threshold:
return "cached" # Too soon, skip fetch
return "update_needed" # Rare case
```
**Most Timer #1 cycles:** Fast path (~2ms), no API call, just returns cached data.
**API fetch only when:**
- Tomorrow data missing/invalid (after 13:00)
- Cache expired (midnight turnover)
- Explicit user refresh
### Reason 3: HA Integration Best Practices
- ✅ Standard HA pattern: `DataUpdateCoordinator` is recommended by HA docs
- ✅ Automatic retry logic for temporary API failures
- ✅ Backpressure handling (won't queue updates if previous still running)
- ✅ Developer tools integration (users can manually trigger refresh)
- ✅ Diagnostics integration (shows last update time, success/failure)
### What We DO Synchronize
- ✅ **Timer #2:** Entity state updates at exact boundaries (user-visible)
- ✅ **Timer #3:** Countdown/progress at exact minutes (user-visible)
- ❌ **Timer #1:** API fetch timing (invisible to user, distribution wanted)
---
## Performance Characteristics
### Timer #1 (DataUpdateCoordinator)
- **Triggers:** Every 15 minutes (unsynchronized)
- **Fast path:** ~2ms (cache check, return existing data)
- **Slow path:** ~600ms (API fetch + transform + calculate)
- **Frequency:** ~96 times/day
- **API calls:** ~1-2 times/day (cached otherwise)
### Timer #2 (Quarter-Hour Refresh)
- **Triggers:** 96 times/day (exact boundaries)
- **Processing:** ~5ms (notify 60 entities)
- **No API calls:** Uses cached/transformed data
- **No transformation:** Just entity state updates
### Timer #3 (Minute Refresh)
- **Triggers:** 1440 times/day (every minute)
- **Processing:** ~1ms (notify 10 entities)
- **No API calls:** No data processing at all
- **Lightweight:** Just countdown math
**Total CPU budget:** ~15 seconds/day for all timers combined.
---
## Debugging Timer Issues
### Check Timer #1 (HA Coordinator)
```python
# Enable debug logging
_LOGGER.setLevel(logging.DEBUG)
# Watch for these log messages:
"Fetching data from API (reason: tomorrow_check)" # API call
"Using cached data (no update needed)" # Fast path
"Midnight turnover detected (Timer #1)" # Turnover
```
### Check Timer #2 (Quarter-Hour)
```python
# Watch coordinator logs:
"Updated 60 time-sensitive entities at quarter-hour boundary" # Normal
"Midnight turnover detected (Timer #2)" # Turnover
```
### Check Timer #3 (Minute)
```python
# Watch coordinator logs:
"Updated 10 minute-update entities" # Every minute
```
### Common Issues
1. **Timer #2 not triggering:**
- Check: `schedule_quarter_hour_refresh()` called in `__init__`?
- Check: `_quarter_hour_timer_cancel` properly stored?
2. **Double updates at midnight:**
- Should NOT happen (atomic coordination)
- Check: Both timers use same date comparison logic?
3. **API overload:**
- Check: Random delay working? (0-30s jitter on tomorrow check)
- Check: Cache validation logic correct?
---
## Related Documentation
- **[Architecture](./architecture.md)** - Overall system design, data flow
- **[Caching Strategy](./caching-strategy.md)** - Cache lifetimes, invalidation, midnight turnover
- **[AGENTS.md](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/AGENTS.md)** - Complete reference for AI development
---
## Summary
**Three independent timers:**
1. **Timer #1** (HA built-in, 15 min, unsynchronized) → Data fetching (when needed)
2. **Timer #2** (Custom, :00/:15/:30/:45) → Entity state updates (always)
3. **Timer #3** (Custom, every minute) → Countdown/progress (always)
**Key insights:**
- Timer #1 unsynchronized = good (load distribution on API)
- Timer #2 synchronized = good (user sees correct data immediately)
- Timer #3 synchronized = good (smooth countdown UX)
- All three coordinate gracefully (atomic midnight checks, no conflicts)
**"Listener" terminology:**
- Timer = mechanism that triggers
- Listener = callback that gets called
- Observer pattern = entities register, coordinator notifies

@ -1,5 +1,4 @@
[
"v0.25.0b0",
"v0.24.0",
"v0.23.1",
"v0.23.0",

# Actions (Services)
Home Assistant now surfaces these backend service endpoints as **Actions** in the UI (for example, Developer Tools → Actions or the Action editor inside dashboards). Behind the scenes they are still Home Assistant services that use the `service:` key, but this guide uses the word “action” whenever we refer to the user interface.
You can still call them from automations, scripts, and dashboards the same way as before (`service: tibber_prices.get_chartdata`, etc.), just remember that the frontend officially lists them as actions.
## Available Actions
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
### tibber_prices.get_chartdata
**Purpose:** Returns electricity price data in chart-friendly formats for visualization and analysis.
**Key Features:**
- **Flexible Output Formats**: Array of objects or array of arrays
- **Time Range Selection**: Filter by day (yesterday, today, tomorrow)
- **Price Filtering**: Filter by price level or rating
- **Period Support**: Return best/peak price period summaries instead of intervals
- **Resolution Control**: Interval (15-minute) or hourly aggregation
- **Customizable Field Names**: Rename output fields to match your chart library
- **Currency Control**: Override integration default - use base (€/kWh, kr/kWh) or subunit (ct/kWh, øre/kWh)
**Basic Example:**
```yaml
service: tibber_prices.get_chartdata
data:
entry_id: YOUR_ENTRY_ID
day: ["today", "tomorrow"]
output_format: array_of_objects
response_variable: chart_data
```
**Response Format:**
```json
{
"data": [
{
"start_time": "2025-11-17T00:00:00+01:00",
"price_per_kwh": 0.2534
},
{
"start_time": "2025-11-17T00:15:00+01:00",
"price_per_kwh": 0.2498
}
]
}
```
**Common Parameters:**
| Parameter | Description | Default |
| ---------------- | ------------------------------------------- | ----------------------- |
| `entry_id` | Integration entry ID (required) | - |
| `day` | Days to include: yesterday, today, tomorrow | `["today", "tomorrow"]` |
| `output_format` | `array_of_objects` or `array_of_arrays` | `array_of_objects` |
| `resolution` | `interval` (15-min) or `hourly` | `interval` |
| `subunit_currency` | Override display mode: `true` for subunit (ct/øre), `false` for base (€/kr) | Integration setting |
| `round_decimals` | Decimal places (0-10) | 2 (subunit) or 4 (base) |
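As a sketch combining the parameters above (replace `YOUR_ENTRY_ID` with your real entry ID), this call returns hourly prices in subunit currency, rounded to one decimal place:

```yaml
service: tibber_prices.get_chartdata
data:
  entry_id: YOUR_ENTRY_ID
  day: ["today"]
  resolution: hourly
  subunit_currency: true # ct/kWh or øre/kWh instead of €/kWh or kr/kWh
  round_decimals: 1
response_variable: chart_data
```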
**Rolling Window Mode:**
Omit the `day` parameter to get a dynamic 48-hour rolling window that automatically adapts to data availability:
```yaml
service: tibber_prices.get_chartdata
data:
entry_id: YOUR_ENTRY_ID
# Omit 'day' for rolling window
output_format: array_of_objects
response_variable: chart_data
```
**Behavior:**
- **When tomorrow's data is available** (typically after ~13:00): Returns today + tomorrow
- **When tomorrow's data is not yet available**: Returns yesterday + today
This is useful for charts that should always show a 48-hour window without manual day selection.
**Period Filter Example:**
Get best price periods as summaries instead of intervals:
```yaml
service: tibber_prices.get_chartdata
data:
entry_id: YOUR_ENTRY_ID
period_filter: best_price # or peak_price
day: ["today", "tomorrow"]
include_level: true
include_rating_level: true
response_variable: periods
```
**Advanced Filtering:**
```yaml
service: tibber_prices.get_chartdata
data:
entry_id: YOUR_ENTRY_ID
level_filter: ["VERY_CHEAP", "CHEAP"] # Only cheap periods
rating_level_filter: ["LOW"] # Only low-rated prices
insert_nulls: segments # Add nulls at segment boundaries
```
**Complete Documentation:**
For detailed parameter descriptions, open **Developer Tools → Actions** (the UI label) and select `tibber_prices.get_chartdata`. The inline documentation is still stored in `services.yaml` because actions are backed by services.
---
### tibber_prices.get_apexcharts_yaml
> ⚠️ **IMPORTANT:** This action generates a **basic example configuration** as a starting point, NOT a complete solution for all ApexCharts features.
>
> This integration is primarily a **data provider**. The generated YAML demonstrates how to use the `get_chartdata` action to fetch price data. Due to the segmented nature of our data (different time periods per series) and the use of Home Assistant's service API instead of entity attributes, many advanced ApexCharts features (like `in_header`, certain transformations) are **not compatible** or require manual customization.
>
> **You are welcome to customize** the generated YAML for your specific needs, but comprehensive ApexCharts configuration support is beyond the scope of this integration. Community contributions with improved configurations are always appreciated!
>
> **For custom solutions:** Use the `get_chartdata` action directly to build your own charts with full control over the data format and visualization.
**Purpose:** Generates a basic ApexCharts card YAML configuration example for visualizing electricity prices with automatic color-coding by price level.
**Prerequisites:**
- [ApexCharts Card](https://github.com/RomRider/apexcharts-card) (required for all configurations)
- [Config Template Card](https://github.com/iantrich/config-template-card) (required only for rolling window modes - enables dynamic Y-axis scaling)
**✨ Key Features:**
- **Automatic Color-Coded Series**: Separate series for each price level (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE) or rating (LOW, NORMAL, HIGH)
- **Dynamic Y-Axis Scaling**: Rolling window modes automatically use `chart_metadata` sensor for optimal Y-axis bounds
- **Best Price Period Highlights**: Optional vertical bands showing detected best price periods
- **Translated Labels**: Automatically uses your Home Assistant language setting
- **Clean Gap Visualization**: Proper NULL insertion for missing data segments
**Quick Example:**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: today # Optional: yesterday, today, tomorrow, rolling_window, rolling_window_autozoom
level_type: rating_level # or "level" for 5-level classification
highlight_best_price: true # Show best price period overlays
response_variable: apexcharts_config
```
**Day Parameter Options:**
- **Fixed days** (`yesterday`, `today`, `tomorrow`): Static 24-hour views, no additional dependencies
- **Rolling Window** (default when omitted or `rolling_window`): Dynamic 48-hour window that automatically shifts between yesterday+today and today+tomorrow based on data availability
- **✨ Includes dynamic Y-axis scaling** via `chart_metadata` sensor
- **Rolling Window (Auto-Zoom)** (`rolling_window_autozoom`): Same as rolling window, but additionally zooms in progressively (2h lookback + remaining time until midnight, graph span decreases every 15 minutes)
- **✨ Includes dynamic Y-axis scaling** via `chart_metadata` sensor
**Dynamic Y-Axis Scaling (Rolling Window Modes):**
Rolling window configurations automatically integrate with the `chart_metadata` sensor for optimal chart appearance:
- **Automatic bounds**: Y-axis min/max adjust to data range
- **No manual configuration**: Works out of the box if the sensor is enabled
- **Fallback behavior**: If the sensor is disabled, ApexCharts auto-scaling is used
- **Real-time updates**: Y-axis adapts when price data changes
**Example: Today's Prices (Static View)**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: today
level_type: rating_level
response_variable: config
# Use in dashboard:
type: custom:apexcharts-card
# ... paste generated config
```
**Example: Rolling 48h Window (Dynamic View)**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
# Omit 'day' for rolling window (or use 'rolling_window')
level_type: level # 5-level classification
highlight_best_price: true
response_variable: config
# Use in dashboard:
type: custom:config-template-card
entities:
- binary_sensor.<home_name>_tomorrow_s_data_available
- sensor.<home_name>_chart_metadata # For dynamic Y-axis
card:
# ... paste generated config
```
**Screenshots:**
_Screenshots coming soon for all 4 modes: today, tomorrow, rolling_window, rolling_window_autozoom_
**Level Type Options:**
- **`rating_level`** (default): 3 series (LOW, NORMAL, HIGH) - based on your personal thresholds
- **`level`**: 5 series (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE) - absolute price ranges
**Best Price Period Highlights:**
When `highlight_best_price: true`:
- Vertical bands overlay the chart showing detected best price periods
- Tooltip shows "Best Price Period" label when hovering over highlighted areas
- Only appears when best price periods are configured and detected
**Important Notes:**
- **Config Template Card** is only required for rolling window modes (enables dynamic Y-axis)
- Fixed day views (`today`, `tomorrow`, `yesterday`) work with ApexCharts Card alone
- Generated YAML is a starting point - customize colors, styling, features as needed
- All labels are automatically translated to your Home Assistant language
Use the response in Lovelace dashboards by copying the generated YAML.
**Documentation:** Refer to **Developer Tools → Actions** for descriptions of the fields exposed by this action.
---
### tibber_prices.refresh_user_data
**Purpose:** Forces an immediate refresh of user data (homes, subscriptions) from the Tibber API.
**Example:**
```yaml
service: tibber_prices.refresh_user_data
data:
entry_id: YOUR_ENTRY_ID
```
**Note:** User data is cached for 24 hours. Trigger this action only when you need immediate updates (e.g., after changing Tibber subscriptions).
---
## Migration from Chart Data Export Sensor
If you're still using the `sensor.<home_name>_chart_data_export` sensor, consider migrating to the `tibber_prices.get_chartdata` action:
**Benefits:**
- No HA restart required for configuration changes
- More flexible filtering and formatting options
- Better performance (on-demand instead of polling)
- Future-proof (active development)
**Migration Steps:**
1. Note your current sensor configuration (Step 7 in Options Flow)
2. Create automation/script that calls `tibber_prices.get_chartdata` with the same parameters
3. Test the new approach
4. Disable the old sensor when satisfied
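A minimal sketch of step 2 — the script name `fetch_price_chart` is a placeholder, and the parameters should mirror your previous sensor configuration:

```yaml
script:
  fetch_price_chart: # placeholder name, choose your own
    sequence:
      - service: tibber_prices.get_chartdata
        data:
          entry_id: YOUR_ENTRY_ID
          day: ["today", "tomorrow"]
          output_format: array_of_objects
        response_variable: chart_data
```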


@@ -1,250 +0,0 @@
# Automation Examples
> **Note:** This guide is under construction.
> **Tip:** For dashboard examples with dynamic icons and colors, see the **[Dynamic Icons Guide](dynamic-icons.md)** and **[Dynamic Icon Colors Guide](icon-colors.md)**.
## Table of Contents
- [Price-Based Automations](#price-based-automations)
- [Volatility-Aware Automations](#volatility-aware-automations)
- [Best Hour Detection](#best-hour-detection)
- [ApexCharts Cards](#apexcharts-cards)
---
> **Important Note:** The following examples are intended as templates to illustrate the logic. They are **not** suitable for direct copy & paste without adaptation.
>
> Please make sure you:
> 1. Replace the **Entity IDs** (e.g., `sensor.<home_name>_...`, `switch.pool_pump`) with the IDs of your own devices and sensors.
> 2. Adapt the logic to your specific devices (e.g., heat pump, EV, water boiler).
>
> These examples provide a good starting point but must be tailored to your individual Home Assistant setup.
>
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Price-Based Automations
Coming soon...
---
## Volatility-Aware Automations
These examples show how to create robust automations that only act when price differences are meaningful, avoiding unnecessary actions on days with flat prices.
### Use Case: Only Act on Meaningful Price Variations
On days with low price variation, the difference between "cheap" and "expensive" periods can be just a fraction of a cent. This automation charges a home battery only when the volatility is high enough to result in actual savings.
**Best Practice:** Instead of checking a numeric percentage, this automation checks the sensor's classified state. This makes the automation simpler and respects the volatility thresholds you have configured centrally in the integration's options.
```yaml
automation:
- alias: "Home Battery - Charge During Best Price (Moderate+ Volatility)"
description: "Charge home battery during Best Price periods, but only on days with meaningful price differences"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
# Best Practice: Check the classified volatility level.
# This ensures the automation respects the thresholds you set in the config options.
# We use the 'price_volatility' attribute for a language-independent check.
# 'low' means minimal savings, so we only run if it's NOT low.
- condition: template
value_template: >
{{ state_attr('sensor.<home_name>_today_s_price_volatility', 'price_volatility') != 'low' }}
# Only charge if battery has capacity
- condition: numeric_state
entity_id: sensor.home_battery_level
below: 90
action:
- service: switch.turn_on
target:
entity_id: switch.home_battery_charge
- service: notify.mobile_app
data:
message: >
Home battery charging started. Price: {{ states('sensor.<home_name>_current_electricity_price') }} {{ state_attr('sensor.<home_name>_current_electricity_price', 'unit_of_measurement') }}.
Today's volatility is {{ state_attr('sensor.<home_name>_today_s_price_volatility', 'price_volatility') }}.
```
**Why this works:**
- The automation only runs if volatility is `moderate`, `high`, or `very_high`.
- If you adjust your volatility thresholds in the future, this automation adapts automatically without any changes.
- It uses the `price_volatility` attribute, ensuring it works correctly regardless of your Home Assistant's display language.
### Use Case: Combined Volatility and Absolute Price Check
This is the most robust approach. It trusts the "Best Price" classification on volatile days but adds a backup absolute price check for low-volatility days. This handles situations where prices are globally low, even if the daily variation is minimal.
```yaml
automation:
- alias: "EV Charging - Smart Strategy"
description: "Charge EV using volatility-aware logic"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
# Check battery level
- condition: numeric_state
entity_id: sensor.ev_battery_level
below: 80
# Strategy: Moderate+ volatility OR the price is genuinely cheap
- condition: or
conditions:
# Path 1: Volatility is not 'low', so we trust the 'Best Price' period classification.
- condition: template
value_template: >
{{ state_attr('sensor.<home_name>_today_s_price_volatility', 'price_volatility') != 'low' }}
# Path 2: Volatility is low, but we charge anyway if the price is below an absolute cheapness threshold.
- condition: numeric_state
entity_id: sensor.<home_name>_current_electricity_price
below: 0.18
action:
- service: switch.turn_on
target:
entity_id: switch.ev_charger
- service: notify.mobile_app
data:
message: >
EV charging started. Price: {{ states('sensor.<home_name>_current_electricity_price') }} {{ state_attr('sensor.<home_name>_current_electricity_price', 'unit_of_measurement') }}.
Today's volatility is {{ state_attr('sensor.<home_name>_today_s_price_volatility', 'price_volatility') }}.
```
**Why this works:**
- On days with meaningful price swings, it charges during any `Best Price` period.
- On days with flat prices, it still charges if the price drops below your personal "cheap enough" threshold (e.g., 0.18 €/kWh or 18 ct/kWh).
- This gracefully handles midnight period flips, as the absolute price check will likely remain true if prices stay low.
### Use Case: Using the Period's Own Volatility Attribute
For maximum simplicity, you can use the attributes of the `best_price_period` sensor itself. It contains the volatility classification for the day the period belongs to. This is especially useful for periods that span across midnight.
```yaml
automation:
- alias: "Heat Pump - Smart Heating Using Period's Volatility"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
# Best Practice: Check if the period's own volatility attribute is not 'low'.
# This correctly handles periods that start today but end tomorrow.
- condition: template
value_template: >
{{ state_attr('binary_sensor.<home_name>_best_price_period', 'volatility') != 'low' }}
action:
- service: climate.set_temperature
target:
entity_id: climate.heat_pump
data:
temperature: 22 # Boost temperature during cheap period
```
**Why this works:**
- Each detected period has its own `volatility` attribute (`low`, `moderate`, etc.).
- This is the simplest way to check for meaningful savings for that specific period.
- The attribute name on the binary sensor is `volatility` (lowercase) and its value is also lowercase.
- It also contains other useful attributes like `price_mean`, `price_spread`, and the `price_coefficient_variation_%` for that period.
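As an illustrative sketch, an automation action can read those per-period attributes directly when the period starts (entity IDs are placeholders):

```yaml
action:
  - service: notify.mobile_app
    data:
      message: >
        Best price period started:
        mean {{ state_attr('binary_sensor.<home_name>_best_price_period', 'price_mean') }},
        spread {{ state_attr('binary_sensor.<home_name>_best_price_period', 'price_spread') }},
        volatility {{ state_attr('binary_sensor.<home_name>_best_price_period', 'volatility') }}.
```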
---
## Best Hour Detection
Coming soon...
---
## ApexCharts Cards
> ⚠️ **IMPORTANT:** The `tibber_prices.get_apexcharts_yaml` service generates a **basic example configuration** as a starting point. It is NOT a complete solution for all ApexCharts features.
>
> This integration is primarily a **data provider**. Due to technical limitations (segmented time periods, service API usage), many advanced ApexCharts features require manual customization or may not be compatible.
>
> **For advanced customization:** Use the `get_chartdata` service directly to build charts tailored to your specific needs. Community contributions with improved configurations are welcome!
The `tibber_prices.get_apexcharts_yaml` service generates basic ApexCharts card configuration examples for visualizing electricity prices.
### Prerequisites
**Required:**
- [ApexCharts Card](https://github.com/RomRider/apexcharts-card) - Install via HACS
**Optional (for rolling window mode):**
- [Config Template Card](https://github.com/iantrich/config-template-card) - Install via HACS
### Installation
1. Open HACS → Frontend
2. Search for "ApexCharts Card" and install
3. (Optional) Search for "Config Template Card" and install if you want rolling window mode
### Example: Fixed Day View
```yaml
# Generate configuration via automation/script
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: today # or "yesterday", "tomorrow"
level_type: rating_level # or "level" for 5-level view
response_variable: apexcharts_config
```
Then copy the generated YAML into your Lovelace dashboard.
### Example: Rolling 48h Window
For a dynamic chart that automatically adapts to data availability:
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: rolling_window # Or omit for same behavior (default)
level_type: rating_level
response_variable: apexcharts_config
```
**Behavior:**
- **When tomorrow's data is available** (typically after ~13:00): Shows today + tomorrow
- **When tomorrow's data is not yet available**: Shows yesterday + today
- **Fixed 48h span:** Always shows full 48 hours
**Auto-Zoom Variant:**
For progressive zoom-in throughout the day:
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: rolling_window_autozoom
level_type: rating_level
response_variable: apexcharts_config
```
- Same data loading as rolling window
- **Progressive zoom:** Graph span starts at ~26h in the morning and decreases to ~14h by midday
- **Updates every 15 minutes:** Always shows 2h lookback + remaining time until midnight
**Note:** Rolling window modes require Config Template Card to dynamically adjust the time range.
### Features
- Color-coded price levels/ratings (green = cheap, yellow = normal, red = expensive)
- Best price period highlighting (semi-transparent green overlay)
- Automatic NULL insertion for clean gaps
- Translated labels based on your Home Assistant language
- Interactive zoom and pan
- Live marker showing current time


@@ -1,307 +0,0 @@
# Chart Examples
This guide showcases the different chart configurations available through the `tibber_prices.get_apexcharts_yaml` action.
> **Quick Start:** Call the action with your desired parameters, copy the generated YAML, and paste it into your Lovelace dashboard!
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Overview
The integration can generate 4 different chart modes, each optimized for specific use cases:
| Mode | Description | Best For | Dependencies |
|------|-------------|----------|--------------|
| **Today** | Static 24h view of today's prices | Quick daily overview | ApexCharts Card |
| **Tomorrow** | Static 24h view of tomorrow's prices | Planning tomorrow | ApexCharts Card |
| **Rolling Window** | Dynamic 48h view (today+tomorrow or yesterday+today) | Always-current overview | ApexCharts + Config Template Card |
| **Rolling Window Auto-Zoom** | Dynamic view that zooms in as day progresses | Real-time focus on remaining day | ApexCharts + Config Template Card |
**Screenshots available for:**
- ✅ Today (static) - Representative of all fixed day views
- ✅ Rolling Window - Shows dynamic Y-axis scaling
- ✅ Rolling Window Auto-Zoom - Shows progressive zoom effect
## All Chart Modes
### 1. Today's Prices (Static)
**When to use:** Simple daily price overview, no dynamic updates needed.
**Dependencies:** ApexCharts Card only
**Generate:**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: today
level_type: rating_level
highlight_best_price: true
```
**Screenshot:**
![Today's Prices - Static 24h View](/img/charts/today.jpg)
**Key Features:**
- ✅ Color-coded price levels (LOW, NORMAL, HIGH)
- ✅ Best price period highlights (vertical bands)
- ✅ Static 24-hour view (00:00 - 23:59)
- ✅ Works with ApexCharts Card alone
**Note:** Tomorrow view (`day: tomorrow`) works identically to Today view, just showing tomorrow's data. All fixed day views (yesterday/today/tomorrow) use the same visualization approach.
---
### 2. Rolling 48h Window (Dynamic)
**When to use:** Always-current view that automatically switches between yesterday+today and today+tomorrow.
**Dependencies:** ApexCharts Card + Config Template Card
**Generate:**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
# Omit 'day' for rolling window
level_type: rating_level
highlight_best_price: true
```
**Screenshot:**
![Rolling 48h Window with Dynamic Y-Axis Scaling](/img/charts/rolling-window.jpg)
**Key Features:**
- ✅ **Dynamic Y-axis scaling** via `chart_metadata` sensor
- ✅ Automatic data selection: today+tomorrow (when available) or yesterday+today
- ✅ Always shows 48 hours of data
- ✅ Updates automatically when tomorrow's data arrives
- ✅ Color gradients for visual appeal
**How it works:**
- Before ~13:00: Shows yesterday + today
- After ~13:00: Shows today + tomorrow
- Y-axis automatically adjusts to data range for optimal visualization
---
### 3. Rolling Window Auto-Zoom (Dynamic)
**When to use:** Real-time focus on remaining day - progressively zooms in as day advances.
**Dependencies:** ApexCharts Card + Config Template Card
**Generate:**
```yaml
service: tibber_prices.get_apexcharts_yaml
data:
entry_id: YOUR_ENTRY_ID
day: rolling_window_autozoom
level_type: rating_level
highlight_best_price: true
```
**Screenshot:**
![Rolling Window Auto-Zoom - Progressive Zoom Effect](/img/charts/rolling-window-autozoom.jpg)
**Key Features:**
- ✅ **Progressive zoom:** Graph span decreases every 15 minutes
- ✅ **Dynamic Y-axis scaling** via `chart_metadata` sensor
- ✅ Always shows: 2 hours lookback + remaining time until midnight
- ✅ Perfect for real-time price monitoring
- ✅ Example: At 18:00, shows 16:00 → 00:00 (8h window)
**How it works:**
- 00:00: Shows full 48h window (same as rolling window)
- 06:00: Shows 04:00 → midnight (20h window)
- 12:00: Shows 10:00 → midnight (14h window)
- 18:00: Shows 16:00 → midnight (8h window)
- 23:45: Shows 21:45 → midnight (2.25h window)
This creates a "zooming in" effect that focuses on the most relevant remaining time.
---
## Comparison: Level Type Options
### Rating Level (3 series)
Based on **your personal price thresholds** (configured in Options Flow):
- **LOW** (Green): Below your "cheap" threshold
- **NORMAL** (Blue): Between thresholds
- **HIGH** (Red): Above your "expensive" threshold
**Best for:** Personal decision-making based on your budget
### Level (5 series)
Based on **absolute price ranges** (calculated from daily min/max):
- **VERY_CHEAP** (Dark Green): Bottom 20%
- **CHEAP** (Light Green): 20-40%
- **NORMAL** (Blue): 40-60%
- **EXPENSIVE** (Orange): 60-80%
- **VERY_EXPENSIVE** (Red): Top 20%
**Best for:** Objective price distribution visualization
---
## Dynamic Y-Axis Scaling
Rolling window modes (2 & 3) automatically integrate with the `chart_metadata` sensor for optimal visualization:
**Without chart_metadata sensor (disabled):**
```
┌─────────────────────┐
│                     │  ← Lots of empty space
│        ___          │
│    ___/   \___      │
│  _/           \_    │
├─────────────────────┤
0                   100 ct
```
**With chart_metadata sensor (enabled):**
```
┌─────────────────────┐
│        ___          │  ← Y-axis fitted to data
│    ___/   \___      │
│  _/           \_    │
├─────────────────────┤
18                  28 ct  ← Optimal range
```
**Requirements:**
- ✅ The `sensor.<home_name>_chart_metadata` must be **enabled** (it's enabled by default!)
- ✅ That's it! The generated YAML automatically uses the sensor for dynamic scaling
**Important:** Do NOT disable the `chart_metadata` sensor if you want optimal Y-axis scaling in rolling window modes!
**Note:** Fixed day views (`today`, `tomorrow`) use ApexCharts' built-in auto-scaling and don't require the metadata sensor.
---
## Best Price Period Highlights
When `highlight_best_price: true`, vertical bands overlay the chart showing detected best price periods:
**Example:**
```
Price
30│         ┌─────────┐      Normal prices
  │         │         │
25│   ▓▓▓▓▓▓│         │   ← Best price period (shaded)
  │   ▓▓▓▓▓▓│         │
20│───▓▓▓▓▓▓│─────────│
  │   ▓▓▓▓▓▓│
  └──────────────────────── Time
    06:00    12:00    18:00
```
**Features:**
- Automatic detection based on your configuration (see [Period Calculation Guide](period-calculation.md))
- Tooltip shows "Best Price Period" label
- Only appears when periods are configured and detected
- Can be disabled with `highlight_best_price: false`
---
## Prerequisites
### Required for All Modes
- **[ApexCharts Card](https://github.com/RomRider/apexcharts-card)**: Core visualization library
```text
# Install via HACS
HACS → Frontend → Search "ApexCharts Card" → Download
```
### Required for Rolling Window Modes Only
- **[Config Template Card](https://github.com/iantrich/config-template-card)**: Enables dynamic configuration
```text
# Install via HACS
HACS → Frontend → Search "Config Template Card" → Download
```
**Note:** Fixed day views (`today`, `tomorrow`) work with ApexCharts Card alone!
---
## Tips & Tricks
### Customizing Colors
Edit the `colors` array in the generated YAML:
```yaml
apex_config:
colors:
- "#00FF00" # Change LOW/VERY_CHEAP color
- "#0000FF" # Change NORMAL color
- "#FF0000" # Change HIGH/VERY_EXPENSIVE color
```
### Changing Chart Height
Add to the card configuration:
```yaml
type: custom:apexcharts-card
graph_span: 48h
header:
show: true
title: My Custom Title
apex_config:
chart:
height: 400 # Adjust height in pixels
```
### Combining with Other Cards
Wrap in a vertical stack for dashboard integration:
```yaml
type: vertical-stack
cards:
- type: entity
entity: sensor.<home_name>_current_electricity_price
- type: custom:apexcharts-card
# ... generated chart config
```
---
## Next Steps
- **[Actions Guide](actions.md)**: Complete documentation of `get_apexcharts_yaml` parameters
- **[Chart Metadata Sensor](sensors.md#chart-metadata)**: Learn about dynamic Y-axis scaling
- **[Period Calculation Guide](period-calculation.md)**: Configure best price period detection
---
## Screenshots
### Gallery
1. **Today View (Static)** - Representative of all fixed day views (yesterday/today/tomorrow)
![Today View](/img/charts/today.jpg)
2. **Rolling Window (Dynamic)** - Shows dynamic Y-axis scaling and 48h window
![Rolling Window](/img/charts/rolling-window.jpg)
3. **Rolling Window Auto-Zoom (Dynamic)** - Shows progressive zoom effect
![Rolling Window Auto-Zoom](/img/charts/rolling-window-autozoom.jpg)
**Note:** Tomorrow view is visually identical to Today view (same chart type, just different data).


@@ -1,67 +0,0 @@
# Core Concepts
Understanding the fundamental concepts behind the Tibber Prices integration.
## Price Intervals
The integration works with **quarter-hourly intervals** (15 minutes):
- Each interval has a start time (e.g., 14:00, 14:15, 14:30, 14:45)
- Prices are fixed for the entire interval
- Synchronized with Tibber's smart meter readings
## Price Ratings
Prices are automatically classified into **rating levels**:
- **VERY_CHEAP** - Exceptionally low prices (great for energy-intensive tasks)
- **CHEAP** - Below average prices (good for flexible loads)
- **NORMAL** - Around average prices (regular consumption)
- **EXPENSIVE** - Above average prices (reduce consumption if possible)
- **VERY_EXPENSIVE** - Exceptionally high prices (avoid heavy loads)
Rating is based on **statistical analysis** comparing current price to:
- Daily average
- Trailing 24-hour average
- User-configured thresholds
## Price Periods
**Best Price Periods** and **Peak Price Periods** are automatically detected time windows:
- **Best Price Period** - Consecutive intervals with favorable prices (for scheduling energy-heavy tasks)
- **Peak Price Period** - Time windows with highest prices (to avoid or shift consumption)
Periods can:
- Span multiple hours
- Cross midnight boundaries
- Adapt based on your configuration (flex, min_distance, rating levels)
See [Period Calculation](period-calculation.md) for detailed configuration.
## Statistical Analysis
The integration enriches every interval with context:
- **Trailing 24h Average** - Average price over the last 24 hours
- **Leading 24h Average** - Average price over the next 24 hours
- **Price Difference** - How much current price deviates from average (in %)
- **Volatility** - Price stability indicator (low, moderate, high, very high)
This helps you understand if current prices are exceptional or typical.
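As a loosely sketched example of using this enrichment in an automation condition — the attribute name `price_difference_pct` below is an illustrative assumption, not a confirmed name; check **Developer Tools → States** for the actual attributes on your installation:

```yaml
# Illustrative only: attribute name is an assumption,
# verify it under Developer Tools → States.
condition: template
value_template: >
  {{ state_attr('sensor.<home_name>_current_electricity_price',
                'price_difference_pct') | float(0) < -10 }}
```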
## Multi-Home Support
You can add multiple Tibber homes to track prices for:
- Different locations
- Different electricity contracts
- Comparison between regions
Each home gets its own set of sensors with unique entity IDs.
---
💡 **Next Steps:**
- [Glossary](glossary.md) - Detailed term definitions
- [Sensors](sensors.md) - How to use sensor data
- [Automation Examples](automation-examples.md) - Practical use cases


@@ -1,86 +0,0 @@
# Configuration
> **Note:** This guide is under construction. For detailed setup instructions, please refer to the [main README](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/README.md).
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Initial Setup
Coming soon...
## Configuration Options
### Average Sensor Display Settings
**Location:** Settings → Devices & Services → Tibber Prices → Configure → Step 6
The integration allows you to choose how average price sensors display their values. This setting affects all average sensors (daily, 24h rolling, hourly smoothed, and future forecasts).
#### Display Modes
**Median (Default):**
- Shows the "middle value" when all prices are sorted
- **Resistant to extreme spikes** - one expensive hour doesn't skew the result
- Best for understanding **typical price levels**
- Example: "What was the typical price today?"
**Arithmetic Mean:**
- Shows the mathematical average of all prices
- **Includes effect of spikes** - reflects actual cost if consuming evenly
- Best for **cost calculations and budgeting**
- Example: "What was my average cost per kWh today?"
#### Why This Matters
Consider a day with these hourly prices:
```
10, 12, 13, 15, 80 ct/kWh
```
- **Median = 13 ct/kWh** ← "Typical" price (middle value, ignores spike)
- **Mean = 26 ct/kWh** ← Average cost (spike pulls it up)
The median tells you the price was **typically** around 13 ct/kWh (4 out of 5 hours). The mean tells you if you consumed evenly, your **average cost** was 26 ct/kWh.
#### Automation-Friendly Design
**Both values are always available as attributes**, regardless of your display choice:
```yaml
# These attributes work regardless of display setting:
{{ state_attr('sensor.<home_name>_price_today', 'price_median') }}
{{ state_attr('sensor.<home_name>_price_today', 'price_mean') }}
```
This means:
- ✅ You can change the display anytime without breaking automations
- ✅ Automations can use both values for different purposes
- ✅ No need to create template sensors for the "other" value
#### Affected Sensors
This setting applies to:
- Daily average sensors (today, tomorrow)
- 24-hour rolling averages (trailing, leading)
- Hourly smoothed prices (current hour, next hour)
- Future forecast sensors (next 1h, 2h, 3h, ... 12h)
See the **[Sensors Guide](sensors.md#average-price-sensors)** for detailed examples.
#### Choosing Your Display
**Choose Median if:**
- 👥 You show prices to users ("What's today like?")
- 📊 You want dashboard values that represent typical conditions
- 🎯 You compare price levels across days
- 🔍 You analyze volatility (comparing typical vs extremes)
**Choose Mean if:**
- 💰 You calculate costs and budgets
- 📈 You forecast energy expenses
- 🧮 You need mathematical accuracy for financial planning
- 📊 You track actual average costs over time
**Pro Tip:** Most users prefer **Median** for displays (more intuitive), but use `price_mean` attribute in cost calculation automations.
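Following that tip, here is a minimal template-sensor sketch that estimates today's cost from the `price_mean` attribute. Note that `sensor.daily_energy_consumption` is a hypothetical placeholder — substitute your own daily energy (kWh) sensor:

```yaml
template:
  - sensor:
      - name: "Estimated Energy Cost Today"
        unit_of_measurement: "ct"
        state: >
          {% set mean = state_attr('sensor.<home_name>_price_today', 'price_mean') | float(0) %}
          {# Hypothetical daily consumption sensor - replace with your own #}
          {% set kwh = states('sensor.daily_energy_consumption') | float(0) %}
          {{ (mean * kwh) | round(2) }}
```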

# Dashboard Examples
Beautiful dashboard layouts using Tibber Prices sensors.
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Basic Price Display Card
Simple card showing current price with dynamic color:
```yaml
type: entities
title: Current Electricity Price
entities:
- entity: sensor.<home_name>_current_electricity_price
name: Current Price
icon: mdi:flash
- entity: sensor.<home_name>_current_price_rating
name: Price Rating
- entity: sensor.<home_name>_next_electricity_price
name: Next Price
```
## Period Status Cards
Show when best/peak price periods are active:
```yaml
type: horizontal-stack
cards:
- type: entity
entity: binary_sensor.<home_name>_best_price_period
name: Best Price Active
icon: mdi:currency-eur-off
- type: entity
entity: binary_sensor.<home_name>_peak_price_period
name: Peak Price Active
icon: mdi:alert
```
## Custom Button Card Examples
### Price Level Card
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Price Level
show_state: true
styles:
card:
- background: |
[[[
if (entity.state === 'LOWEST') return 'linear-gradient(135deg, #00ffa3 0%, #00d4ff 100%)';
if (entity.state === 'LOW') return 'linear-gradient(135deg, #4dddff 0%, #00ffa3 100%)';
if (entity.state === 'NORMAL') return 'linear-gradient(135deg, #ffd700 0%, #ffb800 100%)';
if (entity.state === 'HIGH') return 'linear-gradient(135deg, #ff8c00 0%, #ff6b00 100%)';
if (entity.state === 'HIGHEST') return 'linear-gradient(135deg, #ff4500 0%, #dc143c 100%)';
return 'var(--card-background-color)';
]]]
```
## Lovelace Layouts
### Compact Mobile View
Optimized for mobile devices:
```yaml
type: vertical-stack
cards:
- type: custom:mini-graph-card
entities:
- entity: sensor.<home_name>_current_electricity_price
name: Today's Prices
hours_to_show: 24
points_per_hour: 4
- type: glance
entities:
- entity: sensor.<home_name>_best_price_start
name: Best Period Starts
- entity: binary_sensor.<home_name>_best_price_period
name: Active Now
```
### Desktop Dashboard
Full-width layout for desktop:
```yaml
type: grid
columns: 3
square: false
cards:
- type: custom:apexcharts-card
# See chart-examples.md for ApexCharts config
- type: vertical-stack
cards:
- type: entities
title: Current Status
entities:
- sensor.<home_name>_current_electricity_price
- sensor.<home_name>_current_price_rating
- type: vertical-stack
cards:
- type: entities
title: Statistics
entities:
- sensor.<home_name>_price_today
- sensor.<home_name>_today_s_lowest_price
- sensor.<home_name>_today_s_highest_price
```
## Icon Color Integration
Using the `icon_color` attribute for dynamic colors:
```yaml
type: custom:mushroom-chips-card
chips:
- type: entity
entity: sensor.<home_name>_current_electricity_price
icon_color: "{{ state_attr('sensor.<home_name>_current_electricity_price', 'icon_color') }}"
- type: entity
entity: binary_sensor.<home_name>_best_price_period
icon_color: green
- type: entity
entity: binary_sensor.<home_name>_peak_price_period
icon_color: red
```
> **Note:** The entity chip's `icon_color` option may not evaluate Jinja templates in all Mushroom versions; if the color doesn't apply, use a template chip or `card_mod` instead.
See [Icon Colors](icon-colors.md) for detailed color mapping.
## Picture Elements Dashboard
Advanced interactive dashboard:
```yaml
type: picture-elements
image: /local/electricity_dashboard_bg.png
elements:
- type: state-label
entity: sensor.<home_name>_current_electricity_price
style:
top: 20%
left: 50%
font-size: 32px
font-weight: bold
- type: state-badge
entity: binary_sensor.<home_name>_best_price_period
style:
top: 40%
left: 30%
# Add more elements...
```
## Auto-Entities Dynamic Lists
Automatically list all price sensors:
```yaml
type: custom:auto-entities
card:
type: entities
title: All Price Sensors
filter:
include:
- entity_id: "sensor.<home_name>_*_price"
exclude:
- state: unavailable
sort:
method: state
numeric: true
```
---
💡 **Related:**
- [Chart Examples](chart-examples.md) - ApexCharts configurations
- [Dynamic Icons](dynamic-icons.md) - Icon behavior
- [Icon Colors](icon-colors.md) - Color attributes

# Dynamic Icons
Many sensors in the Tibber Prices integration automatically change their icon based on their current state. This provides instant visual feedback about price levels, trends, and periods without needing to read the actual values.
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## What are Dynamic Icons?
Instead of having a fixed icon, some sensors update their icon to reflect their current state:
- **Price level sensors** show different cash/money icons depending on whether prices are cheap or expensive
- **Price rating sensors** show thumbs up/down based on how the current price compares to average
- **Volatility sensors** show different chart types based on price stability
- **Binary sensors** show different icons when ON vs OFF (e.g., piggy bank when in best price period)
The icons change automatically - no configuration needed!
## How to Check if a Sensor Has Dynamic Icons
To see which icon a sensor currently uses:
1. Go to **Developer Tools****States** in Home Assistant
2. Search for your sensor (e.g., `sensor.<home_name>_current_price_level`)
3. Look at the icon displayed in the entity row
4. Change conditions (wait for price changes) and check if the icon updates
**Common sensor types with dynamic icons:**
- Price level sensors (e.g., `current_price_level`)
- Price rating sensors (e.g., `current_price_rating`)
- Volatility sensors (e.g., `today_s_price_volatility`)
- Binary sensors (e.g., `best_price_period`, `peak_price_period`)
## Using Dynamic Icons in Your Dashboard
### Standard Entity Cards
Dynamic icons work automatically in standard Home Assistant cards:
```yaml
type: entities
entities:
- entity: sensor.<home_name>_current_price_level
- entity: sensor.<home_name>_current_price_rating
- entity: sensor.<home_name>_today_s_price_volatility
- entity: binary_sensor.<home_name>_best_price_period
```
The icons will update automatically as the sensor states change.
### Glance Card
```yaml
type: glance
entities:
- entity: sensor.<home_name>_current_price_level
name: Price Level
- entity: sensor.<home_name>_current_price_rating
name: Rating
- entity: binary_sensor.<home_name>_best_price_period
name: Best Price
```
### Custom Button Card
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Current Price Level
show_state: true
# Icon updates automatically - no need to specify it!
```
### Mushroom Entity Card
```yaml
type: custom:mushroom-entity-card
entity: sensor.<home_name>_today_s_price_volatility
name: Price Volatility
# Icon changes automatically based on volatility level
```
## Overriding Dynamic Icons
If you want to use a fixed icon instead of the dynamic one:
### In Entity Cards
```yaml
type: entities
entities:
- entity: sensor.<home_name>_current_price_level
icon: mdi:lightning-bolt # Fixed icon, won't change
```
### In Custom Button Card
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_rating
name: Price Rating
icon: mdi:chart-line # Fixed icon overrides dynamic behavior
show_state: true
```
## Combining with Dynamic Colors
Dynamic icons work great together with dynamic colors! See the **[Dynamic Icon Colors Guide](icon-colors.md)** for examples.
**Example: Dynamic icon AND color**
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Current Price
show_state: true
# Icon changes automatically (cheap/expensive cash icons)
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
```
This gives you both:
- ✅ Different icon based on state (e.g., cash-plus when cheap, cash-remove when expensive)
- ✅ Different color based on state (e.g., green when cheap, red when expensive)
## Icon Behavior Details
### Binary Sensors
Binary sensors may have different icons for different states:
- **ON state**: Typically shows an active/alert icon
- **OFF state**: May show different icons depending on whether future periods exist
- Has upcoming periods: Timer/waiting icon
- No upcoming periods: Sleep/inactive icon
**Example:** `binary_sensor.<home_name>_best_price_period`
- When ON: Shows a piggy bank (good time to save money)
- When OFF with future periods: Shows a timer (waiting for next period)
- When OFF without future periods: Shows a sleep icon (no periods expected soon)
### State-Based Icons
Sensors with text states (like `cheap`, `normal`, `expensive`) typically show icons that match the meaning:
- Lower/better values → More positive icons
- Higher/worse values → More cautionary icons
- Normal/average values → Neutral icons
The exact icons are chosen to be intuitive and meaningful in the Home Assistant ecosystem.
## Troubleshooting
**Icon not changing:**
- Wait for the sensor state to actually change (prices update every 15 minutes)
- Check in Developer Tools → States that the sensor state is changing
- If you've set a custom icon in your card, it will override the dynamic icon
**Want to see the icon code:**
- Look at the entity in Developer Tools → States
- The `icon` attribute shows the current Material Design icon code (e.g., `mdi:cash-plus`)
**Want different icons:**
- You can override icons in your card configuration (see examples above)
- Or create a template sensor with your own icon logic
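As a sketch of that last option, a template sensor can mirror the price level state and apply its own icon mapping. The level names below follow the examples used elsewhere in these docs — adjust them to your sensor's actual states:

```yaml
template:
  - sensor:
      - name: "Price Level (Custom Icon)"
        state: "{{ states('sensor.<home_name>_current_price_level') }}"
        icon: >-
          {% set level = states('sensor.<home_name>_current_price_level') %}
          {% if level in ['very_cheap', 'cheap'] %}
            mdi:arrow-down-bold-circle
          {% elif level in ['expensive', 'very_expensive'] %}
            mdi:arrow-up-bold-circle
          {% else %}
            mdi:minus-circle
          {% endif %}
```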
## See Also
- [Dynamic Icon Colors](icon-colors.md) - Color your icons based on state
- [Sensors Reference](sensors.md) - Complete list of available sensors
- [Automation Examples](automation-examples.md) - Use dynamic icons in automations

# FAQ - Frequently Asked Questions
Common questions about the Tibber Prices integration.
## General Questions
### Why don't I see tomorrow's prices yet?
Tomorrow's prices are published by Tibber around **13:00 local time (CET/CEST)** — that is, 12:00 UTC in winter and 11:00 UTC in summer.
- **Before publication**: Sensors show `unavailable` or use today's data
- **After publication**: Integration automatically fetches new data within 15 minutes
- **No manual refresh needed** - polling happens automatically
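If you want a notification when tomorrow's data lands, a simple state-change automation works. The tomorrow-sensor entity ID and the notify service below are placeholders — adjust them to your setup:

```yaml
automation:
  - alias: "Notify when tomorrow's prices are published"
    trigger:
      - platform: state
        entity_id: sensor.<home_name>_price_tomorrow  # placeholder tomorrow-average sensor
        from: "unavailable"
    action:
      - service: notify.mobile_app_my_phone  # placeholder notify service
        data:
          message: "Tomorrow's electricity prices are now available."
```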
### How often does the integration update data?
- **API Polling**: Every 15 minutes
- **Sensor Updates**: On quarter-hour boundaries (00, 15, 30, 45 minutes)
- **Cache**: Price data cached until midnight (reduces API load)
### Can I use multiple Tibber homes?
Yes! Use the **"Add another home"** option:
1. Settings → Devices & Services → Tibber Prices
2. Click "Configure" → "Add another home"
3. Select additional home from dropdown
4. Each home gets separate sensors with unique entity IDs
### Does this work without a Tibber subscription?
No, you need:
- Active Tibber electricity contract
- API token from [developer.tibber.com](https://developer.tibber.com/)
The integration is free, but requires Tibber as your electricity provider.
## Configuration Questions
### What are good values for price thresholds?
**Default values work for most users:**
- High Price Threshold: 30% above average
- Low Price Threshold: 15% below average
**Adjust if:**
- You're in a market with high volatility → increase thresholds
- You want more sensitive ratings → decrease thresholds
- Seasonal changes → review every few months
### How do I optimize Best Price Period detection?
**Key parameters:**
- **Flex**: 15-20% is optimal (default 15%)
- **Min Distance**: 5-10% recommended (default 5%)
- **Rating Levels**: Start with "CHEAP + VERY_CHEAP" (default)
- **Relaxation**: Keep enabled (helps find periods on expensive days)
See [Period Calculation](period-calculation.md) for detailed tuning guide.
### Why do I sometimes only get 1 period instead of 2?
This happens on **high-price days** when:
- Few intervals meet your criteria
- Relaxation is disabled
- Flex is too low
- Min Distance is too strict
**Solutions:**
1. Enable relaxation (recommended)
2. Increase flex to 20-25%
3. Reduce min_distance to 3-5%
4. Add more rating levels (include "NORMAL")
## Troubleshooting
### Sensors show "unavailable"
**Common causes:**
1. **API Token invalid** → Check token at developer.tibber.com
2. **No internet connection** → Check HA network
3. **Tibber API down** → Check [status.tibber.com](https://status.tibber.com)
4. **Integration not loaded** → Restart Home Assistant
### Best Price Period is ON all day
This means **all intervals meet your criteria** (very cheap day!):
- Not an error - enjoy the low prices!
- Consider tightening filters (lower flex, higher min_distance)
- Or add automation to only run during first detected period
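One way to sketch the "first period only" idea is a helper toggle that latches after the first run. The `input_boolean` and the switch below are hypothetical helpers you would create yourself, plus a small extra automation (not shown) to reset the toggle at midnight:

```yaml
automation:
  - alias: "Washer during first best-price period only"
    trigger:
      - platform: state
        entity_id: binary_sensor.<home_name>_best_price_period
        to: "on"
    condition:
      - condition: state
        entity_id: input_boolean.washer_ran_today  # hypothetical helper, reset daily
        state: "off"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.washing_machine  # hypothetical switch
      - service: input_boolean.turn_on
        target:
          entity_id: input_boolean.washer_ran_today
```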
### Prices are in wrong currency or wrong units
**Currency** is determined by your Tibber subscription (cannot be changed).
**Display mode** (base vs. subunit) is configurable:
- Configure in: `Settings > Devices & Services > Tibber Prices > Configure`
- Options:
- **Base currency**: €/kWh, kr/kWh (decimal values like 0.25)
- **Subunit**: ct/kWh, øre/kWh (larger values like 25.00)
- Smart defaults: EUR → subunit, NOK/SEK/DKK → base currency
If you see unexpected units, check your configuration in the integration options.
### Tomorrow data not appearing at all
**Check:**
1. Your Tibber home has hourly price contract (not fixed price)
2. API token has correct permissions
3. Integration logs for API errors (`/config/home-assistant.log`)
4. Tibber actually published data (check Tibber app)
## Automation Questions
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
### How do I run dishwasher during cheap period?
```yaml
automation:
- alias: "Dishwasher during Best Price"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
- condition: time
after: "20:00:00" # Only start after 8 PM
action:
- service: switch.turn_on
target:
entity_id: switch.dishwasher
```
See [Automation Examples](automation-examples.md) for more recipes.
### Can I avoid peak prices automatically?
Yes! Use Peak Price Period binary sensor:
```yaml
automation:
- alias: "Disable charging during peak prices"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_peak_price_period
to: "on"
action:
- service: switch.turn_off
target:
entity_id: switch.ev_charger
```
---
💡 **Still need help?**
- [Troubleshooting Guide](troubleshooting.md)
- [GitHub Issues](https://github.com/jpawlowski/hass.tibber_prices/issues)

---
comments: false
---
# Glossary
Quick reference for terms used throughout the documentation.
## A
**API Token**
: Your personal access key from Tibber. Get it at [developer.tibber.com](https://developer.tibber.com/settings/access-token).
**Attributes**
: Additional data attached to each sensor (timestamps, statistics, metadata). Access via `state_attr()` in templates.
## B
**Best Price Period**
: Automatically detected time window with favorable electricity prices. Ideal for scheduling dishwashers, heat pumps, EV charging.
**Binary Sensor**
: Sensor with ON/OFF state (e.g., "Best Price Period Active"). Used in automations as triggers.
## C
**Currency Display Mode**
: Configurable setting for how prices are shown. Choose base currency (€, kr) or subunit (ct, øre). Smart defaults apply: EUR → subunit, NOK/SEK/DKK → base.
**Coordinator**
: Home Assistant component managing data fetching and updates. Polls Tibber API every 15 minutes.
## D
**Dynamic Icons**
: Icons that change based on sensor state (e.g., battery icons showing price level). See [Dynamic Icons](dynamic-icons.md).
## F
**Flex (Flexibility)**
: Configuration parameter controlling how strict period detection is. Higher flex = more periods found, but potentially at higher prices.
## I
**Interval**
: 15-minute time slot with fixed electricity price (00:00-00:15, 00:15-00:30, etc.).
## L
**Level**
: Price classification within a day (LOWEST, LOW, NORMAL, HIGH, HIGHEST). Based on daily min/max prices.
## M
**Min Distance**
: Threshold requiring periods to be at least X% below daily average. Prevents detecting "cheap" periods during expensive days.
## P
**Peak Price Period**
: Time window with highest electricity prices. Use to avoid heavy consumption.
**Price Info**
: Complete dataset with all intervals (yesterday, today, tomorrow) including enriched statistics.
## Q
**Quarter-Hourly**
: 15-minute precision (4 intervals per hour, 96 per day).
## R
**Rating**
: Statistical price classification (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE). Based on 24h averages and thresholds.
**Relaxation**
: Automatic loosening of period detection filters when target period count isn't met. Ensures you always get usable periods.
## S
**State**
: Current value of a sensor (e.g., price in ct/kWh, "ON"/"OFF" for binary sensors).
**State Class**
: Home Assistant classification for long-term statistics (MEASUREMENT, TOTAL, or none).
## T
**Trailing Average**
: Average price over the past 24 hours from current interval.
**Leading Average**
: Average price over the next 24 hours from current interval.
## V
**Volatility**
: Measure of price stability (LOW, MEDIUM, HIGH). High volatility = large price swings = good for timing optimization.
---
💡 **See Also:**
- [Core Concepts](concepts.md) - In-depth explanations
- [Sensors](sensors.md) - How sensors use these concepts
- [Period Calculation](period-calculation.md) - Deep dive into period detection

---
comments: false
---
# Dynamic Icon Colors
Many sensors in the Tibber Prices integration provide an `icon_color` attribute that allows you to dynamically color elements in your dashboard based on the sensor's state. This is particularly useful for visual dashboards where you want instant recognition of price levels or states.
**What makes icon_color special:** Instead of writing complex if/else logic to interpret the sensor state, you can simply use the `icon_color` value directly - it already contains the appropriate CSS color variable for the current state.
> **Related:** Many sensors also automatically change their **icon** based on state. See the **[Dynamic Icons Guide](dynamic-icons.md)** for details.
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## What is icon_color?
The `icon_color` attribute contains a **CSS variable name** (not a direct color value) that changes based on the sensor's state. For example:
- **Price level sensors**: `var(--success-color)` for cheap, `var(--error-color)` for expensive
- **Binary sensors**: `var(--success-color)` when in best price period, `var(--error-color)` during peak price
- **Volatility**: `var(--success-color)` for low volatility, `var(--error-color)` for very high
### Why CSS Variables?
Using CSS variables like `var(--success-color)` instead of hardcoded colors (like `#00ff00`) has important advantages:
- ✅ **Automatic theme adaptation** - Colors change with light/dark mode
- ✅ **Consistent with your theme** - Uses your theme's color scheme
- ✅ **Future-proof** - Works with custom themes and future HA updates
You can use the `icon_color` attribute directly in your card templates, or interpret the sensor state yourself if you prefer custom colors (see examples below).
## Which Sensors Support icon_color?
Many sensors provide the `icon_color` attribute for dynamic styling. To see if a sensor has this attribute:
1. Go to **Developer Tools****States** in Home Assistant
2. Search for your sensor (e.g., `sensor.<home_name>_current_price_level`)
3. Look for `icon_color` in the attributes section
**Common sensor types with icon_color:**
- Price level sensors (e.g., `current_price_level`)
- Price rating sensors (e.g., `current_price_rating`)
- Volatility sensors (e.g., `today_s_price_volatility`)
- Price trend sensors (e.g., `price_trend_3h`)
- Binary sensors (e.g., `best_price_period`, `peak_price_period`)
- Timing sensors (e.g., `best_price_time_until_start`, `best_price_progress`)
The colors adapt to the sensor's state - cheaper prices typically show green, expensive prices red, and neutral states gray.
## When to Use icon_color vs. State Value
**Use `icon_color` when:**
- ✅ You can apply the CSS variable directly (icons, text colors, borders)
- ✅ Your card supports CSS variable substitution
- ✅ You want simple, clean code without if/else logic
**Use the state value directly when:**
- ⚠️ You need to convert the color (e.g., CSS variable → RGBA with transparency)
- ⚠️ You need different colors than what `icon_color` provides
- ⚠️ You're building complex conditional logic anyway
**Example of when NOT to use icon_color:**
```yaml
# ❌ DON'T: Converting icon_color requires if/else anyway
card:
- background: |
[[[
const color = entity.attributes.icon_color;
if (color === 'var(--success-color)') return 'rgba(76, 175, 80, 0.1)';
if (color === 'var(--error-color)') return 'rgba(244, 67, 54, 0.1)';
// ... more if statements
]]]
# ✅ DO: Interpret state directly if you need custom logic
card:
- background: |
[[[
const level = entity.state;
if (level === 'very_cheap' || level === 'cheap') return 'rgba(76, 175, 80, 0.1)';
if (level === 'very_expensive' || level === 'expensive') return 'rgba(244, 67, 54, 0.1)';
return 'transparent';
]]]
```
The advantage of `icon_color` is simplicity - if you need complex logic, you lose that advantage.
## How to Use icon_color in Your Dashboard
### Method 1: Custom Button Card (Recommended)
The [custom:button-card](https://github.com/custom-cards/button-card) from HACS supports dynamic icon colors.
**Example: Icon color only**
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Current Price Level
show_state: true
icon: mdi:cash
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
```
**Example: Icon AND state value with same color**
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Current Price Level
show_state: true
icon: mdi:cash
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
state:
- color: |
[[[
return entity.attributes.icon_color || 'var(--primary-text-color)';
]]]
- font-weight: bold
```
### Method 2: Entities Card with card_mod
Use Home Assistant's built-in entities card with card_mod for icon and state colors:
```yaml
type: entities
entities:
- entity: sensor.<home_name>_current_price_level
card_mod:
style:
hui-generic-entity-row:
$: |
state-badge {
color: {{ state_attr('sensor.<home_name>_current_price_level', 'icon_color') }} !important;
}
.info {
color: {{ state_attr('sensor.<home_name>_current_price_level', 'icon_color') }} !important;
}
```
### Method 3: Mushroom Cards
The [Mushroom cards](https://github.com/piitaya/lovelace-mushroom) support card_mod for icon and text colors:
**Icon color only:**
```yaml
type: custom:mushroom-entity-card
entity: binary_sensor.<home_name>_best_price_period
name: Best Price Period
icon: mdi:piggy-bank
card_mod:
style: |
ha-card {
--card-mod-icon-color: {{ state_attr('binary_sensor.<home_name>_best_price_period', 'icon_color') }};
}
```
**Icon and state value:**
```yaml
type: custom:mushroom-entity-card
entity: sensor.<home_name>_current_price_level
name: Price Level
card_mod:
style: |
ha-card {
--card-mod-icon-color: {{ state_attr('sensor.<home_name>_current_price_level', 'icon_color') }};
--primary-text-color: {{ state_attr('sensor.<home_name>_current_price_level', 'icon_color') }};
}
```
### Method 4: Glance Card with card_mod
Combine multiple sensors with dynamic colors:
```yaml
type: glance
entities:
- entity: sensor.<home_name>_current_price_level
- entity: sensor.<home_name>_today_s_price_volatility
- entity: binary_sensor.<home_name>_best_price_period
card_mod:
style: |
ha-card div.entity:nth-child(1) state-badge {
color: {{ state_attr('sensor.<home_name>_current_price_level', 'icon_color') }} !important;
}
ha-card div.entity:nth-child(2) state-badge {
color: {{ state_attr('sensor.<home_name>_today_s_price_volatility', 'icon_color') }} !important;
}
ha-card div.entity:nth-child(3) state-badge {
color: {{ state_attr('binary_sensor.<home_name>_best_price_period', 'icon_color') }} !important;
}
```
## Complete Dashboard Example
Here's a complete example combining multiple sensors with dynamic colors:
```yaml
type: vertical-stack
cards:
# Current price status
- type: horizontal-stack
cards:
- type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Price Level
show_state: true
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
- type: custom:button-card
entity: sensor.<home_name>_current_price_rating
name: Price Rating
show_state: true
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
# Binary sensors for periods
- type: horizontal-stack
cards:
- type: custom:button-card
entity: binary_sensor.<home_name>_best_price_period
name: Best Price Period
show_state: true
icon: mdi:piggy-bank
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
- type: custom:button-card
entity: binary_sensor.<home_name>_peak_price_period
name: Peak Price Period
show_state: true
icon: mdi:alert-circle
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
# Volatility and trends
- type: horizontal-stack
cards:
- type: custom:button-card
entity: sensor.<home_name>_today_s_price_volatility
name: Volatility
show_state: true
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
- type: custom:button-card
entity: sensor.<home_name>_price_trend_3h
name: Next 3h Trend
show_state: true
styles:
icon:
- color: |
[[[
return entity.attributes.icon_color || 'var(--state-icon-color)';
]]]
```
## CSS Color Variables
The integration uses Home Assistant's standard CSS variables for theme compatibility:
- `var(--success-color)` - Green (good/cheap/low)
- `var(--info-color)` - Blue (informational)
- `var(--warning-color)` - Orange (caution/expensive)
- `var(--error-color)` - Red (alert/very expensive/high)
- `var(--state-icon-color)` - Gray (neutral/normal)
- `var(--disabled-color)` - Light gray (no data/inactive)
These automatically adapt to your theme's light/dark mode and custom color schemes.
### Using Custom Colors
If you want to override the theme colors with your own, you have two options:
#### Option 1: Use icon_color but Override in Your Theme
Define custom colors in your theme configuration (`themes.yaml`):
```yaml
my_custom_theme:
# Override standard variables
success-color: "#00C853" # Custom green
error-color: "#D32F2F" # Custom red
warning-color: "#F57C00" # Custom orange
info-color: "#0288D1" # Custom blue
```
The `icon_color` attribute will automatically use your custom theme colors.
#### Option 2: Interpret State Value Directly
Instead of using `icon_color`, read the sensor state and apply your own colors:
**Example: Custom colors for price level**
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_level
name: Current Price Level
show_state: true
icon: mdi:cash
styles:
icon:
- color: |
[[[
const level = entity.state;
if (level === 'very_cheap') return '#00E676'; // Bright green
if (level === 'cheap') return '#66BB6A'; // Light green
if (level === 'normal') return '#9E9E9E'; // Gray
if (level === 'expensive') return '#FF9800'; // Orange
if (level === 'very_expensive') return '#F44336'; // Red
return 'var(--state-icon-color)'; // Fallback
]]]
```
**Example: Custom colors for binary sensor**
```yaml
type: custom:button-card
entity: binary_sensor.<home_name>_best_price_period
name: Best Price Period
show_state: true
icon: mdi:piggy-bank
styles:
icon:
- color: |
[[[
// Use state directly, not icon_color
return entity.state === 'on' ? '#4CAF50' : '#9E9E9E';
]]]
card:
- background: |
[[[
return entity.state === 'on' ? 'rgba(76, 175, 80, 0.1)' : 'transparent';
]]]
```
**Example: Custom colors for volatility**
```yaml
type: custom:button-card
entity: sensor.<home_name>_today_s_price_volatility
name: Volatility Today
show_state: true
styles:
icon:
- color: |
[[[
const volatility = entity.state;
if (volatility === 'low') return '#4CAF50'; // Green
if (volatility === 'moderate') return '#2196F3'; // Blue
if (volatility === 'high') return '#FF9800'; // Orange
if (volatility === 'very_high') return '#F44336'; // Red
return 'var(--state-icon-color)';
]]]
```
**Example: Custom colors for price rating**
```yaml
type: custom:button-card
entity: sensor.<home_name>_current_price_rating
name: Price Rating
show_state: true
styles:
icon:
- color: |
[[[
const rating = entity.state;
if (rating === 'low') return '#00C853'; // Dark green
if (rating === 'normal') return '#78909C'; // Blue-gray
if (rating === 'high') return '#D32F2F'; // Dark red
return 'var(--state-icon-color)';
]]]
```
### Which Approach Should You Use?
| Use Case | Recommended Approach |
| ------------------------------------- | ---------------------------------- |
| Want theme-consistent colors | ✅ Use `icon_color` directly |
| Want light/dark mode support | ✅ Use `icon_color` directly |
| Want custom theme colors | ✅ Override CSS variables in theme |
| Want specific hardcoded colors | ⚠️ Interpret state value directly |
| Multiple themes with different colors | ✅ Use `icon_color` directly |
**Recommendation:** Use `icon_color` whenever possible for better theme integration. Only interpret the state directly if you need very specific color values that shouldn't change with themes.
## Troubleshooting
**Icons not changing color:**
- Make sure you're using a card that supports custom styling (like custom:button-card or card_mod)
- Check that the entity actually has the `icon_color` attribute (inspect in Developer Tools → States)
- Verify your Home Assistant theme supports the CSS variables
**Colors look wrong:**
- The colors are theme-dependent. Try switching themes to see if they appear correctly
- Some custom themes may override the standard CSS variables with unexpected colors
**Want different colors?**
- You can override the colors in your theme configuration
- Or use conditional logic in your card templates based on the state value instead of `icon_color`
## See Also
- [Sensors Reference](sensors.md) - Complete list of available sensors
- [Automation Examples](automation-examples.md) - Use color-coded sensors in automations
- [Configuration Guide](configuration.md) - Adjust thresholds for price levels and ratings


@ -1,15 +0,0 @@
# Installation
> **Note:** This guide is under construction. For now, please refer to the [main README](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/README.md) for installation instructions.
## HACS Installation (Recommended)
Coming soon...
## Manual Installation
Coming soon...
## Configuration
Coming soon...


@ -1,59 +0,0 @@
---
comments: false
---
# User Documentation
Welcome to the **Tibber Prices custom integration for Home Assistant**! This community-developed integration enhances your Home Assistant installation with detailed electricity price data from Tibber, featuring quarter-hourly precision, statistical analysis, and intelligent ratings.
:::info Not affiliated with Tibber
This is an independent, community-maintained custom integration. It is **not** an official Tibber product and is **not** affiliated with or endorsed by Tibber AS.
:::
## 📚 Documentation
- **[Installation](installation.md)** - How to install via HACS and configure the integration
- **[Configuration](configuration.md)** - Setting up your Tibber API token and price thresholds
- **[Period Calculation](period-calculation.md)** - How Best/Peak Price periods are calculated and configured
- **[Sensors](sensors.md)** - Available sensors, their states, and attributes
- **[Dynamic Icons](dynamic-icons.md)** - State-based automatic icon changes
- **[Dynamic Icon Colors](icon-colors.md)** - Using icon_color attribute for color-coded dashboards
- **[Actions](actions.md)** - Custom actions (service endpoints) and how to use them
- **[Chart Examples](chart-examples.md)** - ✨ ApexCharts visualizations with screenshots
- **[Automation Examples](automation-examples.md)** - Ready-to-use automation recipes
- **[Troubleshooting](troubleshooting.md)** - Common issues and solutions
## 🚀 Quick Start
1. **Install via HACS** (add as custom repository)
2. **Add Integration** in Home Assistant → Settings → Devices & Services
3. **Enter Tibber API Token** (get yours at [developer.tibber.com](https://developer.tibber.com/))
4. **Configure Price Thresholds** (optional, defaults work for most users)
5. **Start Using Sensors** in automations, dashboards, and scripts!
## ✨ Key Features
- **Quarter-hourly precision** - 15-minute intervals for accurate price tracking
- **Statistical analysis** - Trailing/leading 24h averages for context
- **Price ratings** - LOW/NORMAL/HIGH classification based on your thresholds
- **Best/Peak hour detection** - Automatic detection of cheapest/peak periods with configurable filters ([learn how](period-calculation.md))
- **Beautiful ApexCharts** - Auto-generated chart configurations with dynamic Y-axis scaling ([see examples](chart-examples.md))
- **Chart metadata sensor** - Dynamic chart configuration for optimal visualization
- **Flexible currency display** - Choose base currency (€, kr) or subunit (ct, øre) with smart defaults per currency
## 🔗 Useful Links
- [GitHub Repository](https://github.com/jpawlowski/hass.tibber_prices)
- [Issue Tracker](https://github.com/jpawlowski/hass.tibber_prices/issues)
- [Release Notes](https://github.com/jpawlowski/hass.tibber_prices/releases)
- [Home Assistant Community](https://community.home-assistant.io/)
## 🤝 Need Help?
- Check the [Troubleshooting Guide](troubleshooting.md)
- Search [existing issues](https://github.com/jpawlowski/hass.tibber_prices/issues)
- Open a [new issue](https://github.com/jpawlowski/hass.tibber_prices/issues/new) if needed
---
**Note:** These guides are for end users. If you want to contribute to development, see the [Developer Documentation](https://jpawlowski.github.io/hass.tibber_prices/developer/).


@ -1,705 +0,0 @@
# Period Calculation
Learn how Best Price and Peak Price periods work, and how to configure them for your needs.
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Table of Contents
- [Quick Start](#quick-start)
- [How It Works](#how-it-works)
- [Configuration Guide](#configuration-guide)
- [Understanding Relaxation](#understanding-relaxation)
- [Common Scenarios](#common-scenarios)
- [Troubleshooting](#troubleshooting)
- [No Periods Found](#no-periods-found)
- [Periods Split Into Small Pieces](#periods-split-into-small-pieces)
- [Midnight Price Classification Changes](#midnight-price-classification-changes)
- [Advanced Topics](#advanced-topics)
---
## Quick Start
### What Are Price Periods?
The integration finds time windows when electricity is especially **cheap** (Best Price) or **expensive** (Peak Price):
- **Best Price Periods** 🟢 - When to run your dishwasher, charge your EV, or heat water
- **Peak Price Periods** 🔴 - When to reduce consumption or defer non-essential loads
### Default Behavior
Out of the box, the integration:
1. **Best Price**: Finds cheapest 1-hour+ windows that are at least 5% below the daily average
2. **Peak Price**: Finds most expensive 30-minute+ windows that are at least 5% above the daily average
3. **Relaxation**: Automatically loosens filters if not enough periods are found
**Most users don't need to change anything!** The defaults work well for typical use cases.
<details>
<summary>Why do Best Price and Peak Price have different defaults?</summary>
The integration sets different **initial defaults** because the features serve different purposes:
**Best Price (60 min, 15% flex):**
- Longer duration ensures appliances can complete their cycles
- Stricter flex (15%) focuses on genuinely cheap times
- Use case: Running dishwasher, EV charging, water heating
**Peak Price (30 min, 20% flex):**
- Shorter duration acceptable for early warnings
- More flexible (20%) catches price spikes earlier
- Use case: Alerting to expensive periods, even brief ones
**You can adjust all these values** in the configuration if the defaults don't fit your use case. The asymmetric defaults simply provide good starting points for typical scenarios.
</details>
### Example Timeline
```
00:00 ████████████████ Best Price Period (cheap prices)
04:00 ░░░░░░░░░░░░░░░░ Normal
08:00 ████████████████ Peak Price Period (expensive prices)
12:00 ░░░░░░░░░░░░░░░░ Normal
16:00 ████████████████ Peak Price Period (expensive prices)
20:00 ████████████████ Best Price Period (cheap prices)
```
---
## How It Works
### The Basic Idea
Each day, the integration analyzes all 96 quarter-hourly price intervals and identifies **continuous time ranges** that meet specific criteria.
Think of it like this:
1. **Find potential windows** - Times close to the daily MIN (Best Price) or MAX (Peak Price)
2. **Filter by quality** - Ensure they're meaningfully different from average
3. **Check duration** - Must be long enough to be useful
4. **Apply preferences** - Optional: only show stable prices, avoid mediocre times
### Step-by-Step Process
#### 1. Define the Search Range (Flexibility)
**Best Price:** How much MORE than the daily minimum can a price be?
```
Daily MIN: 20 ct/kWh
Flexibility: 15% (default)
→ Search for times ≤ 23 ct/kWh (20 + 15%)
```
**Peak Price:** How much LESS than the daily maximum can a price be?
```
Daily MAX: 40 ct/kWh
Flexibility: -15% (default)
→ Search for times ≥ 34 ct/kWh (40 - 15%)
```
**Why flexibility?** Prices rarely stay at exactly MIN/MAX. Flexibility lets you capture realistic time windows.
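The threshold arithmetic can be sketched in a few lines (illustrative Python, not the integration's actual code; the function names are made up for this example):

```python
def best_price_threshold(daily_min: float, flex_pct: float) -> float:
    """Upper bound for Best Price candidates: daily MIN plus flex percent."""
    return round(daily_min * (1 + flex_pct / 100), 4)

def peak_price_threshold(daily_max: float, flex_pct: float) -> float:
    """Lower bound for Peak Price candidates: daily MAX minus flex percent.

    flex_pct is a positive magnitude here; the config expresses it as -15.
    """
    return round(daily_max * (1 - flex_pct / 100), 4)

print(best_price_threshold(20, 15))  # 23.0 -> search for times <= 23 ct/kWh
print(peak_price_threshold(40, 15))  # 34.0 -> search for times >= 34 ct/kWh
```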
#### 2. Ensure Quality (Distance from Average)
Periods must be meaningfully different from the daily average:
```
Daily AVG: 30 ct/kWh
Minimum distance: 5% (default)
Best Price: Must be ≤ 28.5 ct/kWh (30 - 5%)
Peak Price: Must be ≥ 31.5 ct/kWh (30 + 5%)
```
**Why?** This prevents marking mediocre times as "best" just because they're slightly below average.
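The distance filter uses the same kind of arithmetic, just anchored on the daily average. A small sketch with the example's numbers (illustrative only):

```python
def distance_bounds(daily_avg: float, min_distance_pct: float) -> tuple[float, float]:
    """Return (best_price_ceiling, peak_price_floor) around the daily average."""
    best_ceiling = round(daily_avg * (1 - min_distance_pct / 100), 4)
    peak_floor = round(daily_avg * (1 + min_distance_pct / 100), 4)
    return best_ceiling, peak_floor

# Daily AVG 30 ct/kWh with the default 5% minimum distance
print(distance_bounds(30, 5))  # (28.5, 31.5)
```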
#### 3. Check Duration
Periods must be long enough to be practical:
```
Default: 60 minutes minimum
45-minute period → Discarded
90-minute period → Kept ✓
```
#### 4. Apply Optional Filters
You can optionally require:
- **Absolute quality** (level filter) - "Only show if prices are CHEAP/EXPENSIVE (not just below/above average)"
#### 5. Automatic Price Spike Smoothing
Isolated price spikes are automatically detected and smoothed to prevent unnecessary period fragmentation:
```
Original prices: 18, 19, 35, 20, 19 ct ← 35 ct is an isolated outlier
Smoothed: 18, 19, 19, 20, 19 ct ← Spike replaced with trend prediction
Result: Continuous period 00:00-01:15 instead of split periods
```
**Important:**
- Original prices are always preserved (min/max/avg show real values)
- Smoothing only affects which intervals are combined into periods
- The attribute `period_interval_smoothed_count` shows if smoothing was active
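The general idea of spike smoothing can be illustrated like this (a minimal sketch: the integration's actual detection and trend prediction are internal, and the neighbor-mean replacement below is an assumption for the example):

```python
def smooth_isolated_spikes(prices: list[float], factor: float = 1.5) -> list[float]:
    """Replace single-interval upward outliers with the mean of their neighbors.

    Illustrative only: a price counts as an isolated spike here when it
    exceeds its neighbors' average by the given factor.
    """
    smoothed = list(prices)
    for i in range(1, len(prices) - 1):
        left, mid, right = prices[i - 1], prices[i], prices[i + 1]
        neighbor_avg = (left + right) / 2
        if mid > neighbor_avg * factor:  # isolated upward spike
            smoothed[i] = neighbor_avg
    return smoothed

print(smooth_isolated_spikes([18, 19, 35, 20, 19]))  # [18, 19, 19.5, 20, 19]
```

The original list stays untouched, mirroring how the integration preserves real prices and only uses the smoothed series to decide which intervals combine into periods.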
### Visual Example
**Timeline for a typical day:**
```
Hour: 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Price: 18 19 20 28 29 30 35 34 33 32 30 28 25 24 26 28 30 32 31 22 21 20 19 18
Daily MIN: 18 ct | Daily MAX: 35 ct | Daily AVG: 26 ct
Best Price (15% flex = ≤20.7 ct):
████████                                                          ████████
00:00-03:00 (3h)                                                  21:00-24:00 (3h)
Peak Price (-15% flex = ≥29.75 ct):
                    ████████████████████████          ████████
                    05:00-11:00 (6h)                  17:00-19:00 (2h)
```
---
## Configuration Guide
### Basic Settings
#### Flexibility
**What:** How far from MIN/MAX to search for periods
**Default:** 15% (Best Price), -15% (Peak Price)
**Range:** 0-100%
```yaml
best_price_flex: 15 # Can be up to 15% more expensive than daily MIN
peak_price_flex: -15 # Can be up to 15% less expensive than daily MAX
```
**When to adjust:**
- **Increase (20-25%)** → Find more/longer periods
- **Decrease (5-10%)** → Find only the very best/worst times
**💡 Tip:** Very high flexibility (>30%) is rarely useful. **Recommendation:** Start with 15-20% and enable relaxation; it adapts automatically to each day's price pattern.
#### Minimum Period Length
**What:** How long a period must be to show it
**Default:** 60 minutes (Best Price), 30 minutes (Peak Price)
**Range:** 15-240 minutes
```yaml
best_price_min_period_length: 60
peak_price_min_period_length: 30
```
**When to adjust:**
- **Increase (90-120 min)** → Only show longer periods (e.g., for heat pump cycles)
- **Decrease (30-45 min)** → Show shorter windows (e.g., for quick tasks)
#### Distance from Average
**What:** How much better than average a period must be
**Default:** 5%
**Range:** 0-20%
```yaml
best_price_min_distance_from_avg: 5
peak_price_min_distance_from_avg: 5
```
**When to adjust:**
- **Increase (5-10%)** → Only show clearly better times
- **Decrease (0-1%)** → Show any time below/above average
**Note:** Both flexibility and distance filters must be satisfied. When using high flexibility values (>30%), the distance filter may become the limiting factor. For best results, use moderate flexibility (15-20%) with relaxation enabled.
### Optional Filters
#### Level Filter (Absolute Quality)
**What:** Only show periods with CHEAP/EXPENSIVE intervals (not just below/above average)
**Default:** `any` (disabled)
**Options:** `any` | `cheap` | `very_cheap` (Best Price) | `expensive` | `very_expensive` (Peak Price)
```yaml
best_price_max_level: any # Show any period below average
best_price_max_level: cheap # Only show if at least one interval is CHEAP
```
**Use case:** "Only notify me when prices are objectively cheap/expensive"
**Volatility Thresholds:** The level filter also supports volatility-based levels (`volatility_low`, `volatility_medium`, `volatility_high`). These use **fixed internal thresholds** (LOW < 10%, MEDIUM < 20%, HIGH ≥ 20%) that are separate from the sensor volatility thresholds you configure in the UI. This separation ensures that changing sensor display preferences doesn't affect period calculation behavior.
#### Gap Tolerance (for Level Filter)
**What:** Allow some "mediocre" intervals within an otherwise good period
**Default:** 0 (strict)
**Range:** 0-10
```yaml
best_price_max_level: cheap
best_price_max_level_gap_count: 2 # Allow up to 2 NORMAL intervals per period
```
**Use case:** "Don't split periods just because one interval isn't perfectly CHEAP"
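How gap tolerance keeps periods together can be sketched as follows (illustrative Python, not the integration's code; it groups interval indices and tolerates up to `gap_count` disallowed intervals per period):

```python
def build_periods(levels: list[str], allowed: set[str], gap_count: int) -> list[tuple[int, int]]:
    """Group consecutive interval indices whose level is allowed,
    tolerating up to gap_count disallowed intervals inside a period."""
    periods: list[tuple[int, int]] = []
    start = last_good = None
    gaps = 0
    for i, level in enumerate(levels):
        if level in allowed:
            if start is None:
                start, gaps = i, 0   # open a new period
            last_good = i
        elif start is not None:
            gaps += 1
            if gaps > gap_count:     # too many gaps: close at the last good interval
                periods.append((start, last_good))
                start = None
    if start is not None:            # close a period running to the end
        periods.append((start, last_good))
    return periods

levels = ["CHEAP", "CHEAP", "NORMAL", "CHEAP", "NORMAL", "NORMAL", "NORMAL", "CHEAP"]
print(build_periods(levels, {"CHEAP"}, 0))  # [(0, 1), (3, 3), (7, 7)] strict
print(build_periods(levels, {"CHEAP"}, 2))  # [(0, 3), (7, 7)] with tolerance
```

With tolerance, the single NORMAL interval at index 2 no longer splits the first period.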
### Tweaking Strategy: What to Adjust First?
When you're not happy with the default behavior, adjust settings in this order:
#### 1. **Start with Relaxation (Easiest)**
If you're not finding enough periods:
```yaml
enable_min_periods_best: true # Already default!
min_periods_best: 2 # Already default!
relaxation_attempts_best: 11 # Already default!
```
**Why start here?** Relaxation automatically finds the right balance for each day. Much easier than manual tuning.
#### 2. **Adjust Period Length (Simple)**
If periods are too short/long for your use case:
```yaml
best_price_min_period_length: 90 # Increase from 60 for longer periods
# OR
best_price_min_period_length: 45 # Decrease from 60 for shorter periods
```
**Safe to change:** This only affects duration, not price selection logic.
#### 3. **Fine-tune Flexibility (Moderate)**
If you consistently want more/fewer periods:
```yaml
best_price_flex: 20 # Increase from 15% for more periods
# OR
best_price_flex: 10 # Decrease from 15% for stricter selection
```
**⚠️ Watch out:** Values >25% may conflict with distance filter. Use relaxation instead.
#### 4. **Adjust Distance from Average (Advanced)**
Only if periods seem "mediocre" (not really cheap/expensive):
```yaml
best_price_min_distance_from_avg: 10 # Increase from 5% for stricter quality
```
**⚠️ Careful:** High values (>10%) can make it impossible to find periods on flat price days.
#### 5. **Enable Level Filter (Expert)**
Only if you want absolute quality requirements:
```yaml
best_price_max_level: cheap # Only show objectively CHEAP periods
```
**⚠️ Very strict:** Many days may have zero qualifying periods. **Always enable relaxation when using this!**
### Common Mistakes to Avoid
- ❌ **Don't increase flexibility to >30% manually** → Use relaxation instead
- ❌ **Don't combine high distance (>10%) with strict level filter** → Too restrictive
- ❌ **Don't disable relaxation with strict filters** → You'll get zero periods on some days
- ❌ **Don't change all settings at once** → Adjust one at a time and observe results
- ✅ **Do use defaults + relaxation** → Works for 90% of cases
- ✅ **Do adjust one setting at a time** → Easier to understand impact
- ✅ **Do check sensor attributes** → Shows why periods were/weren't found
---
## Understanding Relaxation
### What Is Relaxation?
Sometimes, strict filters find too few periods (or none). **Relaxation automatically loosens filters** until a minimum number of periods is found.
### How to Enable
```yaml
enable_min_periods_best: true
min_periods_best: 2 # Try to find at least 2 periods per day
relaxation_attempts_best: 11 # Flex levels to test (default: 11 steps = 22 filter combinations)
```
**Good news:** Relaxation is **enabled by default** with sensible settings. Most users don't need to change anything here!
Set the matching `relaxation_attempts_peak` value when tuning Peak Price periods. Both sliders accept 1-12 attempts, and the default of 11 flex levels translates to 22 filter-combination tries (11 flex levels × 2 filter combos) for each of Best and Peak calculations. Lower it for quick feedback, or raise it when either sensor struggles to hit the minimum-period target on volatile days.
### Why Relaxation Is Better Than Manual Tweaking
**Problem with manual settings:**
- You set flex to 25% → Works great on Monday (volatile prices)
- Same 25% flex on Tuesday (flat prices) → Finds "best price" periods that aren't really cheap
- You're stuck with one setting for all days
**Solution with relaxation:**
- Monday (volatile): Uses flex 15% (original) → Finds 2 perfect periods ✓
- Tuesday (flat): Escalates to flex 21% → Finds 2 decent periods ✓
- Wednesday (mixed): Uses flex 18% → Finds 2 good periods ✓
**Each day gets exactly the flexibility it needs!**
### How It Works (Adaptive Matrix)
Relaxation uses a **matrix approach** - trying _N_ flexibility levels (your configured **relaxation attempts**) with 2 filter combinations per level. With the default of 11 attempts, that means 11 flex levels × 2 filter combinations = **22 total filter-combination tries per day**; fewer attempts mean fewer flex increases, while more attempts extend the search further before giving up.
**Important:** The flexibility increment is **fixed at 3% per step** (hard-coded for reliability). This means:
- Base flex 15% → 18% → 21% → 24% → ... → 45% (with 11 attempts)
- Base flex 20% → 23% → 26% → 29% → ... → 50% (with 11 attempts)
#### Phase Matrix
For each day, the system tries:
**Flexibility Levels (Attempts):**
1. Attempt 1 = Original flex (e.g., 15%)
2. Attempt 2 = +3% step (18%)
3. Attempt 3 = +3% step (21%)
4. Attempt 4 = +3% step (24%)
5. … Attempts 5-11 (default) continue adding +3% each time
6. … Additional attempts keep extending the same pattern up to the 12-attempt maximum (up to 48% from a 15% base)
**2 Filter Combinations (per flexibility level):**
1. Original filters (your configured level filter)
2. Remove level filter (level=any)
**Example progression:**
```
Flex 15% + Original filters → Not enough periods
Flex 15% + Level=any → Not enough periods
Flex 18% + Original filters → Not enough periods
Flex 18% + Level=any → SUCCESS! Found 2 periods ✓
(stops here - no need to try more)
```
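The attempt matrix follows directly from the rules above (attempt 1 = original flex, fixed +3% per step, two filter combinations per level). A compact sketch, illustrative rather than the integration's code:

```python
STEP_PCT = 3  # fixed +3% per flex level (hard-coded in the integration)

def relaxation_schedule(base_flex: int, attempts: int):
    """Yield (flex, filter_combo) pairs in the order they are tried.

    Each flex level is tried first with the original filters,
    then with the level filter removed (level=any).
    """
    for attempt in range(attempts):
        flex = base_flex + attempt * STEP_PCT
        yield flex, "original filters"
        yield flex, "level=any"

schedule = list(relaxation_schedule(15, 11))
print(len(schedule))  # 22 filter-combination tries per day
print(schedule[0])    # (15, 'original filters')
print(schedule[-1])   # (45, 'level=any')
```

In practice the search stops at the first combination that reaches the minimum-period target, exactly as in the progression above.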
### Choosing the Number of Attempts
- **Default (11 attempts)** balances speed and completeness for most grids (22 combinations per day for both Best and Peak)
- **Lower (4-8 attempts)** if you only want mild relaxation and keep processing time minimal (reaches ~24-36% flex from a 15% base)
- **Higher (12 attempts)** for extremely volatile days when you must reach near the 50% maximum (24 combinations)
- Remember: each additional attempt adds two more filter combinations because every new flex level still runs both filter overrides (original + level=any)
#### Per-Day Independence
**Critical:** Each day relaxes **independently**:
```
Day 1: Finds 2 periods with flex 15% (original) → No relaxation needed
Day 2: Needs flex 21% + level=any → Uses relaxed settings
Day 3: Finds 2 periods with flex 15% (original) → No relaxation needed
```
**Why?** Price patterns vary daily. Some days have clear cheap/expensive windows (strict filters work), others don't (relaxation needed).
---
## Common Scenarios
### Scenario 1: Simple Best Price (Default)
**Goal:** Find the cheapest time each day to run dishwasher
**Configuration:**
```yaml
# Use defaults - no configuration needed!
best_price_flex: 15 # (default)
best_price_min_period_length: 60 # (default)
best_price_min_distance_from_avg: 5 # (default)
```
**What you get:**
- 1-3 periods per day with prices ≤ MIN + 15%
- Each period at least 1 hour long
- All periods at least 5% cheaper than daily average
**Automation example:**
```yaml
automation:
- trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
action:
- service: switch.turn_on
target:
entity_id: switch.dishwasher
```
---
## Troubleshooting
### No Periods Found
**Symptom:** `binary_sensor.<home_name>_best_price_period` never turns "on"
**Common Solutions:**
1. **Check if relaxation is enabled**
```yaml
enable_min_periods_best: true # Should be true (default)
min_periods_best: 2 # Try to find at least 2 periods
```
2. **If still no periods, check filters**
- Look at sensor attributes: `relaxation_active` and `relaxation_level`
- If relaxation exhausted all attempts: Filters too strict or flat price day
3. **Try increasing flexibility slightly**
```yaml
best_price_flex: 20 # Increase from default 15%
```
4. **Or reduce period length requirement**
```yaml
best_price_min_period_length: 45 # Reduce from default 60 minutes
```
### Periods Split Into Small Pieces
**Symptom:** Many short periods instead of one long period
**Common Solutions:**
1. **If using level filter, add gap tolerance**
```yaml
best_price_max_level: cheap
best_price_max_level_gap_count: 2 # Allow 2 NORMAL intervals
```
2. **Slightly increase flexibility**
```yaml
best_price_flex: 20 # From 15% → captures wider price range
```
3. **Check for price spikes**
- Automatic smoothing should handle this
- Check attribute: `period_interval_smoothed_count`
- If 0: Not isolated spikes, but real price levels
### Understanding Sensor Attributes
**Key attributes to check:**
```yaml
# Entity: binary_sensor.<home_name>_best_price_period
# When "on" (period active):
start: "2025-11-11T02:00:00+01:00" # Period start time
end: "2025-11-11T05:00:00+01:00" # Period end time
duration_minutes: 180 # Duration in minutes
price_mean: 18.5 # Arithmetic mean price in the period
price_median: 18.3 # Median price in the period
rating_level: "LOW" # All intervals have LOW rating
# Relaxation info (shows if filter loosening was needed):
relaxation_active: true # This day needed relaxation
relaxation_level: "price_diff_18.0%+level_any" # Found at 18% flex, level filter removed
# Optional (only shown when relevant):
period_interval_smoothed_count: 2 # Number of price spikes smoothed
period_interval_level_gap_count: 1 # Number of "mediocre" intervals tolerated
```
### Midnight Price Classification Changes
**Symptom:** A Best Price period at 23:45 suddenly changes to Peak Price at 00:00 (or vice versa), even though the absolute price barely changed.
**Why This Happens:**
This is **mathematically correct behavior** caused by how electricity prices are set in the day-ahead market:
**Market Timing:**
- The EPEX SPOT Day-Ahead auction closes at **12:00 CET** each day
- **All prices** for the next day (00:00-23:45) are set at this moment
- Late-day intervals (23:45) are priced **~36 hours before delivery**
- Early-day intervals (00:00) are priced **~12 hours before delivery**
**Why Prices Jump at Midnight:**
1. **Forecast Uncertainty:** Weather, demand, and renewable generation forecasts are more uncertain 36 hours ahead than 12 hours ahead
2. **Risk Buffer:** Late-day prices include a risk premium for this uncertainty
3. **Independent Days:** Each day has its own min/max/avg calculated from its 96 intervals
4. **Relative Classification:** Periods are classified based on their **position within the day's price range**, not absolute prices
**Example:**
```yaml
# Day 1 (low volatility, narrow range)
Price range: 18-22 ct/kWh (4 ct span)
Daily average: 20 ct/kWh
23:45: 18.5 ct/kWh → 7.5% below average → BEST PRICE ✅
# Day 2 (low volatility, narrow range)
Price range: 17-21 ct/kWh (4 ct span)
Daily average: 19 ct/kWh
00:00: 18.6 ct/kWh → only 2.1% below average → fails the 5% distance filter, and sits inside the Peak search range (≥ 17.85 ct at -15% flex) → PEAK PRICE ❌
# Observation: Absolute price barely changed (18.5 → 18.6 ct)
# But relative position changed dramatically:
# - Day 1: Near the bottom of the range
# - Day 2: Near the middle/top of the range
```
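The relative-position arithmetic behind this example is simple to reproduce (illustrative Python, using the numbers from the example above):

```python
def pct_below_avg(price: float, daily_avg: float) -> float:
    """Percent below the daily average (positive = cheaper than average)."""
    return round((1 - price / daily_avg) * 100, 1)

# Day 1: 23:45 interval vs. that day's average
print(pct_below_avg(18.5, 20))  # 7.5 -> clears a 5% distance filter
# Day 2: 00:00 interval vs. the next day's average
print(pct_below_avg(18.6, 19))  # 2.1 -> fails a 5% distance filter
```

A 0.1 ct absolute change flips the relative position because each day is measured against its own average.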
**When This Occurs:**
- **Low-volatility days:** When price span is narrow (< 5 ct/kWh)
- **Stable weather:** Similar conditions across multiple days
- **Market transitions:** Switching between high/low demand seasons
**How to Detect:**
Check the volatility sensors to understand if a period flip is meaningful:
```yaml
# Check daily volatility (available in integration)
sensor.<home_name>_today_s_price_volatility: 8.2% # Low volatility
sensor.<home_name>_tomorrow_s_price_volatility: 7.9% # Also low
# Low volatility (< 15%) means:
# - Small absolute price differences between periods
# - Classification changes may not be economically significant
# - Consider ignoring period classification on such days
```
**Handling in Automations:**
You can make your automations volatility-aware:
```yaml
# Option 1: Only act on high-volatility days
automation:
- alias: "Dishwasher - Best Price (High Volatility Only)"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
- condition: numeric_state
entity_id: sensor.<home_name>_today_s_price_volatility
above: 15 # Only act if volatility > 15%
action:
- service: switch.turn_on
entity_id: switch.dishwasher
# Option 2: Check absolute price, not just classification
automation:
- alias: "Heat Water - Cheap Enough"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
- condition: numeric_state
entity_id: sensor.<home_name>_current_electricity_price
below: 20 # Absolute threshold: < 20 ct/kWh
action:
- service: switch.turn_on
entity_id: switch.water_heater
# Option 3: Use per-period day volatility (available on period sensors)
automation:
- alias: "EV Charging - Volatility-Aware"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
# Check if the period's day has meaningful volatility
- condition: template
value_template: >
{{ state_attr('binary_sensor.<home_name>_best_price_period', 'day_volatility_%') | float(0) > 15 }}
action:
- service: switch.turn_on
entity_id: switch.ev_charger
```
**Available Per-Period Attributes:**
Each period sensor exposes day volatility and price statistics:
```yaml
binary_sensor.<home_name>_best_price_period:
day_volatility_%: 8.2 # Volatility % of the period's day
day_price_min: 1800.0 # Minimum price of the day (ct/kWh)
day_price_max: 2200.0 # Maximum price of the day (ct/kWh)
day_price_span: 400.0 # Difference (max - min) in ct
```
These attributes allow automations to check: "Is the classification meaningful on this particular day?"
**Summary:**
- ✅ **Expected behavior:** Periods are evaluated per-day, midnight is a natural boundary
- ✅ **Market reality:** Late-day prices have more uncertainty than early-day prices
- ✅ **Solution:** Use volatility sensors, absolute price thresholds, or per-period day volatility attributes
---
## Advanced Topics
For advanced configuration patterns and technical deep-dive, see:
- [Automation Examples](./automation-examples.md) - Real-world automation patterns
- [Actions](./actions.md) - Using the `tibber_prices.get_chartdata` action for custom visualizations
### Quick Reference
**Configuration Parameters:**
| Parameter | Default | Range | Purpose |
| ---------------------------------- | ------- | ---------------- | ------------------------------ |
| `best_price_flex` | 15% | 0-100% | Search range from daily MIN |
| `best_price_min_period_length` | 60 min | 15-240 | Minimum duration |
| `best_price_min_distance_from_avg` | 5% | 0-20% | Quality threshold |
| `best_price_max_level` | any | any/cheap/vcheap | Absolute quality |
| `best_price_max_level_gap_count` | 0 | 0-10 | Gap tolerance |
| `enable_min_periods_best` | true | true/false | Enable relaxation |
| `min_periods_best` | 2 | 1-10 | Target periods per day |
| `relaxation_attempts_best` | 11 | 1-12 | Flex levels (attempts) per day |
**Peak Price:** Same parameters with `peak_price_*` prefix (defaults: flex=-15%, minimum period length=30 min; other defaults identical)
### Price Levels Reference
The Tibber API provides price levels for each 15-minute interval:
**Levels (based on trailing 24h average):**
- `VERY_CHEAP` - Significantly below average
- `CHEAP` - Below average
- `NORMAL` - Around average
- `EXPENSIVE` - Above average
- `VERY_EXPENSIVE` - Significantly above average
---
**Last updated:** November 20, 2025
**Integration version:** 2.0+


@ -1,416 +0,0 @@
---
comments: false
---
# Sensors
> **Note:** This guide is under construction. For now, please refer to the [main README](https://github.com/jpawlowski/hass.tibber_prices/blob/v0.25.0b0/README.md) for available sensors.
> **Tip:** Many sensors have dynamic icons and colors! See the **[Dynamic Icons Guide](dynamic-icons.md)** and **[Dynamic Icon Colors Guide](icon-colors.md)** to enhance your dashboards.
> **Entity ID tip:** `<home_name>` is a placeholder for your Tibber home display name in Home Assistant. Entity IDs are derived from the displayed name (localized), so the exact slug may differ. Example suffixes below use the English display names (en.json) as a baseline. You can find the real ID in **Settings → Devices & Services → Entities** (or **Developer Tools → States**).
## Binary Sensors
### Best Price Period & Peak Price Period
These binary sensors indicate when you're in a detected best or peak price period. See the **[Period Calculation Guide](period-calculation.md)** for a detailed explanation of how these periods are calculated and configured.
**Quick overview:**
- **Best Price Period**: Turns ON during periods with significantly lower prices than the daily average
- **Peak Price Period**: Turns ON during periods with significantly higher prices than the daily average
Both sensors include rich attributes with period details, intervals, relaxation status, and more.
## Core Price Sensors
### Average Price Sensors
The integration provides several sensors that calculate average electricity prices over different time windows. These sensors show a **typical** price value that represents the overall price level, helping you make informed decisions about when to use electricity.
#### Available Average Sensors
| Sensor | Description | Time Window |
|--------|-------------|-------------|
| **Average Price Today** | Typical price for current calendar day | 00:00 - 23:59 today |
| **Average Price Tomorrow** | Typical price for next calendar day | 00:00 - 23:59 tomorrow |
| **Trailing Price Average** | Typical price for last 24 hours | Rolling 24h backward |
| **Leading Price Average** | Typical price for next 24 hours | Rolling 24h forward |
| **Current Hour Average** | Smoothed price around current time | 5 intervals (~75 min) |
| **Next Hour Average** | Smoothed price around next hour | 5 intervals (~75 min) |
| **Next N Hours Average** | Future price forecast | 1h, 2h, 3h, 4h, 5h, 6h, 8h, 12h |
#### Configurable Display: Median vs Mean
All average sensors support **two different calculation methods** for the state value:
- **Median** (default): The "middle value" when all prices are sorted. Resistant to extreme price spikes, shows the **typical** price level you experienced.
- **Arithmetic Mean**: The mathematical average including all prices. Better for **cost calculations** but affected by extreme spikes.
**Why two values matter:**
```yaml
# Example price data for one day:
# Prices: 10, 12, 13, 15, 80 ct/kWh (one extreme spike)
#
# Median = 13 ct/kWh ← "Typical" price level (middle value)
# Mean = 26 ct/kWh ← Mathematical average (affected by spike)
```
The median shows you what price level was **typical** during that period, while the mean shows the actual **average cost** if you consumed evenly throughout the period.
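The difference is easy to verify with Python's standard library (illustrative, using the example's prices):

```python
from statistics import mean, median

# Price data from the example above: one extreme 80 ct spike
prices = [10, 12, 13, 15, 80]

typical = median(prices)  # 13 ct/kWh: the middle value, unaffected by the spike
average = mean(prices)    # 26 ct/kWh: the arithmetic mean, pulled up by the spike
print(typical, average)
```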
#### Configuring the Display
You can choose which value is displayed in the sensor state:
1. Go to **Settings → Devices & Services → Tibber Prices**
2. Click **Configure** on your home
3. Navigate to **Step 6: Average Sensor Display Settings**
4. Choose between:
- **Median** (default) - Shows typical price level, resistant to spikes
- **Arithmetic Mean** - Shows actual mathematical average
**Important:** Both values are **always available** as sensor attributes, regardless of your choice! This ensures your automations continue to work if you change the display setting.
#### Using Both Values in Automations
Both `price_mean` and `price_median` are always available as attributes:
```yaml
# Example: Get both values regardless of display setting
sensor:
- platform: template
sensors:
daily_price_analysis:
friendly_name: "Daily Price Analysis"
value_template: >
{% set median = state_attr('sensor.<home_name>_price_today', 'price_median') %}
{% set mean = state_attr('sensor.<home_name>_price_today', 'price_mean') %}
{% set current = states('sensor.<home_name>_current_electricity_price') | float %}
{% if current < median %}
Below typical ({{ ((1 - current/median) * 100) | round(1) }}% cheaper)
{% elif current < mean %}
Typical price range
{% else %}
Above average ({{ ((current/mean - 1) * 100) | round(1) }}% more expensive)
{% endif %}
```
#### Practical Examples
**Example 1: Smart dishwasher control**
Run dishwasher only when price is significantly below the daily typical level:
```yaml
automation:
- alias: "Start Dishwasher When Cheap"
trigger:
- platform: state
entity_id: binary_sensor.<home_name>_best_price_period
to: "on"
condition:
# Only if current price is at least 20% below typical (median)
- condition: template
value_template: >
{% set current = states('sensor.<home_name>_current_electricity_price') | float %}
{% set median = state_attr('sensor.<home_name>_price_today', 'price_median') | float %}
{{ current < (median * 0.8) }}
action:
- service: switch.turn_on
entity_id: switch.dishwasher
```
**Example 2: Cost-aware heating control**
Use mean for actual cost calculations:
```yaml
automation:
- alias: "Heating Budget Control"
trigger:
- platform: time
at: "06:00:00"
action:
# Calculate expected daily heating cost
- variables:
mean_price: "{{ state_attr('sensor.<home_name>_price_today', 'price_mean') | float }}"
heating_kwh_per_day: 15 # Estimated consumption
daily_cost: "{{ (mean_price * heating_kwh_per_day / 100) | round(2) }}"
- service: notify.mobile_app
data:
title: "Heating Cost Estimate"
message: "Expected cost today: €{{ daily_cost }} (avg price: {{ mean_price }} ct/kWh)"
```
**Example 3: Smart charging based on rolling average**
Use trailing average to understand recent price trends:
```yaml
automation:
- alias: "EV Charging - Price Trend Based"
trigger:
- platform: state
entity_id: sensor.ev_battery_level
condition:
      # Start charging if current price < 90% of the trailing 24h median
- condition: template
value_template: >
{% set current = states('sensor.<home_name>_current_electricity_price') | float %}
{% set trailing_avg = state_attr('sensor.<home_name>_price_trailing_24h', 'price_median') | float %}
{{ current < (trailing_avg * 0.9) }}
# And battery < 80%
- condition: numeric_state
entity_id: sensor.ev_battery_level
below: 80
action:
- service: switch.turn_on
entity_id: switch.ev_charger
```
#### Key Attributes
All average sensors provide these attributes:
| Attribute | Description | Example |
|-----------|-------------|---------|
| `price_mean` | Arithmetic mean (always available) | 25.3 ct/kWh |
| `price_median` | Median value (always available) | 22.1 ct/kWh |
| `interval_count` | Number of intervals included | 96 |
| `timestamp` | Reference time for calculation | 2025-12-18T00:00:00+01:00 |
**Note:** The `price_mean` and `price_median` attributes are **always present** regardless of which value you configured for display. This ensures automation compatibility when changing the display setting.
#### When to Use Which Value
**Use Median for:**
- ✅ Comparing "typical" price levels across days
- ✅ Determining if current price is unusually high/low
- ✅ User-facing displays ("What was today like?")
- ✅ Volatility analysis (comparing typical vs extremes)
**Use Mean for:**
- ✅ Cost calculations and budgeting
- ✅ Energy cost estimations
- ✅ Comparing actual average costs between periods
- ✅ Financial planning and forecasting
**Both values tell different stories:**
- High median + much higher mean = Expensive spikes occurred
- Low median + higher mean = Generally cheap with occasional spikes
- Similar median and mean = Stable prices (low volatility)
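The rules of thumb above can be sketched as a small heuristic. This is purely illustrative and not part of the integration; the 1.2 factor is an arbitrary assumption for the example:

```python
from statistics import mean, median

def price_story(prices):
    """Illustrative heuristic (not part of the integration): compare the
    mean and median of a day's prices to characterize its profile."""
    m, md = mean(prices), median(prices)
    if m > md * 1.2:       # mean pulled well above the median
        return "spiky"     # expensive spikes occurred
    return "stable"        # low volatility, mean close to median

print(price_story([10, 12, 13, 15, 80]))   # spiky  (mean 26 >> median 13)
print(price_story([14, 15, 15, 16, 15]))   # stable (mean 15 == median 15)
```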
## Volatility Sensors
Volatility sensors help you understand how much electricity prices fluctuate over a given period. Instead of just looking at the absolute price, they measure the **relative price variation**, which is a great indicator of whether it's a good day for price-based energy optimization.
The calculation is based on the **Coefficient of Variation (CV)**, a standardized statistical measure defined as:
`CV = (Standard Deviation / Arithmetic Mean) * 100%`
This results in a percentage that shows how much prices deviate from the average. A low CV means stable prices, while a high CV indicates significant price swings and thus a high potential for saving money by shifting consumption.
The sensor's state can be `low`, `moderate`, `high`, or `very_high`, based on configurable thresholds.
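As a concrete illustration of the formula, here is a standalone Python sketch. The integration's exact implementation may differ; using the population standard deviation is an assumption made here for the example:

```python
from statistics import mean, pstdev

def coefficient_of_variation(prices):
    """CV = (standard deviation / arithmetic mean) * 100%.
    Population standard deviation is assumed for this illustration."""
    return pstdev(prices) / mean(prices) * 100

stable = [14, 15, 15, 16, 15]    # nearly flat prices
volatile = [10, 12, 13, 15, 80]  # one extreme spike

print(round(coefficient_of_variation(stable), 1))    # 4.2  -> very stable day
print(round(coefficient_of_variation(volatile), 1))  # 104.0 -> extreme swings
```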
### Available Volatility Sensors
| Sensor | Description | Time Window |
|---|---|---|
| **Today's Price Volatility** | Volatility for the current calendar day | 00:00 - 23:59 today |
| **Tomorrow's Price Volatility** | Volatility for the next calendar day | 00:00 - 23:59 tomorrow |
| **Next 24h Price Volatility** | Volatility for the next 24 hours from now | Rolling 24h forward |
| **Today + Tomorrow Price Volatility** | Volatility across both today and tomorrow | Up to 48 hours |
### Configuration
You can adjust the CV thresholds that determine the volatility level:
1. Go to **Settings → Devices & Services → Tibber Prices**.
2. Click **Configure**.
3. Go to the **Price Volatility Thresholds** step.
Default thresholds are:
- **Moderate:** 15%
- **High:** 30%
- **Very High:** 50%
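The mapping from CV to the sensor's level can be sketched with these default thresholds. Note this is an illustration only; the boundary behavior at exactly 15/30/50% (inclusive vs. exclusive) is an assumption, and the real thresholds come from your options-flow configuration:

```python
def classify_volatility(cv_percent, moderate=15.0, high=30.0, very_high=50.0):
    """Map a CV percentage to a volatility level using the integration's
    default thresholds (illustrative; thresholds are configurable)."""
    if cv_percent >= very_high:
        return "very_high"
    if cv_percent >= high:
        return "high"
    if cv_percent >= moderate:
        return "moderate"
    return "low"

print(classify_volatility(4.2))    # low
print(classify_volatility(23.5))   # moderate
print(classify_volatility(104.0))  # very_high
```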
### Key Attributes
All volatility sensors provide these attributes:
| Attribute | Description | Example |
|---|---|---|
| `price_coefficient_variation_%` | The calculated Coefficient of Variation | `23.5` |
| `price_spread` | The difference between the highest and lowest price | `12.3` |
| `price_min` | The lowest price in the period | `10.2` |
| `price_max` | The highest price in the period | `22.5` |
| `price_mean` | The arithmetic mean of all prices in the period | `15.1` |
| `interval_count` | Number of price intervals included in the calculation | `96` |
### Usage in Automations & Best Practices
You can use the volatility sensor to decide if a price-based optimization is worth it. For example, if your solar battery has conversion losses, you might only want to charge and discharge it on days with high volatility.
**Best Practice: Use the `price_volatility` Attribute**
For automations, it is strongly recommended to use the `price_volatility` attribute instead of the sensor's main state.
- **Why?** The main `state` of the sensor is translated into your Home Assistant language (e.g., "Hoch" in German). If you change your system language, automations based on this state will break. The `price_volatility` attribute is **always in lowercase English** (`"low"`, `"moderate"`, `"high"`, `"very_high"`) and therefore provides a stable, language-independent value.
**Good Example (Robust Automation):**
This automation triggers only if the volatility is classified as `high` or `very_high`, respecting your central settings and working independently of the system language.
```yaml
automation:
- alias: "Enable battery optimization only on volatile days"
trigger:
- platform: template
value_template: >
{{ state_attr('sensor.<home_name>_today_s_price_volatility', 'price_volatility') in ['high', 'very_high'] }}
action:
- service: input_boolean.turn_on
entity_id: input_boolean.battery_optimization_enabled
```
---
**Avoid Hard-Coding Numeric Thresholds**
You might be tempted to use the numeric `price_coefficient_variation_%` attribute directly in your automations. This is not recommended.
- **Why?** The integration provides central configuration options for the volatility thresholds. By using the classified `price_volatility` attribute, your automations automatically adapt if you decide to change what you consider "high" volatility (e.g., changing the threshold from 30% to 35%). Hard-coding values means you would have to find and update them in every single automation.
**Bad Example (Brittle Automation):**
This automation uses a hard-coded value. If you later change the "High" threshold in the integration's options to 35%, this automation will not respect that change and might trigger at the wrong time.
```yaml
automation:
- alias: "Brittle - Enable battery optimization"
trigger:
#
# BAD: Avoid hard-coding numeric values
#
- platform: numeric_state
entity_id: sensor.<home_name>_today_s_price_volatility
attribute: price_coefficient_variation_%
above: 30
action:
- service: input_boolean.turn_on
entity_id: input_boolean.battery_optimization_enabled
```
By following the "Good Example", your automations become simpler, more readable, and much easier to maintain.
## Rating Sensors
Coming soon...
## Diagnostic Sensors
### Chart Metadata
**Entity ID:** `sensor.<home_name>_chart_metadata`
> **✨ New Feature**: This sensor provides dynamic chart configuration metadata for optimal visualization. Perfect for use with the `get_apexcharts_yaml` action!
This diagnostic sensor provides essential chart configuration values as sensor attributes, enabling dynamic Y-axis scaling and optimal chart appearance in rolling window modes.
**Key Features:**
- **Dynamic Y-Axis Bounds**: Automatically calculates optimal `yaxis_min` and `yaxis_max` for your price data
- **Automatic Updates**: Refreshes when price data changes (coordinator updates)
- **Lightweight**: Metadata-only mode (no data processing) for fast response
- **State Indicator**: Shows `pending` (initialization), `ready` (data available), or `error` (service call failed)
**Attributes:**
- **`timestamp`**: When the metadata was last fetched
- **`yaxis_min`**: Suggested minimum value for Y-axis (optimal scaling)
- **`yaxis_max`**: Suggested maximum value for Y-axis (optimal scaling)
- **`currency`**: Currency code (e.g., "EUR", "NOK")
- **`resolution`**: Interval duration in minutes (usually 15)
- **`error`**: Error message if service call failed
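The exact bounds calculation is internal to the integration; as a purely hypothetical sketch of the idea (pad the observed price range so the curve does not touch the chart edges), it might look like this:

```python
def suggest_yaxis_bounds(prices, padding=0.1):
    """Hypothetical illustration of dynamic Y-axis bounds; the integration's
    actual calculation may differ. Pads the observed range by 10% on each side."""
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0  # avoid a zero span when prices are flat
    return (round(lo - span * padding, 1), round(hi + span * padding, 1))

print(suggest_yaxis_bounds([18.2, 21.7, 25.4, 31.0]))
```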
**Usage:**
The `tibber_prices.get_apexcharts_yaml` action **automatically uses this sensor** for dynamic Y-axis scaling in `rolling_window` and `rolling_window_autozoom` modes! No manual configuration is needed: render the action's result with `config-template-card`, and the sensor provides optimal Y-axis bounds automatically.
See the **[Chart Examples Guide](chart-examples.md)** for practical examples!
---
### Chart Data Export
**Entity ID:** `sensor.<home_name>_chart_data_export`
**Default State:** Disabled (must be manually enabled)
> **⚠️ Legacy Feature**: This sensor is maintained for backward compatibility. For new integrations, use the **`tibber_prices.get_chartdata`** service instead, which offers more flexibility and better performance.
This diagnostic sensor provides cached chart-friendly price data that can be consumed by chart cards (ApexCharts, custom cards, etc.).
**Key Features:**
- **Configurable via Options Flow**: Service parameters can be configured through the integration's options menu (Step 7 of 7)
- **Automatic Updates**: Data refreshes on coordinator updates (every 15 minutes)
- **Attribute-Based Output**: Chart data is stored in sensor attributes for easy access
- **State Indicator**: Shows `pending` (before first call), `ready` (data available), or `error` (service call failed)
**Important Notes:**
- ⚠️ Disabled by default - must be manually enabled in entity settings
- ⚠️ Consider using the service instead for better control and flexibility
- ⚠️ Configuration updates require HA restart
**Attributes:**
The sensor exposes chart data with metadata in attributes:
- **`timestamp`**: When the data was last fetched
- **`error`**: Error message if service call failed
- **`data`** (or custom name): Array of price data points in configured format
**Configuration:**
To configure the sensor's output format:
1. Go to **Settings → Devices & Services → Tibber Prices**
2. Click **Configure** on your Tibber home
3. Navigate through the options wizard to **Step 7: Chart Data Export Settings**
4. Configure output format, filters, field names, and other options
5. Save and restart Home Assistant
**Available Settings:**
See the `tibber_prices.get_chartdata` service documentation below for a complete list of available parameters. All service parameters can be configured through the options flow.
**Example Usage:**
```yaml
# ApexCharts card consuming the sensor
type: custom:apexcharts-card
series:
- entity: sensor.<home_name>_chart_data_export
data_generator: |
return entity.attributes.data;
```
**Migration Path:**
If you're currently using this sensor, consider migrating to the service:
```yaml
# Old approach (sensor)
- service: apexcharts_card.update
data:
entity: sensor.<home_name>_chart_data_export
# New approach (service)
- service: tibber_prices.get_chartdata
data:
entry_id: YOUR_ENTRY_ID
day: ["today", "tomorrow"]
output_format: array_of_objects
response_variable: chart_data
```
@ -1,21 +0,0 @@
---
comments: false
---
# Troubleshooting
> **Note:** This guide is under construction.
## Common Issues
Coming soon...
## Debug Logging
Coming soon...
## Getting Help
- Check [existing issues](https://github.com/jpawlowski/hass.tibber_prices/issues)
- Open a [new issue](https://github.com/jpawlowski/hass.tibber_prices/issues/new) with detailed information
- Include logs, configuration, and steps to reproduce
@ -1,5 +1,4 @@
[
"v0.25.0b0",
"v0.24.0",
"v0.23.1",
"v0.23.0",
@ -54,6 +54,15 @@ if ! echo "$VERSION" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+(b[0-9]+)?$'; then
die "Invalid version format: $VERSION\nExpected format: X.Y.Z or X.Y.ZbN (e.g., 0.3.0, 1.0.0, 0.25.0b0)"
fi
IS_PRERELEASE=false
if [[ $VERSION =~ b[0-9]+$ ]]; then
IS_PRERELEASE=true
log_header "Documentation guardrail"
log_step "Detected beta/prerelease version ($VERSION)"
log_step "Skip Docusaurus versioning: leave docs on 'next'"
log_step "Only version docs on final stable releases"
fi
TAG="v$VERSION"
MANIFEST="custom_components/tibber_prices/manifest.json"