hass.tibber_prices/developer/search-doc-1764985317891.json
github-actions[bot] e9aea64a2e deploy: 6898c126e3
2025-12-06 01:42:39 +00:00



{"searchDocs":[{"title":"Coding Guidelines","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/coding-guidelines","content":"","keywords":"","version":"Next 🚧"},{"title":"Code Style","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#code-style","content":" Formatter/Linter: Ruff (replaces Black, Flake8, isort)Max line length: 120 charactersMax complexity: 25 (McCabe)Target: Python 3.13 Run before committing: ./scripts/lint # Auto-fix issues ./scripts/release/hassfest # Validate integration structure ","version":"Next 🚧","tagName":"h2"},{"title":"Naming Conventions","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#naming-conventions","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Class Names","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#class-names","content":" All public classes MUST use the integration name as prefix. This is a Home Assistant standard to avoid naming conflicts between integrations. 
# ✅ CORRECT class TibberPricesApiClient: class TibberPricesDataUpdateCoordinator: class TibberPricesSensor: # ❌ WRONG - Missing prefix class ApiClient: class DataFetcher: class TimeService: When prefix is required: Public classes used across multiple modulesAll exception classesAll coordinator and entity classesData classes (dataclasses, NamedTuples) used as public APIs When prefix can be omitted: Private helper classes within a single module (prefix with _ underscore)Type aliases and callbacks (e.g., TimeServiceCallback)Small internal NamedTuples for function returns Private Classes: If a helper class is ONLY used within a single module file, prefix it with underscore: # ✅ Private class - used only in this file class _InternalHelper: """Helper used only within this module.""" pass # ❌ Wrong - no prefix but used across modules class DataFetcher: # Should be TibberPricesDataFetcher pass Note: Currently (Nov 2025), this project has NO private classes - all classes are used across module boundaries. Current Technical Debt: Many existing classes lack the TibberPrices prefix. Before refactoring: Document the plan in /planning/class-naming-refactoring.mdUse multi_replace_string_in_file for bulk renamesTest thoroughly after each module See AGENTS.md for complete list of classes needing rename. 
","version":"Next 🚧","tagName":"h3"},{"title":"Import Order","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#import-order","content":" Python stdlib (specific types only)Third-party (homeassistant.*, aiohttp)Local (.api, .const) ","version":"Next 🚧","tagName":"h2"},{"title":"Critical Patterns","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#critical-patterns","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Time Handling","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#time-handling","content":" Always use dt_util from homeassistant.util: from homeassistant.util import dt as dt_util price_time = dt_util.parse_datetime(starts_at) price_time = dt_util.as_local(price_time) # Convert to HA timezone now = dt_util.now() ","version":"Next 🚧","tagName":"h3"},{"title":"Translation Loading","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#translation-loading","content":" # In __init__.py async_setup_entry: await async_load_translations(hass, "en") await async_load_standard_translations(hass, "en") ","version":"Next 🚧","tagName":"h3"},{"title":"Price Data Enrichment","type":1,"pageTitle":"Coding Guidelines","url":"/hass.tibber_prices/developer/coding-guidelines#price-data-enrichment","content":" Always enrich raw API data: from .price_utils import enrich_price_info_with_differences enriched = enrich_price_info_with_differences( price_info_data, thresholds, ) See AGENTS.md for complete guidelines. 
","version":"Next 🚧","tagName":"h3"},{"title":"Contributing Guide","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/contributing","content":"","keywords":"","version":"Next 🚧"},{"title":"Getting Started","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#getting-started","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Prerequisites","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#prerequisites","content":" GitVS Code with Remote Containers extensionDocker Desktop ","version":"Next 🚧","tagName":"h3"},{"title":"Fork and Clone","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#fork-and-clone","content":" Fork the repository on GitHubClone your fork: git clone https://github.com/YOUR_USERNAME/hass.tibber_prices.git cd hass.tibber_prices Open in VS CodeClick "Reopen in Container" when prompted The DevContainer will set up everything automatically. ","version":"Next 🚧","tagName":"h3"},{"title":"Development Workflow","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#development-workflow","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. Create a Branch","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#1-create-a-branch","content":" git checkout -b feature/your-feature-name # or git checkout -b fix/issue-123-description Branch naming: feature/ - New featuresfix/ - Bug fixesdocs/ - Documentation onlyrefactor/ - Code restructuringtest/ - Test improvements ","version":"Next 🚧","tagName":"h3"},{"title":"2. Make Changes","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#2-make-changes","content":" Edit code, following Coding Guidelines. 
Run checks frequently: ./scripts/type-check # Pyright type checking ./scripts/lint # Ruff linting (auto-fix) ./scripts/test # Run tests ","version":"Next 🚧","tagName":"h3"},{"title":"3. Test Locally","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#3-test-locally","content":" ./scripts/develop # Start HA with integration loaded Access at http://localhost:8123 ","version":"Next 🚧","tagName":"h3"},{"title":"4. Write Tests","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#4-write-tests","content":" Add tests in /tests/ for new features: @pytest.mark.unit async def test_your_feature(hass, coordinator): """Test your new feature.""" # Arrange coordinator.data = {...} # Act result = your_function(coordinator.data) # Assert assert result == expected_value Run your test: ./scripts/test tests/test_your_feature.py -v ","version":"Next 🚧","tagName":"h3"},{"title":"5. Commit Changes","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#5-commit-changes","content":" Follow Conventional Commits: git add . git commit -m "feat(sensors): add volatility trend sensor Add new sensor showing 3-hour volatility trend direction. Includes attributes with historical volatility data. Impact: Users can predict when prices will stabilize or continue fluctuating." Commit types: feat: - New featurefix: - Bug fixdocs: - Documentationrefactor: - Code restructuringtest: - Test changeschore: - Maintenance Add scope when relevant: feat(sensors): - Sensor platformfix(coordinator): - Data coordinatordocs(user): - User documentation ","version":"Next 🚧","tagName":"h3"},{"title":"6. Push and Create PR","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#6-push-and-create-pr","content":" git push origin your-branch-name Then open Pull Request on GitHub. 
","version":"Next 🚧","tagName":"h3"},{"title":"Pull Request Guidelines","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#pull-request-guidelines","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"PR Template","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#pr-template","content":" Title: Short, descriptive (50 chars max) Description should include: ## What Brief description of changes ## Why Problem being solved or feature rationale ## How Implementation approach ## Testing - [ ] Manual testing in Home Assistant - [ ] Unit tests added/updated - [ ] Type checking passes - [ ] Linting passes ## Breaking Changes (If any - describe migration path) ## Related Issues Closes #123 ","version":"Next 🚧","tagName":"h3"},{"title":"PR Checklist","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#pr-checklist","content":" Before submitting: Code follows Coding Guidelines All tests pass (./scripts/test) Type checking passes (./scripts/type-check) Linting passes (./scripts/lint-check) Documentation updated (if needed) AGENTS.md updated (if patterns changed) Commit messages follow Conventional Commits ","version":"Next 🚧","tagName":"h3"},{"title":"Review Process","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#review-process","content":" Automated checks run (CI/CD)Maintainer review (usually within 3 days)Address feedback if requestedApproval → Maintainer merges ","version":"Next 🚧","tagName":"h3"},{"title":"Code Review Tips","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#code-review-tips","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"What Reviewers Look For","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#what-reviewers-look-for","content":" ✅ Good: Clear, self-explanatory codeAppropriate comments for 
complex logicTests covering edge casesType hints on all functionsFollows existing patterns ❌ Avoid: Large PRs (>500 lines) - split into smaller onesMixing unrelated changesMissing tests for new featuresBreaking changes without migration pathCopy-pasted code (refactor into shared functions) ","version":"Next 🚧","tagName":"h3"},{"title":"Responding to Feedback","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#responding-to-feedback","content":" Don't take it personally - we're improving code togetherAsk questions if feedback unclearPush additional commits to address commentsMark conversations as resolved when fixed ","version":"Next 🚧","tagName":"h3"},{"title":"Finding Issues to Work On","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#finding-issues-to-work-on","content":" Good first issues are labeled: good first issue - Beginner-friendlyhelp wanted - Maintainers welcome contributionsdocumentation - Docs improvements Comment on issue before starting work to avoid duplicates. ","version":"Next 🚧","tagName":"h2"},{"title":"Communication","type":1,"pageTitle":"Contributing Guide","url":"/hass.tibber_prices/developer/contributing#communication","content":" GitHub Issues - Bug reports, feature requestsPull Requests - Code discussionDiscussions - General questions, ideas Be respectful, constructive, and patient. We're all volunteers! 
🙏 💡 Related: Setup Guide - DevContainer setupCoding Guidelines - Style guideTesting - Writing testsRelease Management - How releases work ","version":"Next 🚧","tagName":"h2"},{"title":"Architecture","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/architecture","content":"","keywords":"","version":"Next 🚧"},{"title":"End-to-End Data Flow","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#end-to-end-data-flow","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Flow Description","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#flow-description","content":" Setup (__init__.py) Integration loads, creates coordinator instanceRegisters entity platforms (sensor, binary_sensor)Sets up custom services Data Fetch (every 15 minutes) Coordinator triggers update via api.pyAPI client checks persistent cache first (coordinator/cache.py)If cache valid → return cached dataIf cache stale → query Tibber GraphQL APIStore fresh data in persistent cache (survives HA restart) Price Enrichment Coordinator passes raw prices to DataTransformerTransformer checks transformation cache (memory)If cache valid → return enriched dataIf cache invalid → enrich via price_utils.py + average_utils.py Calculate 24h trailing/leading averagesCalculate price differences (% from average)Assign rating levels (LOW/NORMAL/HIGH) Store enriched data in transformation cache Period Calculation Coordinator passes enriched data to PeriodCalculatorCalculator computes hash from prices + configIf hash matches cache → return cached periodsIf hash differs → recalculate best/peak price periodsStore periods with new hash Entity Updates Coordinator provides complete data (prices + periods)Sensors read values via unified handlersBinary sensors evaluate period statesEntities update on quarter-hour boundaries (00/15/30/45) Service Calls Custom services access coordinator data directlyReturn formatted responses (JSON, ApexCharts 
format) ","version":"Next 🚧","tagName":"h3"},{"title":"Caching Architecture","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#caching-architecture","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Overview","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#overview","content":" The integration uses 5 independent caching layers for optimal performance: Layer\tLocation\tLifetime\tInvalidation\tMemoryAPI Cache\tcoordinator/cache.py\t24h (user) Until midnight (prices)\tAutomatic\t50KB Translation Cache\tconst.py\tUntil HA restart\tNever\t5KB Config Cache\tcoordinator/*\tUntil config change\tExplicit\t1KB Period Cache\tcoordinator/periods.py\tUntil data/config change\tHash-based\t10KB Transformation Cache\tcoordinator/data_transformation.py\tUntil midnight/config\tAutomatic\t60KB Total cache overhead: ~126KB per coordinator instance (main entry + subentries) ","version":"Next 🚧","tagName":"h3"},{"title":"Cache Coordination","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#cache-coordination","content":" Key insight: No cascading invalidations - each cache is independent and rebuilds on-demand. For detailed cache behavior, see Caching Strategy. 
","version":"Next 🚧","tagName":"h3"},{"title":"Component Responsibilities","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#component-responsibilities","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Core Components","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#core-components","content":" Component\tFile\tResponsibilityAPI Client\tapi.py\tGraphQL queries to Tibber, retry logic, error handling Coordinator\tcoordinator.py\tUpdate orchestration, cache management, absolute-time scheduling with boundary tolerance Data Transformer\tcoordinator/data_transformation.py\tPrice enrichment (averages, ratings, differences) Period Calculator\tcoordinator/periods.py\tBest/peak price period calculation with relaxation Sensors\tsensor/\t80+ entities for prices, levels, ratings, statistics Binary Sensors\tbinary_sensor/\tPeriod indicators (best/peak price active) Services\tservices/\tCustom service endpoints (get_chartdata, get_apexcharts_yaml, refresh_user_data) ","version":"Next 🚧","tagName":"h3"},{"title":"Sensor Architecture (Calculator Pattern)","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#sensor-architecture-calculator-pattern","content":" The sensor platform uses Calculator Pattern for clean separation of concerns (refactored Nov 2025): Component\tFiles\tLines\tResponsibilityEntity Class\tsensor/core.py\t909\tEntity lifecycle, coordinator, delegates to calculators Calculators\tsensor/calculators/\t1,838\tBusiness logic (8 specialized calculators) Attributes\tsensor/attributes/\t1,209\tState presentation (8 specialized modules) Routing\tsensor/value_getters.py\t276\tCentralized sensor → calculator mapping Chart Export\tsensor/chart_data.py\t144\tService call handling, YAML parsing Helpers\tsensor/helpers.py\t188\tAggregation functions, utilities Calculator Package (sensor/calculators/): base.py - Abstract BaseCalculator with coordinator 
accessinterval.py - Single interval calculations (current/next/previous)rolling_hour.py - 5-interval rolling windowsdaily_stat.py - Calendar day min/max/avg statisticswindow_24h.py - Trailing/leading 24h windowsvolatility.py - Price volatility analysistrend.py - Complex trend analysis with cachingtiming.py - Best/peak price period timingmetadata.py - Home/metering metadata Benefits: 58% reduction in core.py (2,170 → 909 lines)Clear separation: Calculators (logic) vs Attributes (presentation)Independent testability for each calculatorEasy to add sensors: Choose calculation pattern, add to routing ","version":"Next 🚧","tagName":"h3"},{"title":"Helper Utilities","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#helper-utilities","content":" Utility\tFile\tPurposePrice Utils\tutils/price.py\tRating calculation, enrichment, level aggregation Average Utils\tutils/average.py\tTrailing/leading 24h average calculations Entity Utils\tentity_utils/\tShared icon/color/attribute logic Translations\tconst.py\tTranslation loading and caching ","version":"Next 🚧","tagName":"h3"},{"title":"Key Patterns","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#key-patterns","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. Dual Translation System","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#1-dual-translation-system","content":" Standard translations (/translations/*.json): HA-compliant schema for entity namesCustom translations (/custom_translations/*.json): Extended descriptions, usage tipsBoth loaded at integration setup, cached in memoryAccess via get_translation() helper function ","version":"Next 🚧","tagName":"h3"},{"title":"2. 
Price Data Enrichment","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#2-price-data-enrichment","content":" All quarter-hourly price intervals get augmented via utils/price.py: # Original from Tibber API { "startsAt": "2025-11-03T14:00:00+01:00", "total": 0.2534, "level": "NORMAL" } # After enrichment (utils/price.py) { "startsAt": "2025-11-03T14:00:00+01:00", "total": 0.2534, "level": "NORMAL", "trailing_avg_24h": 0.2312, # ← Added: 24h trailing average "difference": 9.6, # ← Added: % diff from trailing avg "rating_level": "NORMAL" # ← Added: LOW/NORMAL/HIGH based on thresholds } ","version":"Next 🚧","tagName":"h3"},{"title":"3. Quarter-Hour Precision","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#3-quarter-hour-precision","content":" API polling: Every 15 minutes (coordinator fetch cycle)Entity updates: On 00/15/30/45-minute boundaries via coordinator/listeners.pyTimer scheduling: Uses async_track_utc_time_change(minute=[0, 15, 30, 45], second=0) HA may trigger ±few milliseconds before/after exact boundarySmart boundary tolerance (±2 seconds) handles scheduling jitter in sensor/helpers.pyIf HA schedules at 14:59:58 → rounds to 15:00:00 (shows new interval data)If HA restarts at 14:59:30 → stays at 14:45:00 (shows current interval data) Absolute time tracking: Timer plans for all future boundaries (not relative delays) Prevents double-updates (if triggered at 14:59:58, next trigger is 15:15:00, not 15:00:00) Result: Current price sensors update without waiting for next API poll ","version":"Next 🚧","tagName":"h3"},{"title":"4. 
Calculator Pattern (Sensor Platform)","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#4-calculator-pattern-sensor-platform","content":" Sensors organized by calculation method (refactored Nov 2025): Unified Handler Methods (sensor/core.py): _get_interval_value(offset, type) - current/next/previous intervals_get_rolling_hour_value(offset, type) - 5-interval rolling windows_get_daily_stat_value(day, stat_func) - calendar day min/max/avg_get_24h_window_value(stat_func) - trailing/leading statistics Routing (sensor/value_getters.py): Single source of truth mapping 80+ entity keys to calculator methodsOrganized by calculation type (Interval, Rolling Hour, Daily Stats, etc.) Calculators (sensor/calculators/): Each calculator inherits from BaseCalculator with coordinator accessFocused responsibility: IntervalCalculator, TrendCalculator, etc.Complex logic isolated (e.g., TrendCalculator has internal caching) Attributes (sensor/attributes/): Separate from business logic, handles state presentationBuilds extra_state_attributes dicts for entity classesUnified builders: build_sensor_attributes(), build_extra_state_attributes() Benefits: Minimal code duplication across 80+ sensorsClear separation of concerns (calculation vs presentation)Easy to extend: Add sensor → choose pattern → add to routingIndependent testability for each component ","version":"Next 🚧","tagName":"h3"},{"title":"Performance Characteristics","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#performance-characteristics","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"API Call Reduction","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#api-call-reduction","content":" Without caching: 96 API calls/day (every 15 min)With caching: ~1-2 API calls/day (only when cache expires)Reduction: ~98% ","version":"Next 🚧","tagName":"h3"},{"title":"CPU 
Optimization","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#cpu-optimization","content":" Optimization\tLocation\tSavingsConfig caching\tcoordinator/*\t~50% on config checks Period caching\tcoordinator/periods.py\t~70% on period recalculation Lazy logging\tThroughout\t~15% on log-heavy operations Import optimization\tModule structure\t~20% faster loading ","version":"Next 🚧","tagName":"h3"},{"title":"Memory Usage","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#memory-usage","content":" Per coordinator instance: ~126KB cache overheadTypical setup: 1 main + 2 subentries = ~378KB totalRedundancy eliminated: 14% reduction (10KB saved per coordinator) ","version":"Next 🚧","tagName":"h3"},{"title":"Related Documentation","type":1,"pageTitle":"Architecture","url":"/hass.tibber_prices/developer/architecture#related-documentation","content":" Timer Architecture - Timer system, scheduling, coordination (3 independent timers)Caching Strategy - Detailed cache behavior, invalidation, debuggingSetup Guide - Development environment setupTesting Guide - How to test changesRelease Management - Release workflow and versioningAGENTS.md - Complete reference for AI development ","version":"Next 🚧","tagName":"h2"},{"title":"Caching Strategy","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/caching-strategy","content":"","keywords":"","version":"Next 🚧"},{"title":"Overview","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#overview","content":" The integration uses 4 distinct caching layers with different purposes and lifetimes: Persistent API Data Cache (HA Storage) - Hours to daysTranslation Cache (Memory) - Forever (until HA restart)Config Dictionary Cache (Memory) - Until config changesPeriod Calculation Cache (Memory) - Until price data or config changes ","version":"Next 🚧","tagName":"h2"},{"title":"1. 
Persistent API Data Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#1-persistent-api-data-cache","content":" Location: coordinator/cache.py → HA Storage (.storage/tibber_prices.<entry_id>) Purpose: Reduce API calls to Tibber by caching user data and price data between HA restarts. What is cached: Price data (price_data): Day before yesterday/yesterday/today/tomorrow price intervals with enriched fields (384 intervals total)User data (user_data): Homes, subscriptions, features from Tibber GraphQL viewer queryTimestamps: Last update times for validation Lifetime: Price data: Until midnight turnover (cleared daily at 00:00 local time)User data: 24 hours (refreshed daily)Survives: HA restarts via persistent Storage Invalidation triggers: Midnight turnover (Timer #2 in coordinator): # coordinator/day_transitions.py def _handle_midnight_turnover() -> None: self._cached_price_data = None # Force fresh fetch for new day self._last_price_update = None await self.store_cache() Cache validation on load: # coordinator/cache.py def is_cache_valid(cache_data: CacheData) -> bool: # Checks if price data is from a previous day if today_date < local_now.date(): # Yesterday's data return False Tomorrow data check (after 13:00): # coordinator/data_fetching.py if tomorrow_missing or tomorrow_invalid: return "tomorrow_check" # Update needed Why this cache matters: Reduces API load on Tibber (~192 intervals per fetch), speeds up HA restarts, enables offline operation until cache expires. ","version":"Next 🚧","tagName":"h2"},{"title":"2. Translation Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#2-translation-cache","content":" Location: const.py → _TRANSLATIONS_CACHE and _STANDARD_TRANSLATIONS_CACHE (in-memory dicts) Purpose: Avoid repeated file I/O when accessing entity descriptions, UI strings, etc. 
What is cached: Standard translations (/translations/*.json): Config flow, selector options, entity namesCustom translations (/custom_translations/*.json): Entity descriptions, usage tips, long descriptions Lifetime: Forever (until HA restart)No invalidation during runtime When populated: At integration setup: async_load_translations(hass, "en") in __init__.pyLazy loading: If translation missing, attempts file load once Access pattern: # Non-blocking synchronous access from cached data description = get_translation("binary_sensor.best_price_period.description", "en") Why this cache matters: Entity attributes are accessed on every state update (~15 times per hour per entity). File I/O would block the event loop. Cache enables synchronous, non-blocking attribute generation. ","version":"Next 🚧","tagName":"h2"},{"title":"3. Config Dictionary Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#3-config-dictionary-cache","content":" Location: coordinator/data_transformation.py and coordinator/periods.py (per-instance fields) Purpose: Avoid ~30-40 options.get() calls on every coordinator update (every 15 minutes). What is cached: ","version":"Next 🚧","tagName":"h2"},{"title":"DataTransformer Config Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#datatransformer-config-cache","content":" { "thresholds": {"low": 15, "high": 35}, "volatility_thresholds": {"moderate": 15.0, "high": 25.0, "very_high": 40.0}, # ... 
20+ more config fields } ","version":"Next 🚧","tagName":"h3"},{"title":"PeriodCalculator Config Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#periodcalculator-config-cache","content":" { "best": {"flex": 0.15, "min_distance_from_avg": 5.0, "min_period_length": 60}, "peak": {"flex": 0.15, "min_distance_from_avg": 5.0, "min_period_length": 60} } Lifetime: Until invalidate_config_cache() is calledBuilt once on first use per coordinator update cycle Invalidation trigger: Options change (user reconfigures integration): # coordinator/core.py async def _handle_options_update(...) -> None: self._data_transformer.invalidate_config_cache() self._period_calculator.invalidate_config_cache() await self.async_request_refresh() Performance impact: Before: ~30 dict lookups + type conversions per update = ~50μsAfter: 1 cache check = ~1μsSavings: ~98% (50μs → 1μs per update) Why this cache matters: Config is read multiple times per update (transformation + period calculation + validation). Caching eliminates redundant lookups without changing behavior. ","version":"Next 🚧","tagName":"h3"},{"title":"4. Period Calculation Cache","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#4-period-calculation-cache","content":" Location: coordinator/periods.py → PeriodCalculator._cached_periods Purpose: Avoid expensive period calculations (~100-500ms) when price data and config haven't changed. 
What is cached: { "best_price": { "periods": [...], # Calculated period objects "intervals": [...], # All intervals in periods "metadata": {...} # Config snapshot }, "best_price_relaxation": {"relaxation_active": bool, ...}, "peak_price": {...}, "peak_price_relaxation": {...} } Cache key: Hash of relevant inputs hash_data = ( today_signature, # (startsAt, rating_level) for each interval tuple(best_config.items()), # Best price config tuple(peak_config.items()), # Peak price config best_level_filter, # Level filter overrides peak_level_filter ) Lifetime: Until price data changes (today's intervals modified)Until config changes (flex, thresholds, filters)Recalculated at midnight (new today data) Invalidation triggers: Config change (explicit): def invalidate_config_cache() -> None: self._cached_periods = None self._last_periods_hash = None Price data change (automatic via hash mismatch): current_hash = self._compute_periods_hash(price_info) if self._last_periods_hash != current_hash: # Cache miss - recalculate Cache hit rate: High: During normal operation (coordinator updates every 15min, price data unchanged)Low: After midnight (new today data) or when tomorrow data arrives (~13:00-14:00) Performance impact: Period calculation: ~100-500ms (depends on interval count, relaxation attempts)Cache hit: <1ms (hash comparison + dict lookup)Savings: ~70% of calculation time (most updates hit cache) Why this cache matters: Period calculation is CPU-intensive (filtering, gap tolerance, relaxation). Caching avoids recalculating unchanged periods 3-4 times per hour. ","version":"Next 🚧","tagName":"h2"},{"title":"5. 
Transformation Cache (Price Enrichment Only)","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#5-transformation-cache-price-enrichment-only","content":" Location: coordinator/data_transformation.py → _cached_transformed_data Status: ✅ Clean separation - enrichment only, no redundancy What is cached: { "timestamp": ..., "homes": {...}, "priceInfo": {...}, # Enriched price data (trailing_avg_24h, difference, rating_level) # NO periods - periods are exclusively managed by PeriodCalculator } Purpose: Avoid re-enriching price data when config unchanged between midnight checks. Current behavior: Caches only enriched price data (price + statistics)Does NOT cache periods (handled by Period Calculation Cache)Invalidated when: Config changes (thresholds affect enrichment)Midnight turnover detectedNew update cycle begins Architecture: DataTransformer: Handles price enrichment onlyPeriodCalculator: Handles period calculation only (with hash-based cache)Coordinator: Assembles final data on-demand from both caches Memory savings: Eliminating redundant period storage saves ~10KB per coordinator (14% reduction). 
","version":"Next 🚧","tagName":"h2"},{"title":"Cache Invalidation Flow","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#cache-invalidation-flow","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"User Changes Options (Config Flow)","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#user-changes-options-config-flow","content":" User saves options ↓ config_entry.add_update_listener() triggers ↓ coordinator._handle_options_update() ↓ ├─> DataTransformer.invalidate_config_cache() │ └─> _config_cache = None │ _config_cache_valid = False │ _cached_transformed_data = None │ └─> PeriodCalculator.invalidate_config_cache() └─> _config_cache = None _config_cache_valid = False _cached_periods = None _last_periods_hash = None ↓ coordinator.async_request_refresh() ↓ Fresh data fetch with new config ","version":"Next 🚧","tagName":"h3"},{"title":"Midnight Turnover (Day Transition)","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#midnight-turnover-day-transition","content":" Timer #2 fires at 00:00 ↓ coordinator._handle_midnight_turnover() ↓ ├─> Clear persistent cache │ └─> _cached_price_data = None │ _last_price_update = None │ └─> Clear transformation cache └─> _cached_transformed_data = None _last_transformation_config = None ↓ Period cache auto-invalidates (hash mismatch on new "today") ↓ Fresh API fetch for new day ","version":"Next 🚧","tagName":"h3"},{"title":"Tomorrow Data Arrives (~13:00)","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#tomorrow-data-arrives-1300","content":" Coordinator update cycle ↓ should_update_price_data() checks tomorrow ↓ Tomorrow data missing/invalid ↓ API fetch with new tomorrow data ↓ Price data hash changes (new intervals) ↓ Period cache auto-invalidates (hash mismatch) ↓ Periods recalculated with tomorrow included ","version":"Next 
🚧","tagName":"h3"},{"title":"Cache Coordination","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#cache-coordination","content":" All caches work together: Persistent Storage (HA restart) ↓ API Data Cache (price_data, user_data) ↓ ├─> Enrichment (add rating_level, difference, etc.) │ ↓ │ Transformation Cache (_cached_transformed_data) │ └─> Period Calculation ↓ Period Cache (_cached_periods) ↓ Config Cache (avoid re-reading options) ↓ Translation Cache (entity descriptions) No cache invalidation cascades: Config cache invalidation is explicit (on options update)Period cache invalidation is automatic (via hash mismatch)Transformation cache invalidation is automatic (on midnight/config change)Translation cache is never invalidated (read-only after load) Thread safety: All caches are accessed from MainThread only (Home Assistant event loop)No locking needed (single-threaded execution model) ","version":"Next 🚧","tagName":"h2"},{"title":"Performance Characteristics","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#performance-characteristics","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Typical Operation (No Changes)","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#typical-operation-no-changes","content":" Coordinator Update (every 15 min) ├─> API fetch: SKIP (cache valid) ├─> Config dict build: ~1μs (cached) ├─> Period calculation: ~1ms (cached, hash match) ├─> Transformation: ~10ms (enrichment only, periods cached) └─> Entity updates: ~5ms (translation cache hit) Total: ~16ms (down from ~600ms without caching) ","version":"Next 🚧","tagName":"h3"},{"title":"After Midnight Turnover","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#after-midnight-turnover","content":" Coordinator Update (00:00) ├─> API fetch: ~500ms (cache cleared, fetch new day) ├─> Config dict build: ~50μs 
(rebuild, no cache) ├─> Period calculation: ~200ms (cache miss, recalculate) ├─> Transformation: ~50ms (re-enrich, rebuild) └─> Entity updates: ~5ms (translation cache still valid) Total: ~755ms (expected once per day) ","version":"Next 🚧","tagName":"h3"},{"title":"After Config Change","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#after-config-change","content":" Options Update ├─> Cache invalidation: `<`1ms ├─> Coordinator refresh: ~600ms │ ├─> API fetch: SKIP (data unchanged) │ ├─> Config rebuild: ~50μs │ ├─> Period recalculation: ~200ms (new thresholds) │ ├─> Re-enrichment: ~50ms │ └─> Entity updates: ~5ms └─> Total: ~600ms (expected on manual reconfiguration) ","version":"Next 🚧","tagName":"h3"},{"title":"Summary Table","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#summary-table","content":" Cache Type\tLifetime\tSize\tInvalidation\tPurposeAPI Data\tHours to 1 day\t~50KB\tMidnight, validation\tReduce API calls Translations\tForever (until HA restart)\t~5KB\tNever\tAvoid file I/O Config Dicts\tUntil options change\t<1KB\tExplicit (options update)\tAvoid dict lookups Period Calculation\tUntil data/config change\t~10KB\tAuto (hash mismatch)\tAvoid CPU-intensive calculation Transformation\tUntil midnight/config change\t~50KB\tAuto (midnight/config)\tAvoid re-enrichment Total memory overhead: ~116KB per coordinator instance (main + subentries) Benefits: 97% reduction in API calls (from every 15min to once per day)70% reduction in period calculation time (cache hits during normal operation)98% reduction in config access time (30+ lookups → 1 cache check)Zero file I/O during runtime (translations cached at startup) Trade-offs: Memory usage: ~116KB per home (negligible for modern systems)Code complexity: 5 cache invalidation points (well-tested, documented)Debugging: Must understand cache lifetime when investigating stale data issues ","version":"Next 
🚧","tagName":"h2"},{"title":"Debugging Cache Issues","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#debugging-cache-issues","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Symptom: Stale data after config change","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#symptom-stale-data-after-config-change","content":" Check: Is _handle_options_update() called? (should see "Options updated" log)Are invalidate_config_cache() methods executed?Does async_request_refresh() trigger? Fix: Ensure config_entry.add_update_listener() is registered in coordinator init. ","version":"Next 🚧","tagName":"h3"},{"title":"Symptom: Period calculation not updating","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#symptom-period-calculation-not-updating","content":" Check: Verify hash changes when data changes: _compute_periods_hash()Check _last_periods_hash vs current_hashLook for "Using cached period calculation" vs "Calculating periods" logs Fix: Hash function may not include all relevant data. Review _compute_periods_hash() inputs. ","version":"Next 🚧","tagName":"h3"},{"title":"Symptom: Yesterday's prices shown as today","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#symptom-yesterdays-prices-shown-as-today","content":" Check: is_cache_valid() logic in coordinator/cache.pyMidnight turnover execution (Timer #2)Cache clear confirmation in logs Fix: Timer may not be firing. Check _schedule_midnight_turnover() registration. 
","version":"Next 🚧","tagName":"h3"},{"title":"Symptom: Missing translations","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#symptom-missing-translations","content":" Check: async_load_translations() called at startup?Translation files exist in /translations/ and /custom_translations/?Cache population: _TRANSLATIONS_CACHE keys Fix: Re-install integration or restart HA to reload translation files. ","version":"Next 🚧","tagName":"h3"},{"title":"Related Documentation","type":1,"pageTitle":"Caching Strategy","url":"/hass.tibber_prices/developer/caching-strategy#related-documentation","content":" Timer Architecture - Timer system, scheduling, midnight coordinationArchitecture - Overall system design, data flowAGENTS.md - Complete reference for AI development ","version":"Next 🚧","tagName":"h2"},{"title":"Critical Behavior Patterns - Testing Guide","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/critical-patterns","content":"","keywords":"","version":"Next 🚧"},{"title":"🎯 Why Are These Tests Critical?","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-why-are-these-tests-critical","content":" Home Assistant integrations run continuously in the background. Resource leaks lead to: Memory Leaks: RAM usage grows over days/weeks until HA becomes unstableCallback Leaks: Listeners remain registered after entity removal → CPU load increasesTimer Leaks: Timers continue running after unload → unnecessary background tasksFile Handle Leaks: Storage files remain open → system resources exhausted ","version":"Next 🚧","tagName":"h2"},{"title":"✅ Test Categories","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-test-categories","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. 
Resource Cleanup (Memory Leak Prevention)","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#1-resource-cleanup-memory-leak-prevention","content":" File: tests/test_resource_cleanup.py 1.1 Listener Cleanup ✅​ What is tested: Time-sensitive listeners are correctly removed (async_add_time_sensitive_listener())Minute-update listeners are correctly removed (async_add_minute_update_listener())Lifecycle callbacks are correctly unregistered (register_lifecycle_callback())Sensor cleanup removes ALL registered listenersBinary sensor cleanup removes ALL registered listeners Why critical: Each registered listener holds references to Entity + CoordinatorWithout cleanup: Entities are not freed by GC → Memory LeakWith 80+ sensors × 3 listener types = 240+ callbacks that must be cleanly removed Code Locations: coordinator/listeners.py → async_add_time_sensitive_listener(), async_add_minute_update_listener()coordinator/core.py → register_lifecycle_callback()sensor/core.py → async_will_remove_from_hass()binary_sensor/core.py → async_will_remove_from_hass() 1.2 Timer Cleanup ✅​ What is tested: Quarter-hour timer is cancelled and reference clearedMinute timer is cancelled and reference clearedBoth timers are cancelled togetherCleanup works even when timers are None Why critical: Uncancelled timers continue running after integration unloadHA's async_track_utc_time_change() creates persistent callbacksWithout cleanup: Timers keep firing → CPU load + unnecessary coordinator updates Code Locations: coordinator/listeners.py → cancel_timers()coordinator/core.py → async_shutdown() 1.3 Config Entry Cleanup ✅​ What is tested: Options update listener is registered via async_on_unload()Cleanup function is correctly passed to async_on_unload() Why critical: entry.add_update_listener() registers permanent callbackWithout async_on_unload(): Listener remains active after reload → duplicate updatesPattern: 
entry.async_on_unload(entry.add_update_listener(handler)) Code Locations: coordinator/core.py → __init__() (listener registration)__init__.py → async_unload_entry() ","version":"Next 🚧","tagName":"h3"},{"title":"2. Cache Invalidation ✅​","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#2-cache-invalidation-","content":" File: tests/test_resource_cleanup.py 2.1 Config Cache Invalidation What is tested: DataTransformer config cache is invalidated on options changePeriodCalculator config + period cache is invalidatedTrend calculator cache is cleared on coordinator update Why critical: Stale config → Sensors use old user settingsStale period cache → Incorrect best/peak price periodsStale trend cache → Outdated trend analysis Code Locations: coordinator/data_transformation.py → invalidate_config_cache()coordinator/periods.py → invalidate_config_cache()sensor/calculators/trend.py → clear_trend_cache() ","version":"Next 🚧","tagName":"h3"},{"title":"3. Storage Cleanup ✅​","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#3-storage-cleanup-","content":" File: tests/test_resource_cleanup.py + tests/test_coordinator_shutdown.py 3.1 Persistent Storage Removal What is tested: Storage file is deleted on config entry removalCache is saved on shutdown (no data loss) Why critical: Without storage removal: Old files remain after uninstallationWithout cache save on shutdown: Data loss on HA restartStorage path: .storage/tibber_prices.{entry_id} Code Locations: __init__.py → async_remove_entry()coordinator/core.py → async_shutdown() ","version":"Next 🚧","tagName":"h3"},{"title":"4. 
Timer Scheduling ✅​","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#4-timer-scheduling-","content":" File: tests/test_timer_scheduling.py What is tested: Quarter-hour timer is registered with correct parametersMinute timer is registered with correct parametersTimers can be re-scheduled (override old timer)Midnight turnover detection works correctly Why critical: Wrong timer parameters → Entities update at wrong timesWithout timer override on re-schedule → Multiple parallel timers → Performance problem ","version":"Next 🚧","tagName":"h3"},{"title":"5. Sensor-to-Timer Assignment ✅​","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#5-sensor-to-timer-assignment-","content":" File: tests/test_sensor_timer_assignment.py What is tested: All TIME_SENSITIVE_ENTITY_KEYS are valid entity keysAll MINUTE_UPDATE_ENTITY_KEYS are valid entity keysBoth lists are disjoint (no overlap)Sensor and binary sensor platforms are checked Why critical: Wrong timer assignment → Sensors update at wrong timesOverlap → Duplicate updates → Performance problem ","version":"Next 🚧","tagName":"h3"},{"title":"🚨 Additional Analysis (Nice-to-Have Patterns)","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-additional-analysis-nice-to-have-patterns","content":" These patterns were analyzed and classified as not critical: ","version":"Next 🚧","tagName":"h2"},{"title":"6. 
Async Task Management","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#6-async-task-management","content":" Current Status: Fire-and-forget pattern for short tasks sensor/core.py → Chart data refresh (short-lived, max 1-2 seconds)coordinator/core.py → Cache storage (short-lived, max 100ms) Why no tests needed: No long-running tasks (all < 2 seconds)HA's event loop handles short tasks automaticallyTask exceptions are already logged If needed: _chart_refresh_task tracking + cancel in async_will_remove_from_hass() ","version":"Next 🚧","tagName":"h3"},{"title":"7. API Session Cleanup","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#7-api-session-cleanup","content":" Current Status: ✅ Correctly implemented async_get_clientsession(hass) is used (shared session)No new sessions are createdHA manages session lifecycle automatically Code: api/client.py + __init__.py ","version":"Next 🚧","tagName":"h3"},{"title":"8. Translation Cache Memory","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#8-translation-cache-memory","content":" Current Status: ✅ Bounded cache Max ~5-10 languages × 5KB = 50KB totalModule-level cache without re-loadingPractically no memory issue Code: const.py → _TRANSLATIONS_CACHE, _STANDARD_TRANSLATIONS_CACHE ","version":"Next 🚧","tagName":"h3"},{"title":"9. 
Coordinator Data Structure Integrity","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#9-coordinator-data-structure-integrity","content":" Current Status: Manually tested via ./scripts/develop Midnight turnover works correctly (observed over several days)Missing keys are handled via .get() with defaults80+ sensors access coordinator.data without errors Structure: coordinator.data = { "user_data": {...}, "priceInfo": [...], # Flat list of all enriched intervals "currency": "EUR" # Top-level for easy access } ","version":"Next 🚧","tagName":"h3"},{"title":"10. Service Response Memory","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#10-service-response-memory","content":" Current Status: HA's response lifecycle HA automatically frees service responses after returnApexCharts ~20KB response is one-time per callNo response accumulation in integration code Code: services/apexcharts.py ","version":"Next 🚧","tagName":"h3"},{"title":"📊 Test Coverage Status","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-test-coverage-status","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"✅ Implemented Tests (41 total)","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-implemented-tests-41-total","content":" Category\tStatus\tTests\tFile\tCoverageListener Cleanup\t✅\t5\ttest_resource_cleanup.py\t100% Timer Cleanup\t✅\t4\ttest_resource_cleanup.py\t100% Config Entry Cleanup\t✅\t1\ttest_resource_cleanup.py\t100% Cache Invalidation\t✅\t3\ttest_resource_cleanup.py\t100% Storage Cleanup\t✅\t1\ttest_resource_cleanup.py\t100% Storage Persistence\t✅\t2\ttest_coordinator_shutdown.py\t100% Timer Scheduling\t✅\t8\ttest_timer_scheduling.py\t100% Sensor-Timer 
Assignment\t✅\t17\ttest_sensor_timer_assignment.py\t100% TOTAL\t✅\t41 100% (critical) ","version":"Next 🚧","tagName":"h3"},{"title":"📋 Analyzed but Not Implemented (Nice-to-Have)","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-analyzed-but-not-implemented-nice-to-have","content":" Category\tStatus\tRationaleAsync Task Management\t📋\tFire-and-forget pattern used (no long-running tasks) API Session Cleanup\t✅\tPattern correct (async_get_clientsession used) Translation Cache\t✅\tCache size bounded (~50KB max for 10 languages) Data Structure Integrity\t📋\tWould add test time without finding real issues Service Response Memory\t📋\tHA automatically frees service responses Legend: ✅ = Fully tested or pattern verified correct📋 = Analyzed, low priority for testing (no known issues) ","version":"Next 🚧","tagName":"h3"},{"title":"🎯 Development Status","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-development-status","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"✅ All Critical Patterns Tested","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-all-critical-patterns-tested","content":" All essential memory leak prevention patterns are covered by 41 tests: ✅ Listeners are correctly removed (no callback leaks)✅ Timers are cancelled (no background task leaks)✅ Config entry cleanup works (no dangling listeners)✅ Caches are invalidated (no stale data issues)✅ Storage is saved and cleaned up (no data loss)✅ Timer scheduling works correctly (no update issues)✅ Sensor-timer assignment is correct (no wrong updates) ","version":"Next 🚧","tagName":"h3"},{"title":"📋 Nice-to-Have Tests (Optional)","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-nice-to-have-tests-optional","content":" If 
problems arise in the future, these tests can be added: Async Task Management - Pattern analyzed (fire-and-forget for short tasks)Data Structure Integrity - Midnight rotation manually testedService Response Memory - HA's response lifecycle automatic Conclusion: The integration has production-quality test coverage for all critical resource leak patterns. ","version":"Next 🚧","tagName":"h3"},{"title":"🔍 How to Run Tests","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-how-to-run-tests","content":" # Run all resource cleanup tests (14 tests) ./scripts/test tests/test_resource_cleanup.py -v # Run all critical pattern tests (41 tests) ./scripts/test tests/test_resource_cleanup.py tests/test_coordinator_shutdown.py \\ tests/test_timer_scheduling.py tests/test_sensor_timer_assignment.py -v # Run all tests with coverage ./scripts/test --cov=custom_components.tibber_prices --cov-report=html # Type checking and linting ./scripts/check # Manual memory leak test # 1. Start HA: ./scripts/develop # 2. Monitor RAM: watch -n 1 'ps aux | grep home-assistant' # 3. Reload integration multiple times (HA UI: Settings → Devices → Tibber Prices → Reload) # 4. 
RAM should stabilize (not grow continuously) ","version":"Next 🚧","tagName":"h2"},{"title":"📚 References","type":1,"pageTitle":"Critical Behavior Patterns - Testing Guide","url":"/hass.tibber_prices/developer/critical-patterns#-references","content":" Home Assistant Cleanup Patterns: https://developers.home-assistant.io/docs/integration_setup_failures/#cleanupAsync Best Practices: https://developers.home-assistant.io/docs/asyncio_101/Memory Profiling: https://docs.python.org/3/library/tracemalloc.html ","version":"Next 🚧","tagName":"h2"},{"title":"API Reference","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/api-reference","content":"","keywords":"","version":"Next 🚧"},{"title":"GraphQL Endpoint","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#graphql-endpoint","content":" https://api.tibber.com/v1-beta/gql Authentication: Bearer token in Authorization header ","version":"Next 🚧","tagName":"h2"},{"title":"Queries Used","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#queries-used","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"User Data Query","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#user-data-query","content":" Fetches home information and metadata: query { viewer { homes { id appNickname address { address1 postalCode city country } timeZone currentSubscription { priceInfo { current { currency } } } meteringPointData { consumptionEan gridAreaCode } } } } Cached for: 24 hours ","version":"Next 🚧","tagName":"h3"},{"title":"Price Data Query","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#price-data-query","content":" Fetches quarter-hourly prices: query($homeId: ID!) 
{ viewer { home(id: $homeId) { currentSubscription { priceInfo { range(resolution: QUARTER_HOURLY, first: 384) { nodes { total startsAt level } } } } } } } Parameters: homeId: Tibber home identifierresolution: Always QUARTER_HOURLYfirst: 384 intervals (4 days of data) Cached until: Midnight local time ","version":"Next 🚧","tagName":"h3"},{"title":"Rate Limits","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#rate-limits","content":" Tibber API rate limits (as of 2024): 5000 requests per hour per tokenBurst limit: 100 requests per minute Integration stays well below these limits: Polls every 15 minutes = 96 requests/dayUser data cached for 24h = 1 request/dayTotal: ~100 requests/day per home ","version":"Next 🚧","tagName":"h2"},{"title":"Response Format","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#response-format","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Price Node Structure","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#price-node-structure","content":" { "total": 0.2456, "startsAt": "2024-12-06T14:00:00.000+01:00", "level": "NORMAL" } Fields: total: Price including VAT and fees (currency's major unit, e.g., EUR)startsAt: ISO 8601 timestamp with timezonelevel: Tibber's own classification (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE) ","version":"Next 🚧","tagName":"h3"},{"title":"Currency Information","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#currency-information","content":" { "currency": "EUR" } Supported currencies: EUR (Euro) - displayed as ct/kWhNOK (Norwegian Krone) - displayed as øre/kWhSEK (Swedish Krona) - displayed as öre/kWh ","version":"Next 🚧","tagName":"h3"},{"title":"Error Handling","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#error-handling","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Common Error 
Responses","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#common-error-responses","content":" Invalid Token: { "errors": [{ "message": "Unauthorized", "extensions": { "code": "UNAUTHENTICATED" } }] } Rate Limit Exceeded: { "errors": [{ "message": "Too Many Requests", "extensions": { "code": "RATE_LIMIT_EXCEEDED" } }] } Home Not Found: { "errors": [{ "message": "Home not found", "extensions": { "code": "NOT_FOUND" } }] } Integration handles these with: Exponential backoff retry (3 attempts)ConfigEntryAuthFailed for auth errorsConfigEntryNotReady for temporary failures ","version":"Next 🚧","tagName":"h3"},{"title":"Data Transformation","type":1,"pageTitle":"API Reference","url":"/hass.tibber_prices/developer/api-reference#data-transformation","content":" Raw API data is enriched with: Trailing 24h average - Calculated from previous intervalsLeading 24h average - Calculated from future intervalsPrice difference % - Deviation from averageCustom rating - Based on user thresholds (different from Tibber's level) See utils/price.py for enrichment logic. 💡 External Resources: Tibber API DocumentationGraphQL ExplorerGet API Token ","version":"Next 🚧","tagName":"h2"},{"title":"Debugging Guide","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/debugging","content":"","keywords":"","version":"Next 🚧"},{"title":"Logging","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#logging","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Enable Debug Logging","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#enable-debug-logging","content":" Add to configuration.yaml: logger: default: info logs: custom_components.tibber_prices: debug Restart Home Assistant to apply. 
","version":"Next 🚧","tagName":"h3"},{"title":"Key Log Messages","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#key-log-messages","content":" Coordinator Updates: [custom_components.tibber_prices.coordinator] Successfully fetched price data [custom_components.tibber_prices.coordinator] Cache valid, using cached data [custom_components.tibber_prices.coordinator] Midnight turnover detected, clearing cache Period Calculation: [custom_components.tibber_prices.coordinator.periods] Calculating BEST PRICE periods: flex=15.0% [custom_components.tibber_prices.coordinator.periods] Day 2024-12-06: Found 2 periods [custom_components.tibber_prices.coordinator.periods] Period 1: 02:00-05:00 (12 intervals) API Errors: [custom_components.tibber_prices.api] API request failed: Unauthorized [custom_components.tibber_prices.api] Retrying (attempt 2/3) after 2.0s ","version":"Next 🚧","tagName":"h3"},{"title":"VS Code Debugging","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#vs-code-debugging","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Launch Configuration","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#launch-configuration","content":" .vscode/launch.json: { "version": "0.2.0", "configurations": [ { "name": "Home Assistant", "type": "debugpy", "request": "launch", "module": "homeassistant", "args": ["-c", "config", "--debug"], "justMyCode": false, "env": { "PYTHONPATH": "${workspaceFolder}/.venv/lib/python3.13/site-packages" } } ] } ","version":"Next 🚧","tagName":"h3"},{"title":"Set Breakpoints","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#set-breakpoints","content":" Coordinator update: # coordinator/core.py async def _async_update_data(self) -> dict: """Fetch data from API.""" breakpoint() # Or set VS Code breakpoint Period calculation: # coordinator/period_handlers/core.py def calculate_periods(...) 
-> list[dict]: """Calculate best/peak price periods.""" breakpoint() ","version":"Next 🚧","tagName":"h3"},{"title":"pytest Debugging","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#pytest-debugging","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Run Single Test with Output","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#run-single-test-with-output","content":" .venv/bin/python -m pytest tests/test_period_calculation.py::test_midnight_crossing -v -s Flags: -v - Verbose output-s - Show print statements-k pattern - Run tests matching pattern ","version":"Next 🚧","tagName":"h3"},{"title":"Debug Test in VS Code","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#debug-test-in-vs-code","content":" Set breakpoint in test file, use "Debug Test" CodeLens. ","version":"Next 🚧","tagName":"h3"},{"title":"Useful Test Patterns","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#useful-test-patterns","content":" Print coordinator data: def test_something(coordinator): print(f"Coordinator data: {coordinator.data}") print(f"Price info count: {len(coordinator.data['priceInfo'])}") Inspect period attributes: def test_periods(hass, coordinator): periods = coordinator.data.get('best_price_periods', []) for period in periods: print(f"Period: {period['start']} to {period['end']}") print(f" Intervals: {len(period['intervals'])}") ","version":"Next 🚧","tagName":"h3"},{"title":"Common Issues","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#common-issues","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Integration Not Loading","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#integration-not-loading","content":" Check: grep "tibber_prices" config/home-assistant.log Common causes: Syntax error in Python code → Check logs for tracebackMissing dependency 
→ Run uv syncWrong file permissions → chmod +x scripts/* ","version":"Next 🚧","tagName":"h3"},{"title":"Sensors Not Updating","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#sensors-not-updating","content":" Check coordinator state: # In Developer Tools > Template {{ states.sensor.tibber_home_current_interval_price.last_updated }} Debug in code: # Add logging in sensor/core.py _LOGGER.debug("Updating sensor %s: old=%s new=%s", self.entity_id, self._attr_native_value, new_value) ","version":"Next 🚧","tagName":"h3"},{"title":"Period Calculation Wrong","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#period-calculation-wrong","content":" Enable detailed period logs: # coordinator/period_handlers/period_building.py _LOGGER.debug("Candidate intervals: %s", [(i['startsAt'], i['total']) for i in candidates]) Check filter statistics: [period_building] Flex filter blocked: 45 intervals [period_building] Min distance blocked: 12 intervals [period_building] Level filter blocked: 8 intervals ","version":"Next 🚧","tagName":"h3"},{"title":"Performance Profiling","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#performance-profiling","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Time Execution","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#time-execution","content":" import time start = time.perf_counter() result = expensive_function() duration = time.perf_counter() - start _LOGGER.debug("Function took %.3fs", duration) ","version":"Next 🚧","tagName":"h3"},{"title":"Memory Usage","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#memory-usage","content":" import tracemalloc tracemalloc.start() # ... your code ... 
current, peak = tracemalloc.get_traced_memory() _LOGGER.debug("Memory: current=%d peak=%d", current, peak) tracemalloc.stop() ","version":"Next 🚧","tagName":"h3"},{"title":"Profile with cProfile","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#profile-with-cprofile","content":" python -m cProfile -o profile.stats -m homeassistant -c config python -m pstats profile.stats # Then: sort cumtime, stats 20 ","version":"Next 🚧","tagName":"h3"},{"title":"Live Debugging in Running HA","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#live-debugging-in-running-ha","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Remote Debugging with debugpy","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#remote-debugging-with-debugpy","content":" Add to coordinator code: import debugpy debugpy.listen(5678) _LOGGER.info("Waiting for debugger attach on port 5678") debugpy.wait_for_client() Connect from VS Code with remote attach configuration. 
","version":"Next 🚧","tagName":"h3"},{"title":"IPython REPL","type":1,"pageTitle":"Debugging Guide","url":"/hass.tibber_prices/developer/debugging#ipython-repl","content":" Install in container: uv pip install ipython Add breakpoint: from IPython import embed embed() # Drops into interactive shell 💡 Related: Testing Guide - Writing and running testsSetup Guide - Development environmentArchitecture - Code structure ","version":"Next 🚧","tagName":"h3"},{"title":"Developer Documentation","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/intro","content":"","keywords":"","version":"Next 🚧"},{"title":"📚 Developer Guides","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-developer-guides","content":" Setup - DevContainer, environment setup, and dependenciesArchitecture - Code structure, patterns, and conventionsPeriod Calculation Theory - Mathematical foundations, Flex/Distance interaction, Relaxation strategyTimer Architecture - Timer system, scheduling, coordination (3 independent timers)Caching Strategy - Cache layers, invalidation, debuggingTesting - How to run tests and write new test casesRelease Management - Release workflow and versioning processCoding Guidelines - Style guide, linting, and best practicesRefactoring Guide - How to plan and execute major refactorings ","version":"Next 🚧","tagName":"h2"},{"title":"🤖 AI Documentation","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-ai-documentation","content":" The main AI/Copilot documentation is in AGENTS.md. This file serves as long-term memory for AI assistants and contains: Detailed architectural patternsCode quality rules and conventionsDevelopment workflow guidanceCommon pitfalls and anti-patternsProject-specific patterns and utilities Important: When proposing changes to patterns or conventions, always update AGENTS.md to keep AI guidance consistent. 
","version":"Next 🚧","tagName":"h2"},{"title":"AI-Assisted Development","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#ai-assisted-development","content":" This integration is developed with extensive AI assistance (GitHub Copilot, Claude, and other AI tools). The AI handles: Pattern Recognition: Understanding and applying Home Assistant best practicesCode Generation: Implementing features with proper type hints, error handling, and documentationRefactoring: Maintaining consistency across the codebase during structural changesTranslation Management: Keeping 5 language files synchronizedDocumentation: Generating and maintaining comprehensive documentation Quality Assurance: Automated linting with Ruff (120-char line length, max complexity 25)Home Assistant's type checking and validationReal-world testing in development environmentCode review by maintainer before merging Benefits: Rapid feature development while maintaining qualityConsistent code patterns across all modulesComprehensive documentation maintained alongside codeQuick bug fixes with proper understanding of context Limitations: AI may occasionally miss edge cases or subtle bugsSome complex Home Assistant patterns may need human reviewTranslation quality depends on AI's understanding of target languageUser feedback is crucial for discovering real-world issues If you're working with AI tools on this project, the AGENTS.md file provides the context and patterns that ensure consistency. 
","version":"Next 🚧","tagName":"h3"},{"title":"🚀 Quick Start for Contributors","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-quick-start-for-contributors","content":" Fork and clone the repositoryOpen in DevContainer (VS Code: "Reopen in Container")Run setup: ./scripts/setup/setup (happens automatically via postCreateCommand)Start development environment: ./scripts/developMake your changes following the Coding GuidelinesRun linting: ./scripts/lintValidate integration: ./scripts/release/hassfestTest your changes in the running Home Assistant instanceCommit using Conventional Commits formatOpen a Pull Request with clear description ","version":"Next 🚧","tagName":"h2"},{"title":"🛠️ Development Tools","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-development-tools","content":" The project includes several helper scripts in ./scripts/: bootstrap - Initial setup of dependenciesdevelop - Start Home Assistant in debug mode (auto-cleans .egg-info)clean - Remove build artifacts and cacheslint - Auto-fix code issues with rufflint-check - Check code without modifications (CI mode)hassfest - Validate integration structure (JSON, Python syntax, required files)setup - Install development tools (git-cliff, @github/copilot)prepare-release - Prepare a new release (bump version, create tag)generate-release-notes - Generate release notes from commits ","version":"Next 🚧","tagName":"h2"},{"title":"📦 Project Structure","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-project-structure","content":" custom_components/tibber_prices/ ├── __init__.py # Integration setup ├── coordinator.py # Data update coordinator with caching ├── api.py # Tibber GraphQL API client ├── price_utils.py # Price enrichment functions ├── average_utils.py # Average calculation utilities ├── sensor/ # Sensor platform (package) │ ├── __init__.py # Platform setup │ ├── core.py # 
TibberPricesSensor class │ ├── definitions.py # Entity descriptions │ ├── helpers.py # Pure helper functions │ └── attributes.py # Attribute builders ├── binary_sensor.py # Binary sensor platform ├── entity_utils/ # Shared entity helpers │ ├── icons.py # Icon mapping logic │ ├── colors.py # Color mapping logic │ └── attributes.py # Common attribute builders ├── services.py # Custom services ├── config_flow.py # UI configuration flow ├── const.py # Constants and helpers ├── translations/ # Standard HA translations └── custom_translations/ # Extended translations (descriptions) ","version":"Next 🚧","tagName":"h2"},{"title":"🔍 Key Concepts","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-key-concepts","content":" DataUpdateCoordinator Pattern: Centralized data fetching and cachingAutomatic entity updates on data changesPersistent storage via StoreQuarter-hour boundary refresh scheduling Price Data Enrichment: Raw API data is enriched with statistical analysisTrailing/leading 24h averages calculated per intervalPrice differences and ratings addedAll via pure functions in price_utils.py Translation System: Dual system: /translations/ (HA schema) + /custom_translations/ (extended)Both must stay in sync across all languages (de, en, nb, nl, sv)Async loading at integration setup ","version":"Next 🚧","tagName":"h2"},{"title":"🧪 Testing","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-testing","content":" # Validate integration structure ./scripts/release/hassfest # Run all tests pytest tests/ # Run specific test file pytest tests/test_coordinator.py # Run with coverage pytest --cov=custom_components.tibber_prices tests/ ","version":"Next 🚧","tagName":"h2"},{"title":"📝 Documentation Standards","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-documentation-standards","content":" User-facing docs go in docs/user/Developer docs go in 
docs/development/AI guidance goes in AGENTS.mdUse clear examples and code snippetsKeep docs up-to-date with code changes ","version":"Next 🚧","tagName":"h2"},{"title":"🤝 Contributing","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-contributing","content":" See CONTRIBUTING.md for detailed contribution guidelines, code of conduct, and pull request process. ","version":"Next 🚧","tagName":"h2"},{"title":"📄 License","type":1,"pageTitle":"Developer Documentation","url":"/hass.tibber_prices/developer/intro#-license","content":" This project is licensed under the Apache License 2.0. Note: This documentation is for developers. End users should refer to the User Documentation. ","version":"Next 🚧","tagName":"h2"},{"title":"Refactoring Guide","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/refactoring-guide","content":"","keywords":"","version":"Next 🚧"},{"title":"When to Plan a Refactoring","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#when-to-plan-a-refactoring","content":" Not every code change needs a detailed plan. 
Create a refactoring plan when: 🔴 Major changes requiring planning: Splitting modules into packages (>5 files affected, >500 lines moved)Architectural changes (new packages, module restructuring)Breaking changes (API changes, config format migrations) 🟡 Medium changes that might benefit from planning: Complex features with multiple moving partsChanges affecting many files (>3 files, unclear best approach)Refactorings with unclear scope 🟢 Small changes - no planning needed: Bug fixes (straightforward, <100 lines)Small features (<3 files, clear approach)Documentation updatesCosmetic changes (formatting, renaming) ","version":"Next 🚧","tagName":"h2"},{"title":"The Planning Process","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#the-planning-process","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. Create a Planning Document","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#1-create-a-planning-document","content":" Create a file in the planning/ directory (git-ignored for free iteration): # Example: touch planning/my-feature-refactoring-plan.md Note: The planning/ directory is git-ignored, so you can iterate freely without polluting git history. ","version":"Next 🚧","tagName":"h3"},{"title":"2. Use the Planning Template","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#2-use-the-planning-template","content":" Every planning document should include: # <Feature> Refactoring Plan **Status**: 🔄 PLANNING | 🚧 IN PROGRESS | ✅ COMPLETED | ❌ CANCELLED **Created**: YYYY-MM-DD **Last Updated**: YYYY-MM-DD ## Problem Statement - What's the issue? - Why does it need fixing? 
- Current pain points ## Proposed Solution - High-level approach - File structure (before/after) - Module responsibilities ## Migration Strategy - Phase-by-phase breakdown - File lifecycle (CREATE/MODIFY/DELETE/RENAME) - Dependencies between phases - Testing checkpoints ## Risks & Mitigation - What could go wrong? - How to prevent it? - Rollback strategy ## Success Criteria - Measurable improvements - Testing requirements - Verification steps See planning/README.md for detailed template explanation. ","version":"Next 🚧","tagName":"h3"},{"title":"3. Iterate Freely","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#3-iterate-freely","content":" Since planning/ is git-ignored: Draft multiple versionsGet AI assistance without commit pressureRefine until the plan is solidNo need to clean up intermediate versions ","version":"Next 🚧","tagName":"h3"},{"title":"4. Implementation Phase","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#4-implementation-phase","content":" Once plan is approved: Follow the phases defined in the planTest after each phase (don't skip!)Update plan if issues discoveredTrack progress through phase status ","version":"Next 🚧","tagName":"h3"},{"title":"5. 
After Completion","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#5-after-completion","content":" Option A: Archive in docs/development/If the plan has lasting value (successful pattern, reusable approach): mv planning/my-feature-refactoring-plan.md docs/development/ git add docs/development/my-feature-refactoring-plan.md git commit -m "docs: archive successful refactoring plan" Option B: DeleteIf the plan served its purpose and code is the source of truth: rm planning/my-feature-refactoring-plan.md Option C: Keep locally (not committed)For "why we didn't do X" reference: mkdir -p planning/archive mv planning/my-feature-refactoring-plan.md planning/archive/ # Still git-ignored, just organized ","version":"Next 🚧","tagName":"h3"},{"title":"Real-World Example","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#real-world-example","content":" The sensor/ package refactoring (Nov 2025) is a successful example: Before: sensor.py - 2,574 lines, hard to navigate After: sensor/ package with 5 focused modulesEach module <800 linesClear separation of concerns Process: Created planning/module-splitting-plan.md (now in docs/development/)Defined 6 phases with clear file lifecycleImplemented phase by phaseTested after each phaseDocumented in AGENTS.mdMoved plan to docs/development/ as reference Key learnings: Temporary _impl.py files avoid Python package conflictsTest after EVERY phase (don't accumulate changes)Clear file lifecycle (CREATE/MODIFY/DELETE/RENAME)Phase-by-phase approach enables safe rollback Note: The complete module splitting plan was documented during implementation but has been superseded by the actual code structure. 
","version":"Next 🚧","tagName":"h2"},{"title":"Phase-by-Phase Implementation","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#phase-by-phase-implementation","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Why Phases Matter","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#why-phases-matter","content":" Breaking refactorings into phases: ✅ Enables testing after each change (catch bugs early)✅ Allows rollback to last good state✅ Makes progress visible✅ Reduces cognitive load (focus on one thing)❌ Takes more time (but worth it!) ","version":"Next 🚧","tagName":"h3"},{"title":"Phase Structure","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#phase-structure","content":" Each phase should: Have clear goal - What's being changed?Document file lifecycle - CREATE/MODIFY/DELETE/RENAMEDefine success criteria - How to verify it worked?Include testing steps - What to test?Estimate time - Realistic time budget ","version":"Next 🚧","tagName":"h3"},{"title":"Example Phase Documentation","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#example-phase-documentation","content":" ### Phase 3: Extract Helper Functions (Session 3) **Goal**: Move pure utility functions to helpers.py **File Lifecycle**: - ✨ CREATE `sensor/helpers.py` (utility functions) - ✏️ MODIFY `sensor/core.py` (import from helpers.py) **Steps**: 1. Create sensor/helpers.py 2. Move pure functions (no state, no self) 3. Add comprehensive docstrings 4. 
Update imports in core.py **Estimated time**: 45 minutes **Success criteria**: - ✅ All pure functions moved - ✅ `./scripts/lint-check` passes - ✅ HA starts successfully - ✅ All entities work correctly ","version":"Next 🚧","tagName":"h3"},{"title":"Testing Strategy","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#testing-strategy","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"After Each Phase","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#after-each-phase","content":" Minimum testing checklist: # 1. Linting passes ./scripts/lint-check # 2. Home Assistant starts ./scripts/develop # Watch for startup errors in logs # 3. Integration loads # Check: Settings → Devices & Services → Tibber Prices # Verify: All entities appear # 4. Basic functionality # Test: Data updates without errors # Check: Entity states update correctly ","version":"Next 🚧","tagName":"h3"},{"title":"Comprehensive Testing (Final Phase)","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#comprehensive-testing-final-phase","content":" After completing all phases: Test all entities (sensors, binary sensors)Test configuration flow (add/modify/remove)Test options flow (change settings)Test services (custom service calls)Test error handling (disconnect API, invalid data)Test caching (restart HA, verify cache loads)Test time-based updates (quarter-hour refresh) ","version":"Next 🚧","tagName":"h3"},{"title":"Common Pitfalls","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#common-pitfalls","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"❌ Skip Planning for Large Changes","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#-skip-planning-for-large-changes","content":" Problem: "This seems straightforward, I'll just start coding..." 
Result: Halfway through, realize the approach doesn't work. Wasted time. Solution: If unsure, spend 30 minutes on a rough plan. Better to plan and discard than get stuck. ","version":"Next 🚧","tagName":"h3"},{"title":"❌ Implement All Phases at Once","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#-implement-all-phases-at-once","content":" Problem: "I'll do all phases, then test everything..." Result: 10+ files changed, 2000+ lines modified, hard to debug if something breaks. Solution: Test after EVERY phase. Commit after each successful phase. ","version":"Next 🚧","tagName":"h3"},{"title":"❌ Forget to Update Documentation","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#-forget-to-update-documentation","content":" Problem: Code is refactored, but AGENTS.md and docs/ still reference old structure. Result: AI/humans get confused by outdated documentation. Solution: Include "Documentation Phase" at the end of every refactoring plan. ","version":"Next 🚧","tagName":"h3"},{"title":"❌ Ignore the Planning Directory","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#-ignore-the-planning-directory","content":" Problem: "I'll just create the plan in docs/ directly..." Result: Git history polluted with draft iterations, or pressure to "commit something" too early. Solution: Always use planning/ for work-in-progress. Move to docs/ only when done. ","version":"Next 🚧","tagName":"h3"},{"title":"Integration with AI Development","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#integration-with-ai-development","content":" This project uses AI heavily (GitHub Copilot, Claude). 
The planning process supports AI development: AI reads from: AGENTS.md - Long-term memory, patterns, conventions (AI-focused)docs/development/ - Human-readable guides (human-focused)planning/ - Active refactoring plans (shared context) AI updates: AGENTS.md - When patterns changeplanning/*.md - During refactoring implementationdocs/development/ - After successful completion Why separate AGENTS.md and docs/development/? AGENTS.md: Technical, comprehensive, AI-optimizeddocs/development/: Practical, focused, human-optimizedBoth stay in sync but serve different audiences See AGENTS.md section "Planning Major Refactorings" for AI-specific guidance. ","version":"Next 🚧","tagName":"h2"},{"title":"Tools and Resources","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#tools-and-resources","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Planning Directory","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#planning-directory","content":" planning/ - Git-ignored workspace for draftsplanning/README.md - Detailed planning documentationplanning/*.md - Active refactoring plans ","version":"Next 🚧","tagName":"h3"},{"title":"Example Plans","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#example-plans","content":" docs/development/module-splitting-plan.md - ✅ Completed, archivedplanning/config-flow-refactoring-plan.md - 🔄 Planned (1013 lines → 4 modules)planning/binary-sensor-refactoring-plan.md - 🔄 Planned (644 lines → 4 modules)planning/coordinator-refactoring-plan.md - 🔄 Planned (1446 lines, high complexity) ","version":"Next 🚧","tagName":"h3"},{"title":"Helper Scripts","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#helper-scripts","content":" ./scripts/lint-check # Verify code quality ./scripts/develop # Start HA for testing ./scripts/lint # Auto-fix issues ","version":"Next 
🚧","tagName":"h3"},{"title":"FAQ","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#faq","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Q: When should I create a plan vs. just start coding?","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#q-when-should-i-create-a-plan-vs-just-start-coding","content":" A: If you're asking this question, you probably need a plan. 😊 Simple rule: If you can't describe the entire change in 3 sentences, create a plan. ","version":"Next 🚧","tagName":"h3"},{"title":"Q: How detailed should the plan be?","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#q-how-detailed-should-the-plan-be","content":" A: Detailed enough to execute without major surprises, but not a line-by-line script. Good plan level: Lists all files affected (CREATE/MODIFY/DELETE)Defines phases with clear boundariesIncludes testing strategyEstimates time per phase Too detailed: Exact code snippets for every changeLine-by-line instructions Too vague: "Refactor sensor.py to be better"No phase breakdownNo testing strategy ","version":"Next 🚧","tagName":"h3"},{"title":"Q: What if the plan changes during implementation?","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#q-what-if-the-plan-changes-during-implementation","content":" A: Update the plan! Planning documents are living documents. If you discover: Better approach → Update "Proposed Solution"More phases needed → Add to "Migration Strategy"New risks → Update "Risks & Mitigation" Document WHY the plan changed (helps future refactorings). ","version":"Next 🚧","tagName":"h3"},{"title":"Q: Should every refactoring follow this process?","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#q-should-every-refactoring-follow-this-process","content":" A: No! 
Use judgment: Small changes (<100 lines, clear approach): Just do it, no plan neededMedium changes (unclear scope): Write rough outline, refine if neededLarge changes (>500 lines, >5 files): Full planning process ","version":"Next 🚧","tagName":"h3"},{"title":"Q: How do I know when a refactoring is successful?","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#q-how-do-i-know-when-a-refactoring-is-successful","content":" A: Check the "Success Criteria" from your plan: Typical criteria: ✅ All linting checks pass✅ HA starts without errors✅ All entities functional✅ No regressions (existing features work)✅ Code easier to understand/modify✅ Documentation updated If you can't tick all boxes, the refactoring isn't done. ","version":"Next 🚧","tagName":"h3"},{"title":"Summary","type":1,"pageTitle":"Refactoring Guide","url":"/hass.tibber_prices/developer/refactoring-guide#summary","content":" Key takeaways: Plan when scope is unclear (>500 lines, >5 files, breaking changes)Use planning/ directory for free iteration (git-ignored)Work in phases and test after each phaseDocument file lifecycle (CREATE/MODIFY/DELETE/RENAME)Update documentation after completion (AGENTS.md, docs/)Archive or delete plan after implementation Remember: Good planning prevents half-finished refactorings and makes rollback easier when things go wrong. Next steps: Read planning/README.md for detailed templateCheck docs/development/module-splitting-plan.md for real exampleBrowse planning/ for active refactoring plans ","version":"Next 🚧","tagName":"h2"},{"title":"Release Notes Generation","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/release-management","content":"","keywords":"","version":"Next 🚧"},{"title":"🚀 Quick Start: Preparing a Release","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-quick-start-preparing-a-release","content":" Recommended workflow (automatic & foolproof): # 1. 
Use the helper script to prepare release ./scripts/release/prepare 0.3.0 # This will: # - Update manifest.json version to 0.3.0 # - Create commit: "chore(release): bump version to 0.3.0" # - Create tag: v0.3.0 # - Show you what will be pushed # 2. Review and push when ready git push origin main v0.3.0 # 3. CI/CD automatically: # - Detects the new tag # - Generates release notes (excluding version bump commit) # - Creates GitHub release If you forget to bump manifest.json: # Just edit manifest.json manually and commit vim custom_components/tibber_prices/manifest.json # "version": "0.3.0" git commit -am "chore(release): bump version to 0.3.0" git push # Auto-Tag workflow detects manifest.json change and creates tag automatically! # Then Release workflow kicks in and creates the GitHub release ","version":"Next 🚧","tagName":"h2"},{"title":"📋 Release Options","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-release-options","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. GitHub UI Button (Easiest)","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#1-github-ui-button-easiest","content":" Use GitHub's built-in release notes generator: Go to ReleasesClick "Draft a new release"Select your tagClick "Generate release notes" buttonEdit if needed and publish Uses: .github/release.yml configurationBest for: Quick releases, works with PRs that have labelsNote: Direct commits appear in "Other Changes" category ","version":"Next 🚧","tagName":"h3"},{"title":"2. Local Script (Intelligent)","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#2-local-script-intelligent","content":" Run ./scripts/release/generate-notes to parse conventional commits locally. 
Automatic backend detection: # Generate from latest tag to HEAD ./scripts/release/generate-notes # Generate between specific tags ./scripts/release/generate-notes v1.0.0 v1.1.0 # Generate from tag to HEAD ./scripts/release/generate-notes v1.0.0 HEAD Force specific backend: # Use AI (GitHub Copilot CLI) RELEASE_NOTES_BACKEND=copilot ./scripts/release/generate-notes # Use git-cliff (template-based) RELEASE_NOTES_BACKEND=git-cliff ./scripts/release/generate-notes # Use manual parsing (grep/awk fallback) RELEASE_NOTES_BACKEND=manual ./scripts/release/generate-notes Disable AI (useful for CI/CD): USE_AI=false ./scripts/release/generate-notes Backend Priority The script automatically selects the best available backend: GitHub Copilot CLI - AI-powered, context-aware (best quality)git-cliff - Fast Rust tool with templates (reliable)Manual - Simple grep/awk parsing (always works) In CI/CD ($CI or $GITHUB_ACTIONS), AI is automatically disabled. Installing Optional Backends In DevContainer (automatic): git-cliff is automatically installed when the DevContainer is built: Rust toolchain: Installed via ghcr.io/devcontainers/features/rust:1 (minimal profile)git-cliff: Installed via cargo in scripts/setup/setup Simply rebuild the container (VS Code: "Dev Containers: Rebuild Container") and git-cliff will be available. Manual installation (outside DevContainer): git-cliff (template-based): # See: https://git-cliff.org/docs/installation # macOS brew install git-cliff # Cargo (all platforms) cargo install git-cliff # Manual binary download wget https://github.com/orhun/git-cliff/releases/latest/download/git-cliff-x86_64-unknown-linux-gnu.tar.gz tar -xzf git-cliff-*.tar.gz sudo mv git-cliff-*/git-cliff /usr/local/bin/ ","version":"Next 🚧","tagName":"h3"},{"title":"3. CI/CD Automation","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#3-cicd-automation","content":" Automatic release notes on tag push. 
Workflow: .github/workflows/release.yml Triggers: Version tags (v1.0.0, v2.1.3, etc.) # Create and push a tag to trigger automatic release git tag v1.0.0 git push origin v1.0.0 # GitHub Actions will: # 1. Detect the new tag # 2. Generate release notes using git-cliff # 3. Create a GitHub release automatically Backend: Uses git-cliff (AI disabled in CI for reliability) ","version":"Next 🚧","tagName":"h3"},{"title":"📝 Output Format","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-output-format","content":" All methods produce GitHub-flavored Markdown with emoji categories: ## 🎉 New Features - **scope**: Description ([abc1234](link-to-commit)) ## 🐛 Bug Fixes - **scope**: Description ([def5678](link-to-commit)) ## 📚 Documentation - **scope**: Description ([ghi9012](link-to-commit)) ## 🔧 Maintenance & Refactoring - **scope**: Description ([jkl3456](link-to-commit)) ## 🧪 Testing - **scope**: Description ([mno7890](link-to-commit)) ","version":"Next 🚧","tagName":"h2"},{"title":"🎯 When to Use Which","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-when-to-use-which","content":" Method\tUse Case\tPros\tConsHelper Script\tNormal releases\tFoolproof, automatic\tRequires script Auto-Tag Workflow\tForgot script\tSafety net, automatic tagging\tStill need manifest bump GitHub Button\tManual quick release\tEasy, no script\tLimited categorization Local Script\tTesting release notes\tPreview before release\tManual process CI/CD\tAfter tag push\tFully automatic\tNeeds tag first ","version":"Next 🚧","tagName":"h2"},{"title":"🔄 Complete Release Workflows","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-complete-release-workflows","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Workflow A: Using Helper Script (Recommended)","type":1,"pageTitle":"Release Notes 
Generation","url":"/hass.tibber_prices/developer/release-management#workflow-a-using-helper-script-recommended","content":" # Step 1: Prepare release (all-in-one) ./scripts/release/prepare 0.3.0 # Step 2: Review changes git log -1 --stat git show v0.3.0 # Step 3: Push when ready git push origin main v0.3.0 # Done! CI/CD creates the release automatically What happens: Script bumps manifest.json → commits → creates tag locallyYou push commit + tag togetherRelease workflow sees tag → generates notes → creates release ","version":"Next 🚧","tagName":"h3"},{"title":"Workflow B: Manual (with Auto-Tag Safety Net)","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#workflow-b-manual-with-auto-tag-safety-net","content":" # Step 1: Bump version manually vim custom_components/tibber_prices/manifest.json # Change: "version": "0.3.0" # Step 2: Commit git commit -am "chore(release): bump version to 0.3.0" git push # Step 3: Wait for Auto-Tag workflow # GitHub Actions automatically creates v0.3.0 tag # Then Release workflow creates the release What happens: You push manifest.json changeAuto-Tag workflow detects change → creates tag automaticallyRelease workflow sees new tag → creates release ","version":"Next 🚧","tagName":"h3"},{"title":"Workflow C: Manual Tag (Old Way)","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#workflow-c-manual-tag-old-way","content":" # Step 1: Bump version vim custom_components/tibber_prices/manifest.json git commit -am "chore(release): bump version to 0.3.0" # Step 2: Create tag manually git tag v0.3.0 git push origin main v0.3.0 # Release workflow creates release What happens: You create and push tag manuallyRelease workflow creates releaseAuto-Tag workflow skips (tag already exists) ","version":"Next 🚧","tagName":"h3"},{"title":"⚙️ Configuration Files","type":1,"pageTitle":"Release Notes 
Generation","url":"/hass.tibber_prices/developer/release-management#-configuration-files","content":" scripts/release/prepare - Helper script to bump version + create tag.github/workflows/auto-tag.yml - Automatic tag creation on manifest.json change.github/workflows/release.yml - Automatic release on tag push.github/release.yml - GitHub UI button configurationcliff.toml - git-cliff template (filters out version bumps) ","version":"Next 🚧","tagName":"h2"},{"title":"🛡️ Safety Features","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-safety-features","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"1. Version Validation","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#1-version-validation","content":" Both helper script and auto-tag workflow validate version format (X.Y.Z). ","version":"Next 🚧","tagName":"h3"},{"title":"2. No Duplicate Tags","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#2-no-duplicate-tags","content":" Helper script checks if tag exists (local + remote)Auto-tag workflow checks if tag exists before creating ","version":"Next 🚧","tagName":"h3"},{"title":"3. Atomic Operations","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#3-atomic-operations","content":" Helper script creates commit + tag locally. You decide when to push. ","version":"Next 🚧","tagName":"h3"},{"title":"4. Version Bumps Filtered","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#4-version-bumps-filtered","content":" Release notes automatically exclude chore(release): bump version commits. ","version":"Next 🚧","tagName":"h3"},{"title":"5. 
Rollback Instructions","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#5-rollback-instructions","content":" Helper script shows how to undo if you change your mind. ","version":"Next 🚧","tagName":"h3"},{"title":"🐛 Troubleshooting","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-troubleshooting","content":" "Tag already exists" error: # Local tag git tag -d v0.3.0 # Remote tag (only if you need to recreate) git push origin :refs/tags/v0.3.0 Manifest version doesn't match tag: This shouldn't happen with the new workflows, but if it does: # 1. Fix manifest.json vim custom_components/tibber_prices/manifest.json # 2. Amend the commit git commit --amend -am "chore(release): bump version to 0.3.0" # 3. Move the tag git tag -f v0.3.0 git push -f origin main v0.3.0 Auto-tag didn't create tag: Check workflow runs in GitHub Actions. Common causes: Tag already exists remotelyInvalid version format in manifest.jsonmanifest.json not in the commit that was pushed ","version":"Next 🚧","tagName":"h2"},{"title":"🔍 Format Requirements","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-format-requirements","content":" HACS: No specific format required, uses GitHub releases as-isHome Assistant: No specific format required for custom integrationsMarkdown: Standard GitHub-flavored Markdown supportedHTML: Can include <ha-alert> tags if needed ","version":"Next 🚧","tagName":"h2"},{"title":"💡 Tips","type":1,"pageTitle":"Release Notes Generation","url":"/hass.tibber_prices/developer/release-management#-tips","content":" Conventional Commits: Use proper commit format for best results: feat(scope): Add new feature Detailed description of what changed. Impact: Users can now do X and Y. 
Impact Section: Add Impact: in commit body for user-friendly descriptions Test Locally: Run ./scripts/release/generate-notes before creating release AI vs Template: GitHub Copilot CLI provides better descriptions, git-cliff is faster and more reliable CI/CD: Tag push triggers automatic release - no manual intervention needed ","version":"Next 🚧","tagName":"h2"},{"title":"Performance Optimization","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/performance","content":"","keywords":"","version":"Next 🚧"},{"title":"Performance Goals","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#performance-goals","content":" Target metrics: Coordinator update: <500ms (typical: 200-300ms)Sensor update: <10ms per sensorPeriod calculation: <100ms (typical: 20-50ms)Memory footprint: <10MB per homeAPI calls: <100 per day per home ","version":"Next 🚧","tagName":"h2"},{"title":"Profiling","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#profiling","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Timing Decorator","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#timing-decorator","content":" Use for performance-critical functions: import time import functools def timing(func): @functools.wraps(func) def wrapper(*args, **kwargs): start = time.perf_counter() result = func(*args, **kwargs) duration = time.perf_counter() - start _LOGGER.debug("%s took %.3fms", func.__name__, duration * 1000) return result return wrapper @timing def expensive_calculation(): # Your code here ","version":"Next 🚧","tagName":"h3"},{"title":"Memory Profiling","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#memory-profiling","content":" import tracemalloc tracemalloc.start() # Run your code current, peak = tracemalloc.get_traced_memory() _LOGGER.info("Memory: current=%.2fMB peak=%.2fMB", current / 
1024**2, peak / 1024**2) tracemalloc.stop() ","version":"Next 🚧","tagName":"h3"},{"title":"Async Profiling","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#async-profiling","content":" # Install aioprof uv pip install aioprof # Run with profiling python -m aioprof homeassistant -c config ","version":"Next 🚧","tagName":"h3"},{"title":"Optimization Patterns","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#optimization-patterns","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Caching","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#caching","content":" 1. Persistent Cache (API data): # Already implemented in coordinator/cache.py store = Store(hass, STORAGE_VERSION, STORAGE_KEY) data = await store.async_load() 2. Translation Cache (in-memory): # Already implemented in const.py _TRANSLATION_CACHE: dict[str, dict] = {} def get_translation(path: str, language: str) -> dict: cache_key = f"{path}_{language}" if cache_key not in _TRANSLATION_CACHE: _TRANSLATION_CACHE[cache_key] = load_translation(path, language) return _TRANSLATION_CACHE[cache_key] 3. 
Config Cache (invalidated on options change): class DataTransformer: def __init__(self): self._config_cache: dict | None = None def get_config(self) -> dict: if self._config_cache is None: self._config_cache = self._build_config() return self._config_cache def invalidate_config_cache(self): self._config_cache = None ","version":"Next 🚧","tagName":"h3"},{"title":"Lazy Loading","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#lazy-loading","content":" Load data only when needed: @property def extra_state_attributes(self) -> dict | None: """Return attributes.""" # Calculate only when accessed if self.entity_description.key == "complex_sensor": return self._calculate_complex_attributes() return None ","version":"Next 🚧","tagName":"h3"},{"title":"Bulk Operations","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#bulk-operations","content":" Process multiple items at once: # ❌ Slow - loop with individual operations for interval in intervals: enriched = enrich_single_interval(interval) results.append(enriched) # ✅ Fast - bulk processing results = enrich_intervals_bulk(intervals) ","version":"Next 🚧","tagName":"h3"},{"title":"Async Best Practices","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#async-best-practices","content":" 1. Concurrent API calls: # ❌ Sequential (slow) user_data = await fetch_user_data() price_data = await fetch_price_data() # ✅ Concurrent (fast) user_data, price_data = await asyncio.gather( fetch_user_data(), fetch_price_data() ) 2. 
Don't block event loop: # ❌ Blocking result = heavy_computation() # Blocks for seconds # ✅ Non-blocking result = await hass.async_add_executor_job(heavy_computation) ","version":"Next 🚧","tagName":"h3"},{"title":"Memory Management","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#memory-management","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Avoid Memory Leaks","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#avoid-memory-leaks","content":" 1. Clear references: class Coordinator: async def async_shutdown(self): """Clean up resources.""" self._listeners.clear() self._data = None self._cache = None 2. Use weak references for callbacks: import weakref class Manager: def __init__(self): self._callbacks: list[weakref.ref] = [] def register(self, callback): self._callbacks.append(weakref.ref(callback)) ","version":"Next 🚧","tagName":"h3"},{"title":"Efficient Data Structures","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#efficient-data-structures","content":" Use appropriate types: # ❌ List for lookups (O(n)) if timestamp in timestamp_list: ... # ✅ Set for lookups (O(1)) if timestamp in timestamp_set: ... 
# ❌ List comprehension with filter results = [x for x in items if condition(x)] # ✅ Generator for large datasets results = (x for x in items if condition(x)) ","version":"Next 🚧","tagName":"h3"},{"title":"Coordinator Optimization","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#coordinator-optimization","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Minimize API Calls","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#minimize-api-calls","content":" Already implemented: Cache valid until midnightUser data cached for 24hOnly poll when tomorrow data expected Monitor API usage: _LOGGER.debug("API call: %s (cache_age=%s)", endpoint, cache_age) ","version":"Next 🚧","tagName":"h3"},{"title":"Smart Updates","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#smart-updates","content":" Only update when needed: async def _async_update_data(self) -> dict: """Fetch data from API.""" if self._is_cache_valid(): _LOGGER.debug("Using cached data") return self.data # Fetch new data return await self._fetch_data() ","version":"Next 🚧","tagName":"h3"},{"title":"Database Impact","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#database-impact","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"State Class Selection","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#state-class-selection","content":" Affects long-term statistics storage: # ❌ MEASUREMENT for prices (stores every change) state_class=SensorStateClass.MEASUREMENT # ~35K records/year # ✅ None for prices (no long-term stats) state_class=None # Only current state # ✅ TOTAL for counters only state_class=SensorStateClass.TOTAL # For cumulative values ","version":"Next 🚧","tagName":"h3"},{"title":"Attribute Size","type":1,"pageTitle":"Performance 
Optimization","url":"/hass.tibber_prices/developer/performance#attribute-size","content":" Keep attributes minimal: # ❌ Large nested structures (KB per update) attributes = { "all_intervals": [...], # 384 intervals "full_history": [...], # Days of data } # ✅ Essential data only (bytes per update) attributes = { "timestamp": "...", "rating_level": "...", "next_interval": "...", } ","version":"Next 🚧","tagName":"h3"},{"title":"Testing Performance","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#testing-performance","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Benchmark Tests","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#benchmark-tests","content":" import pytest import time @pytest.mark.benchmark def test_period_calculation_performance(coordinator): """Period calculation should complete in <100ms.""" start = time.perf_counter() periods = calculate_periods(coordinator.data) duration = time.perf_counter() - start assert duration < 0.1, f"Too slow: {duration:.3f}s" ","version":"Next 🚧","tagName":"h3"},{"title":"Load Testing","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#load-testing","content":" @pytest.mark.integration async def test_multiple_homes_performance(hass): """Test with 10 homes.""" coordinators = [] for i in range(10): coordinator = create_coordinator(hass, home_id=f"home_{i}") await coordinator.async_refresh() coordinators.append(coordinator) # Verify memory usage # Verify update times ","version":"Next 🚧","tagName":"h3"},{"title":"Monitoring in Production","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#monitoring-in-production","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Log Performance Metrics","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#log-performance-metrics","content":" 
@timing async def _async_update_data(self) -> dict: """Fetch data with timing.""" result = await self._fetch_data() _LOGGER.info("Update completed in %.2fs", timing_duration) return result ","version":"Next 🚧","tagName":"h3"},{"title":"Memory Tracking","type":1,"pageTitle":"Performance Optimization","url":"/hass.tibber_prices/developer/performance#memory-tracking","content":" import psutil import os process = psutil.Process(os.getpid()) memory_mb = process.memory_info().rss / 1024**2 _LOGGER.debug("Current memory usage: %.2f MB", memory_mb) 💡 Related: Caching Strategy - Cache layersArchitecture - System designDebugging - Profiling tools ","version":"Next 🚧","tagName":"h3"},{"title":"Testing","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/testing","content":"","keywords":"","version":"Next 🚧"},{"title":"Integration Validation","type":1,"pageTitle":"Testing","url":"/hass.tibber_prices/developer/testing#integration-validation","content":" Before running tests or committing changes, validate the integration structure: # Run local validation (JSON syntax, Python syntax, required files) ./scripts/release/hassfest This lightweight script checks: ✓ config_flow.py exists✓ manifest.json is valid JSON with required fields✓ Translation files have valid JSON syntax✓ All Python files compile without syntax errors Note: Full hassfest validation runs in GitHub Actions on push. 
","version":"Next 🚧","tagName":"h2"},{"title":"Running Tests","type":1,"pageTitle":"Testing","url":"/hass.tibber_prices/developer/testing#running-tests","content":" # Run all tests pytest tests/ # Run specific test file pytest tests/test_coordinator.py # Run with coverage pytest --cov=custom_components.tibber_prices tests/ ","version":"Next 🚧","tagName":"h2"},{"title":"Manual Testing","type":1,"pageTitle":"Testing","url":"/hass.tibber_prices/developer/testing#manual-testing","content":" # Start development environment ./scripts/develop Then test in Home Assistant UI: Configuration flowSensor states and attributesServicesTranslation strings ","version":"Next 🚧","tagName":"h2"},{"title":"Test Guidelines","type":1,"pageTitle":"Testing","url":"/hass.tibber_prices/developer/testing#test-guidelines","content":" Coming soon... ","version":"Next 🚧","tagName":"h2"},{"title":"Development Setup","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/setup","content":"","keywords":"","version":"Next 🚧"},{"title":"Prerequisites","type":1,"pageTitle":"Development Setup","url":"/hass.tibber_prices/developer/setup#prerequisites","content":" VS Code with Dev Container supportDocker installed and runningGitHub account (for Tibber API token) ","version":"Next 🚧","tagName":"h2"},{"title":"Quick Setup","type":1,"pageTitle":"Development Setup","url":"/hass.tibber_prices/developer/setup#quick-setup","content":" # Clone the repository git clone https://github.com/jpawlowski/hass.tibber_prices.git cd hass.tibber_prices # Open in VS Code code . 
# Reopen in DevContainer (VS Code will prompt) # Or manually: Ctrl+Shift+P → "Dev Containers: Reopen in Container" ","version":"Next 🚧","tagName":"h2"},{"title":"Development Environment","type":1,"pageTitle":"Development Setup","url":"/hass.tibber_prices/developer/setup#development-environment","content":" The DevContainer includes: Python 3.13 with .venv at /home/vscode/.venv/uv package manager (fast, modern Python tooling)Home Assistant development dependenciesRuff linter/formatterGit, GitHub CLI, Node.js, Rust toolchain ","version":"Next 🚧","tagName":"h2"},{"title":"Running the Integration","type":1,"pageTitle":"Development Setup","url":"/hass.tibber_prices/developer/setup#running-the-integration","content":" # Start Home Assistant in debug mode ./scripts/develop Visit http://localhost:8123 ","version":"Next 🚧","tagName":"h2"},{"title":"Making Changes","type":1,"pageTitle":"Development Setup","url":"/hass.tibber_prices/developer/setup#making-changes","content":" # Lint and format code ./scripts/lint # Check-only (CI mode) ./scripts/lint-check # Validate integration structure ./scripts/release/hassfest See AGENTS.md for detailed patterns and conventions. ","version":"Next 🚧","tagName":"h2"},{"title":"Period Calculation Theory","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/period-calculation-theory","content":"","keywords":"","version":"Next 🚧"},{"title":"Overview","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#overview","content":" This document explains the mathematical foundations and design decisions behind the period calculation algorithm, particularly focusing on the interaction between Flexibility (Flex), Minimum Distance from Average, and Relaxation Strategy. Target Audience: Developers maintaining or extending the period calculation logic. 
Related Files: coordinator/period_handlers/core.py - Main calculation entry pointcoordinator/period_handlers/level_filtering.py - Flex and distance filteringcoordinator/period_handlers/relaxation.py - Multi-phase relaxation strategycoordinator/periods.py - Period calculator orchestration ","version":"Next 🚧","tagName":"h2"},{"title":"Core Filtering Criteria","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#core-filtering-criteria","content":" Period detection uses three independent filters (all must pass): ","version":"Next 🚧","tagName":"h2"},{"title":"1. Flex Filter (Price Distance from Reference)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#1-flex-filter-price-distance-from-reference","content":" Purpose: Limit how far prices can deviate from the daily min/max. Logic: # Best Price: Price must be within flex% ABOVE daily minimum in_flex = price <= (daily_min + daily_min × flex) # Peak Price: Price must be within flex% BELOW daily maximum in_flex = price >= (daily_max - daily_max × flex) Example (Best Price): Daily Min: 10 ct/kWhFlex: 15%Acceptance Range: 0 - 11.5 ct/kWh (10 + 10×0.15) ","version":"Next 🚧","tagName":"h3"},{"title":"2. Min Distance Filter (Distance from Daily Average)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#2-min-distance-filter-distance-from-daily-average","content":" Purpose: Ensure periods are significantly cheaper/more expensive than average, not just marginally better. 
Logic: # Best Price: Price must be at least min_distance% BELOW daily average meets_distance = price <= (daily_avg × (1 - min_distance/100)) # Peak Price: Price must be at least min_distance% ABOVE daily average meets_distance = price >= (daily_avg × (1 + min_distance/100)) Example (Best Price): Daily Avg: 15 ct/kWhMin Distance: 5%Acceptance Range: 0 - 14.25 ct/kWh (15 × 0.95) ","version":"Next 🚧","tagName":"h3"},{"title":"3. Level Filter (Price Level Classification)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#3-level-filter-price-level-classification","content":" Purpose: Restrict periods to specific price classifications (VERY_CHEAP, CHEAP, NORMAL, EXPENSIVE, VERY_EXPENSIVE). Logic: See level_filtering.py for gap tolerance details. Volatility Thresholds - Important Separation: The integration maintains two independent sets of volatility thresholds: Sensor Thresholds (user-configurable via CONF_VOLATILITY_*_THRESHOLD) Purpose: Display classification in sensor.tibber_home_volatility_*Default: LOW < 10%, MEDIUM < 20%, HIGH ≥ 20%User can adjust in config flow optionsAffects: Sensor state/attributes only Period Filter Thresholds (internal, fixed) Purpose: Level filter criteria when using level="volatility_low" etc.Source: PRICE_LEVEL_THRESHOLDS in const.pyValues: Same as sensor defaults (LOW < 10%, MEDIUM < 20%, HIGH ≥ 20%)User cannot adjust theseAffects: Period candidate selection Rationale for Separation: Sensor thresholds = Display preference ("I want to see LOW at 15% instead of 10%")Period thresholds = Algorithm configuration (tested defaults, complex interactions)Changing sensor display should not affect automation behaviorPrevents unexpected side effects when user adjusts sensor classificationPeriod calculation has many interacting filters (Flex, Distance, Level) - exposing all internals would be error-prone Implementation: # Sensor classification uses user config user_low_threshold = 
config_entry.options.get(CONF_VOLATILITY_LOW_THRESHOLD, 10) # Period filter uses fixed constants period_low_threshold = PRICE_LEVEL_THRESHOLDS["volatility_low"] # Always 10% Status: Intentional design decision (Nov 2025). No plans to expose period thresholds to users. ","version":"Next 🚧","tagName":"h3"},{"title":"The Flex × Min_Distance Conflict","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#the-flex--min_distance-conflict","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Problem Statement","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#problem-statement","content":" These two filters can conflict when Flex is high! Scenario: Best Price with Flex=50%, Min_Distance=5% Given: Daily Min: 10 ct/kWhDaily Avg: 15 ct/kWhDaily Max: 20 ct/kWh Flex Filter (50%): Max accepted = 10 + (10 × 0.50) = 15 ct/kWh Min Distance Filter (5%): Max accepted = 15 × (1 - 0.05) = 14.25 ct/kWh Conflict: Interval at 14.8 ct/kWh: ✅ Flex: 14.8 ≤ 15 (PASS)❌ Distance: 14.8 > 14.25 (FAIL)Result: Rejected by Min_Distance even though Flex allows it! The Issue: At high Flex values, Min_Distance becomes the dominant filter and blocks intervals that Flex would permit. This defeats the purpose of having high Flex. ","version":"Next 🚧","tagName":"h3"},{"title":"Mathematical Analysis","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#mathematical-analysis","content":" Conflict condition for Best Price: daily_min × (1 + flex) > daily_avg × (1 - min_distance/100) Typical values: Min = 10, Avg = 15, Min_Distance = 5%Conflict occurs when: 10 × (1 + flex) > 14.25Simplify: flex > 0.425 (42.5%) Below 42.5% Flex: Both filters contribute meaningfully.Above 42.5% Flex: Min_Distance dominates and blocks intervals. 
","version":"Next 🚧","tagName":"h3"},{"title":"Solution: Dynamic Min_Distance Scaling","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#solution-dynamic-min_distance-scaling","content":" Approach: Reduce Min_Distance proportionally as Flex increases. Formula: if flex > 0.20: # 20% threshold flex_excess = flex - 0.20 scale_factor = max(0.25, 1.0 - (flex_excess × 2.5)) adjusted_min_distance = original_min_distance × scale_factor Scaling Table (Original Min_Distance = 5%): Flex\tScale Factor\tAdjusted Min_Distance\tRationale≤20%\t1.00\t5.0%\tStandard - both filters relevant 25%\t0.88\t4.4%\tSlight reduction 30%\t0.75\t3.75%\tModerate reduction 40%\t0.50\t2.5%\tStrong reduction - Flex dominates 50%\t0.25\t1.25%\tMinimal distance - Flex decides Why stop at 25% of original? Min_Distance ensures periods are significantly different from averageEven at 1.25%, prevents "flat days" (little price variation) from accepting every intervalMaintains semantic meaning: "this is a meaningful best/peak price period" Implementation: See level_filtering.py → check_interval_criteria() Code Extract: # coordinator/period_handlers/level_filtering.py FLEX_SCALING_THRESHOLD = 0.20 # 20% - start adjusting min_distance SCALE_FACTOR_WARNING_THRESHOLD = 0.8 # Log when reduction > 20% def check_interval_criteria(price, criteria): # ... flex check ... 
# Dynamic min_distance scaling adjusted_min_distance = criteria.min_distance_from_avg flex_abs = abs(criteria.flex) if flex_abs > FLEX_SCALING_THRESHOLD: flex_excess = flex_abs - 0.20 # How much above 20% scale_factor = max(0.25, 1.0 - (flex_excess × 2.5)) adjusted_min_distance = criteria.min_distance_from_avg × scale_factor if scale_factor < SCALE_FACTOR_WARNING_THRESHOLD: _LOGGER.debug( "High flex %.1f%% detected: Reducing min_distance %.1f%% → %.1f%%", flex_abs × 100, criteria.min_distance_from_avg, adjusted_min_distance, ) # Apply adjusted min_distance in distance check meets_min_distance = ( price <= avg_price × (1 - adjusted_min_distance/100) # Best Price # OR price >= avg_price × (1 + adjusted_min_distance/100) # Peak Price ) Why Linear Scaling? Simple and predictableNo abrupt behavior changesEasy to reason about for users and developersAlternative considered: Exponential scaling (rejected as too aggressive) Why 25% Minimum? Below this, min_distance loses semantic meaningEven on flat days, some quality filter neededPrevents "every interval is a period" scenarioMaintains user expectation: "best/peak price means notably different" ","version":"Next 🚧","tagName":"h3"},{"title":"Flex Limits and Safety Caps","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#flex-limits-and-safety-caps","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Implementation Constants","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#implementation-constants","content":" Defined in coordinator/period_handlers/core.py: MAX_SAFE_FLEX = 0.50 # 50% - hard cap: above this, period detection becomes unreliable MAX_OUTLIER_FLEX = 0.25 # 25% - cap for outlier filtering: above this, spike detection too permissive Defined in const.py: DEFAULT_BEST_PRICE_FLEX = 15 # 15% base - optimal for relaxation mode (default enabled) DEFAULT_PEAK_PRICE_FLEX = -20 # 20% base 
(negative for peak detection) DEFAULT_RELAXATION_ATTEMPTS_BEST = 11 # 11 steps: 15% → 48% (3% increment per step) DEFAULT_RELAXATION_ATTEMPTS_PEAK = 11 # 11 steps: 20% → 50% (3% increment per step) DEFAULT_BEST_PRICE_MIN_PERIOD_LENGTH = 60 # 60 minutes DEFAULT_PEAK_PRICE_MIN_PERIOD_LENGTH = 30 # 30 minutes DEFAULT_BEST_PRICE_MIN_DISTANCE_FROM_AVG = 5 # 5% minimum distance DEFAULT_PEAK_PRICE_MIN_DISTANCE_FROM_AVG = 5 # 5% minimum distance ","version":"Next 🚧","tagName":"h3"},{"title":"Rationale for Asymmetric Defaults","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#rationale-for-asymmetric-defaults","content":" Why Best Price ≠ Peak Price? The different defaults reflect fundamentally different use cases: Best Price: Optimization Focus Goal: Find practical time windows for running appliances Constraints: Appliances need time to complete cycles (dishwasher: 2-3h, EV charging: 4-8h)Short periods are impractical (not worth automation overhead)User wants genuinely cheap times, not just "slightly below average" Defaults: 60 min minimum - Ensures period is long enough for meaningful use15% flex - Stricter selection, focuses on truly cheap timesReasoning: Better to find fewer, higher-quality periods than many mediocre ones User behavior: Automations trigger actions (turn on devices)Wrong automation = wasted energy/moneyPreference: Conservative (miss some savings) over aggressive (false positives) Peak Price: Warning Focus Goal: Alert users to expensive periods for consumption reduction Constraints: Brief price spikes still matter (even 15-30 min is worth avoiding)Early warning more valuable than perfect accuracyUser can manually decide whether to react Defaults: 30 min minimum - Catches shorter expensive spikes20% flex - More permissive, earlier detectionReasoning: Better to warn early (even if not peak) than miss expensive periods User behavior: Notifications/alerts (informational)Wrong alert = minor 
inconvenience, not costPreference: Sensitive (catch more) over specific (catch only extremes) Mathematical Justification Peak Price Volatility: Price curves tend to have: Sharp spikes during peak hours (morning/evening)Shorter duration at maximum (1-2 hours typical)Higher variance in peak times than cheap times Example day: Cheap period: 02:00-07:00 (5 hours at 10-12 ct) ← Gradual, stable Expensive period: 17:00-18:30 (1.5 hours at 35-40 ct) ← Sharp, brief Implication: Stricter flex on peak (15%) might miss real expensive periods (too brief)Longer min_length (60 min) might exclude legitimate spikesSolution: More flexible thresholds for peak detection Design Alternatives Considered Option 1: Symmetric defaults (rejected) Both 60 min, both 15% flexProblem: Misses short but expensive spikesUser feedback: "Why didn't I get warned about the 30-min price spike?" Option 2: Same defaults, let users figure it out (rejected) No guidance on best practicesUsers would need to experiment to find good valuesMost users stick with defaults, so defaults matter Option 3: Current approach (adopted) All values user-configurable via config flow optionsDifferent installation defaults for Best Price vs. Peak PriceDefaults reflect recommended practices for each use caseUsers who need different behavior can adjustMost users benefit from sensible defaults without configuration ","version":"Next 🚧","tagName":"h3"},{"title":"Flex Limits and Safety Caps","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#flex-limits-and-safety-caps-1","content":" 1. 
Absolute Maximum: 50% (MAX_SAFE_FLEX) Enforcement: core.py caps abs(flex) at 0.50 (50%) Rationale: Above 50%, period detection becomes unreliableBest Price: Almost entire day qualifies (Min + 50% typically covers 60-80% of intervals)Peak Price: Similar issue with Max - 50%Result: Either massive periods (entire day) or no periods (min_length not met) Warning Message: Flex XX% exceeds maximum safe value! Capping at 50%. Recommendation: Use 15-20% with relaxation enabled, or 25-35% without relaxation. 2. Outlier Filtering Maximum: 25% Enforcement: core.py caps outlier filtering flex at 0.25 (25%) Rationale: Outlier filtering uses Flex to determine "stable context" thresholdAt > 25% Flex, almost any price swing is considered "stable"Result: Legitimate price shifts aren't smoothed, breaking period formation Note: User's Flex still applies to period criteria (in_flex check), only outlier filtering is capped. ","version":"Next 🚧","tagName":"h2"},{"title":"Recommended Ranges (User Guidance)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#recommended-ranges-user-guidance","content":" With Relaxation Enabled (Recommended) Optimal: 10-20% Relaxation increases Flex incrementally: 15% → 18% → 21% → ...Low baseline ensures relaxation has room to work Warning Threshold: > 25% INFO log: "Base flex is on the high side" High Warning: > 30% WARNING log: "Base flex is very high for relaxation mode!"Recommendation: Lower to 15-20% Without Relaxation Optimal: 20-35% No automatic adjustment, must be sufficient from startHigher baseline acceptable since no relaxation fallback Maximum Useful: ~50% Above this, period detection degrades (see Hard Limits) ","version":"Next 🚧","tagName":"h3"},{"title":"Relaxation Strategy","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#relaxation-strategy","content":" ","version":"Next 
🚧","tagName":"h2"},{"title":"Purpose","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#purpose","content":" Ensure minimum periods per day are found even when baseline filters are too strict. Use Case: User configures strict filters (low Flex, restrictive Level) but wants guarantee of N periods/day for automation reliability. ","version":"Next 🚧","tagName":"h3"},{"title":"Multi-Phase Approach","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#multi-phase-approach","content":" Each day processed independently: Calculate baseline periods with user's configIf insufficient periods found, enter relaxation loopTry progressively relaxed filter combinationsStop when target reached or all attempts exhausted ","version":"Next 🚧","tagName":"h3"},{"title":"Relaxation Increments","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#relaxation-increments","content":" Current Implementation (November 2025): File: coordinator/period_handlers/relaxation.py # Hard-coded 3% increment per step (reliability over configurability) flex_increment = 0.03 # 3% per step base_flex = abs(config.flex) # Generate flex levels for attempt in range(max_relaxation_attempts): flex_level = base_flex + (attempt * flex_increment) # Try flex_level with both filter combinations Constants: FLEX_WARNING_THRESHOLD_RELAXATION = 0.25 # 25% - INFO: suggest lowering to 15-20% FLEX_HIGH_THRESHOLD_RELAXATION = 0.30 # 30% - WARNING: very high for relaxation mode MAX_FLEX_HARD_LIMIT = 0.50 # 50% - absolute maximum (enforced in core.py) Design Decisions: Why 3% fixed increment? Predictable escalation path (15% → 18% → 21% → ...)Independent of base flex (works consistently)11 attempts covers full useful range (15% → 48%)Balance: Not too slow (2%), not too fast (5%) Why hard-coded, not configurable? 
Prevents user misconfigurationSimplifies mental model (fewer knobs to turn)Reliable behavior across all configurationsIf needed, user adjusts max_relaxation_attempts (fewer/more steps) Why warn at 25% base flex? At 25% base, first relaxation step reaches 28%Above 30%, entering diminishing returns territoryUser likely doesn't need relaxation with such high base flexShould either: (a) lower base flex, or (b) disable relaxation Historical Context (Pre-November 2025): The algorithm previously used percentage-based increments that scaled with base flex: increment = base_flex * (step_pct / 100) # REMOVED This caused exponential escalation with high base flex values (e.g., 40% → 50% → 60% → 70% in just 6 steps), making behavior unpredictable. The fixed 3% increment solves this by providing consistent, controlled escalation regardless of starting point. Warning Messages: if base_flex >= FLEX_HIGH_THRESHOLD_RELAXATION: # 30% _LOGGER.warning( "Base flex %.1f%% is very high for relaxation mode! " "Consider lowering to 15-20%% or disabling relaxation.", base_flex * 100, ) elif base_flex >= FLEX_WARNING_THRESHOLD_RELAXATION: # 25% _LOGGER.info( "Base flex %.1f%% is on the high side. " "Consider 15-20%% for optimal relaxation effectiveness.", base_flex * 100, ) ","version":"Next 🚧","tagName":"h3"},{"title":"Filter Combination Strategy","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#filter-combination-strategy","content":" Per Flex level, try in order: Original Level filterLevel filter = "any" (disabled) Early Exit: Stop immediately when target reached (don't try unnecessary combinations) Example Flow (target=2 periods/day): Day 2025-11-19: 1. Baseline flex=15%: Found 1 period (need 2) 2. Flex=18% + level=cheap: Found 1 period 3. 
Flex=18% + level=any: Found 2 periods → SUCCESS (stop) ","version":"Next 🚧","tagName":"h3"},{"title":"Implementation Notes","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#implementation-notes","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Key Files and Functions","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#key-files-and-functions","content":" Period Calculation Entry Point: # coordinator/period_handlers/core.py def calculate_periods( all_prices: list[dict], config: PeriodConfig, time: TimeService, ) -> dict[str, Any] Flex + Distance Filtering: # coordinator/period_handlers/level_filtering.py def check_interval_criteria( price: float, criteria: IntervalCriteria, ) -> tuple[bool, bool] # (in_flex, meets_min_distance) Relaxation Orchestration: # coordinator/period_handlers/relaxation.py def calculate_periods_with_relaxation(...) -> tuple[dict, dict] def relax_single_day(...) -> tuple[dict, dict] Outlier Filtering Implementation File: coordinator/period_handlers/outlier_filtering.py Purpose: Detect and smooth isolated price spikes before period identification to prevent artificial fragmentation. 
Algorithm Details: Linear Regression Prediction: Uses surrounding intervals to predict expected priceWindow size: 3+ intervals (MIN_CONTEXT_SIZE)Calculates trend slope and standard deviationFormula: predicted = mean + slope × (position - center) Confidence Intervals: 95% confidence level (2 standard deviations)Tolerance = 2.0 × std_dev (CONFIDENCE_LEVEL constant)Outlier if: |actual - predicted| > toleranceAccounts for natural price volatility in context window Symmetry Check: Rejects asymmetric outliers (threshold: 1.5 std dev)Preserves legitimate price shifts (morning/evening peaks)Algorithm: residual = abs(actual - predicted) symmetry_threshold = 1.5 × std_dev if residual > tolerance: # Check if spike is symmetric in context context_residuals = [abs(p - pred) for p, pred in context] avg_context_residual = mean(context_residuals) if residual > symmetry_threshold × avg_context_residual: # Asymmetric spike → smooth it else: # Symmetric (part of trend) → keep it Enhanced Zigzag Detection: Detects spike clusters via relative volatilityThreshold: 2.0× local volatility (RELATIVE_VOLATILITY_THRESHOLD)Single-pass algorithm (no iteration needed)Catches patterns like: 18, 35, 19, 34, 18 (alternating spikes) Constants: # coordinator/period_handlers/outlier_filtering.py CONFIDENCE_LEVEL = 2.0 # 95% confidence (2 std deviations) SYMMETRY_THRESHOLD = 1.5 # Asymmetry detection threshold RELATIVE_VOLATILITY_THRESHOLD = 2.0 # Zigzag spike detection MIN_CONTEXT_SIZE = 3 # Minimum intervals for regression Data Integrity: Original prices stored in _original_price fieldAll statistics (daily min/max/avg) use original pricesSmoothing only affects period formation logicSmart counting: Only counts smoothing that changed period outcome Performance: Single pass through price dataO(n) complexity with small context windowNo iterative refinement neededTypical processing time: <1ms for 96 intervals Example Debug Output: DEBUG: [2025-11-11T14:30:00+01:00] Outlier detected: 35.2 ct DEBUG: 
Context: 18.5, 19.1, 19.3, 19.8, 20.2 ct DEBUG: Residual: 14.5 ct > tolerance: 4.8 ct (2×2.4 std dev) DEBUG: Trend slope: 0.3 ct/interval (gradual increase) DEBUG: Predicted: 20.7 ct (linear regression) DEBUG: Smoothed to: 20.7 ct DEBUG: Asymmetry ratio: 3.2 (>1.5 threshold) → confirmed outlier Why This Approach? Linear regression over moving average: Accounts for price trends (morning ramp-up, evening decline)Moving average can't predict direction, only levelBetter accuracy on non-stationary price curves Symmetry check over fixed threshold: Prevents false positives on legitimate price shiftsAdapts to local volatility patternsPreserves user expectation: "expensive during peak hours" Single-pass over iterative: Predictable behavior (no convergence issues)Fast and deterministicEasier to debug and reason about Alternative Approaches Considered: Median filtering - Rejected: Too aggressive, removes legitimate peaksMoving average - Rejected: Can't handle trendsIQR (Interquartile Range) - Rejected: Assumes normal distributionRANSAC - Rejected: Overkill for 1D data, slow ","version":"Next 🚧","tagName":"h3"},{"title":"Debugging Tips","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#debugging-tips","content":" Enable DEBUG logging: # configuration.yaml logger: default: info logs: custom_components.tibber_prices.coordinator.period_handlers: debug Key log messages to watch: "Filter statistics: X intervals checked" - Shows how many intervals filtered by each criterion"After build_periods: X raw periods found" - Periods before min_length filtering"Day X: Success with flex=Y%" - Relaxation succeeded"High flex X% detected: Reducing min_distance Y% → Z%" - Distance scaling active ","version":"Next 🚧","tagName":"h2"},{"title":"Common Configuration Pitfalls","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#common-configuration-pitfalls","content":" 
","version":"Next 🚧","tagName":"h2"},{"title":"❌ Anti-Pattern 1: High Flex with Relaxation","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#-anti-pattern-1-high-flex-with-relaxation","content":" Configuration: best_price_flex: 40 enable_relaxation_best: true Problem: Base Flex 40% already very permissiveRelaxation increments further (43%, 46%, 49%, ...)Quickly approaches 50% cap with diminishing returns Solution: best_price_flex: 15 # Let relaxation increase it enable_relaxation_best: true ","version":"Next 🚧","tagName":"h3"},{"title":"❌ Anti-Pattern 2: Zero Min_Distance","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#-anti-pattern-2-zero-min_distance","content":" Configuration: best_price_min_distance_from_avg: 0 Problem: "Flat days" (little price variation) accept all intervalsPeriods lose semantic meaning ("significantly cheap")May create periods during barely-below-average times Solution: best_price_min_distance_from_avg: 5 # Use default 5% ","version":"Next 🚧","tagName":"h3"},{"title":"❌ Anti-Pattern 3: Conflicting Flex + Distance","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#-anti-pattern-3-conflicting-flex--distance","content":" Configuration: best_price_flex: 45 best_price_min_distance_from_avg: 10 Problem: Distance filter dominates, making Flex irrelevantDynamic scaling helps but still suboptimal Solution: best_price_flex: 20 best_price_min_distance_from_avg: 5 ","version":"Next 🚧","tagName":"h3"},{"title":"Testing Scenarios","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#testing-scenarios","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Scenario 1: Normal Day (Good Variation)","type":1,"pageTitle":"Period Calculation 
Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#scenario-1-normal-day-good-variation","content":" Price Range: 10 - 20 ct/kWh (100% variation)Average: 15 ct/kWh Expected Behavior: Flex 15%: Should find 2-4 clear best price periodsFlex 30%: Should find 4-8 periods (more lenient)Min_Distance 5%: Effective throughout range Debug Checks: DEBUG: Filter statistics: 96 intervals checked DEBUG: Filtered by FLEX: 12/96 (12.5%) ← Low percentage = good variation DEBUG: Filtered by MIN_DISTANCE: 8/96 (8.3%) ← Both filters active DEBUG: After build_periods: 3 raw periods found ","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 2: Flat Day (Poor Variation)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#scenario-2-flat-day-poor-variation","content":" Price Range: 14 - 16 ct/kWh (14% variation)Average: 15 ct/kWh Expected Behavior: Flex 15%: May find 1-2 small periods (or zero if no clear winners)Min_Distance 5%: Critical here - ensures only truly cheaper intervals qualifyWithout Min_Distance: Would accept almost entire day as "best price" Debug Checks: DEBUG: Filter statistics: 96 intervals checked DEBUG: Filtered by FLEX: 45/96 (46.9%) ← High percentage = poor variation DEBUG: Filtered by MIN_DISTANCE: 52/96 (54.2%) ← Distance filter dominant DEBUG: After build_periods: 1 raw period found DEBUG: Day 2025-11-11: Baseline insufficient (1 < 2), starting relaxation ","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 3: Extreme Day (High Volatility)","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#scenario-3-extreme-day-high-volatility","content":" Price Range: 5 - 40 ct/kWh (700% variation)Average: 18 ct/kWh Expected Behavior: Flex 15%: Finds multiple very cheap periods (5-6 ct)Outlier filtering: May smooth isolated spikes (30-40 ct)Distance filter: Less impactful (clear separation between cheap/expensive) Debug Checks: 
DEBUG: Outlier detected: 38.5 ct (threshold: 4.2 ct) DEBUG: Smoothed to: 20.1 ct (trend prediction) DEBUG: Filter statistics: 96 intervals checked DEBUG: Filtered by FLEX: 8/96 (8.3%) ← Very selective DEBUG: Filtered by MIN_DISTANCE: 4/96 (4.2%) ← Flex dominates DEBUG: After build_periods: 4 raw periods found ","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 4: Relaxation Success","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#scenario-4-relaxation-success","content":" Initial State: Baseline finds 1 period, target is 2 Expected Flow: INFO: Calculating BEST PRICE periods: relaxation=ON, target=2/day, flex=15.0% DEBUG: Day 2025-11-11: Baseline found 1 period (need 2) DEBUG: Phase 1: flex 18.0% + original filters DEBUG: Found 1 period (insufficient) DEBUG: Phase 2: flex 18.0% + level=any DEBUG: Found 2 periods → SUCCESS INFO: Day 2025-11-11: Success after 1 relaxation phase (2 periods) ","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 5: Relaxation Exhausted","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#scenario-5-relaxation-exhausted","content":" Initial State: Strict filters, very flat day Expected Flow: INFO: Calculating BEST PRICE periods: relaxation=ON, target=2/day, flex=15.0% DEBUG: Day 2025-11-11: Baseline found 0 periods (need 2) DEBUG: Phase 1-11: flex 15%→48%, all filter combinations tried WARNING: Day 2025-11-11: All relaxation phases exhausted, still only 1 period found INFO: Period calculation completed: 1/2 days reached target ","version":"Next 🚧","tagName":"h3"},{"title":"Debugging Checklist","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#debugging-checklist","content":" When debugging period calculation issues: Check Filter Statistics Which filter blocks most intervals? 
(flex, distance, or level)High flex filtering (>30%) = Need more flexibility or relaxationHigh distance filtering (>50%) = Min_distance too strict or flat dayHigh level filtering = Level filter too restrictive Check Relaxation Behavior Did relaxation activate? Check for "Baseline insufficient" messageWhich phase succeeded? Early success (phase 1-3) = good configLate success (phase 8-11) = Consider adjusting base configExhausted all phases = Unrealistic target for this day's price curve Check Flex Warnings INFO at 25% base flex = On the high sideWARNING at 30% base flex = Too high for relaxationIf seeing these: Lower base flex to 15-20% Check Min_Distance Scaling Debug messages show "High flex X% detected: Reducing min_distance Y% → Z%"If scale factor <0.8 (20% reduction): High flex is activeIf periods still not found: Filters conflict even with scaling Check Outlier Filtering Look for "Outlier detected" messagesCheck period_interval_smoothed_count attributeIf no smoothing but periods fragmented: Not isolated spikes, but legitimate price levels ","version":"Next 🚧","tagName":"h3"},{"title":"Future Enhancements","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#future-enhancements","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Potential Improvements","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#potential-improvements","content":" Adaptive Flex Calculation: Auto-adjust Flex based on daily price variationHigh variation days: Lower Flex neededLow variation days: Higher Flex needed Machine Learning Approach: Learn optimal Flex/Distance from user feedbackClassify days by pattern (normal/flat/volatile/bimodal)Apply pattern-specific defaults Multi-Objective Optimization: Balance period count vs. qualityConsider period duration vs. price levelOptimize for user's stated use case (EV charging vs. 
heat pump) ","version":"Next 🚧","tagName":"h3"},{"title":"Known Limitations","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#known-limitations","content":" Fixed increment step: 3% cap may be too aggressive for very low base FlexLinear distance scaling: Could benefit from non-linear curveNo consideration of temporal distribution: May find all periods in one part of day ","version":"Next 🚧","tagName":"h3"},{"title":"Future Enhancements","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#future-enhancements-1","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Potential Improvements","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#potential-improvements-1","content":" 1. Adaptive Flex Calculation (Not Yet Implemented) Concept: Auto-adjust Flex based on daily price variation Algorithm: # Pseudo-code for adaptive flex variation = (daily_max - daily_min) / daily_avg if variation < 0.15: # Flat day (< 15% variation) adaptive_flex = 0.30 # Need higher flex elif variation > 0.50: # High volatility (> 50% variation) adaptive_flex = 0.10 # Lower flex sufficient else: # Normal day adaptive_flex = 0.15 # Standard flex Benefits: Eliminates need for relaxation on most daysSelf-adjusting to market conditionsBetter user experience (less configuration needed) Challenges: Harder to predict behavior (less transparent)May conflict with user's mental modelNeeds extensive testing across different markets Status: Considered but not implemented (prefer explicit relaxation) 2. 
Machine Learning Approach (Future Work) Concept: Learn optimal Flex/Distance from user feedback Approach: Track which periods user actually uses (automation triggers)Classify days by pattern (normal/flat/volatile/bimodal)Apply pattern-specific defaultsLearn per-user preferences over time Benefits: Personalized to user's actual behaviorAdapts to local market patternsCould discover non-obvious patterns Challenges: Requires user feedback mechanism (not implemented)Privacy concerns (storing usage patterns)Complexity for users to understand "why this period?"Cold start problem (new users have no history) Status: Theoretical only (no implementation planned) 3. Multi-Objective Optimization (Research Idea) Concept: Balance multiple goals simultaneously Goals: Period count vs. quality (cheap vs. very cheap)Period duration vs. price level (long mediocre vs. short excellent)Temporal distribution (spread throughout day vs. clustered)User's stated use case (EV charging vs. heat pump vs. dishwasher) Algorithm: Pareto optimization (find trade-off frontier)User chooses point on frontier via preferencesGenetic algorithm or simulated annealing Benefits: More sophisticated period selectionBetter match to user's actual needsCould handle complex appliance requirements Challenges: Much more complex to implementHarder to explain to usersComputational cost (may need caching)Configuration explosion (too many knobs) Status: Research idea only (not planned) ","version":"Next 🚧","tagName":"h3"},{"title":"Known Limitations","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#known-limitations-1","content":" 1. 
Fixed Increment Step Current: 3% cap may be too aggressive for very low base Flex Example: Base flex 5% + 3% increment = 8% (60% increase!)Base flex 15% + 3% increment = 18% (20% increase) Possible Solution: Percentage-based increment: increment = max(base_flex × 0.20, 0.03)This gives: 5% → 6% (20%), 15% → 18% (20%), 40% → 43% (7.5%) Why Not Implemented: Very low base flex (<10%) unusualUsers with strict requirements likely disable relaxationSimplicity preferred over edge case optimization 2. Linear Distance Scaling Current: Linear scaling may be too aggressive/conservative Alternative: Non-linear curve # Example: Exponential scaling scale_factor = 0.25 + 0.75 × exp(-5 × (flex - 0.20)) # Or: Sigmoid scaling scale_factor = 0.25 + 0.75 / (1 + exp(10 × (flex - 0.35))) Why Not Implemented: Linear is easier to reason aboutNo evidence that non-linear is betterWould need extensive testing 3. No Temporal Distribution Consideration Issue: May find all periods in one part of day Example: All 3 "best price" periods between 02:00-08:00No periods in evening (when user might want to run appliances) Possible Solution: Add "spread" parameter (prefer distributed periods)Weight periods by time-of-day preferencesConsider user's typical usage patterns Why Not Implemented: Adds complexityUsers can work around with multiple automationsDifferent users have different needs (no one-size-fits-all) 4. Period Boundary Handling Current Behavior: Periods can cross midnight naturally Design Principle: Each interval is evaluated using its own day's reference prices (daily min/max/avg). 
Implementation: # In period_building.py build_periods(): for price_data in all_prices: starts_at = time.get_interval_time(price_data) date_key = starts_at.date() # CRITICAL: Use interval's own day, not period_start_date ref_date = date_key criteria = TibberPricesIntervalCriteria( ref_price=ref_prices[ref_date], # Interval's day avg_price=avg_prices[ref_date], # Interval's day flex=flex, min_distance_from_avg=min_distance_from_avg, reverse_sort=reverse_sort, ) Why Per-Day Evaluation? Periods can cross midnight (e.g., 23:45 → 01:00). Each day has independent reference prices calculated from its 96 intervals. Example showing the problem with period-start-day approach: Day 1 (2025-11-21): Cheap day daily_min = 10 ct, daily_avg = 20 ct, flex = 15% Criteria: price ≤ 11.5 ct (10 + 10×0.15) Day 2 (2025-11-22): Expensive day daily_min = 20 ct, daily_avg = 30 ct, flex = 15% Criteria: price ≤ 23 ct (20 + 20×0.15) Period crossing midnight: 23:45 Day 1 → 00:15 Day 2 23:45 (Day 1): 11 ct → ✅ Passes (11 ≤ 11.5) 00:00 (Day 2): 21 ct → Should this pass? ❌ WRONG (using period start day): 00:00 evaluated against Day 1's 11.5 ct threshold 21 ct > 11.5 ct → Fails But 21ct IS cheap on Day 2 (min=20ct)! ✅ CORRECT (using interval's own day): 00:00 evaluated against Day 2's 23 ct threshold 21 ct ≤ 23 ct → Passes Correctly identified as cheap relative to Day 2 Trade-off: Periods May Break at Midnight When days differ significantly, period can split: Day 1: Min=10ct, Avg=20ct, 23:45=11ct → ✅ Cheap (relative to Day 1) Day 2: Min=25ct, Avg=35ct, 00:00=21ct → ❌ Expensive (relative to Day 2) Result: Period stops at 23:45, new period starts later This is mathematically correct - 21ct is genuinely expensive on a day where minimum is 25ct. 
Market Reality Explains Price Jumps: Day-ahead electricity markets (EPEX SPOT) set prices at 12:00 CET for all next-day hours: Late intervals (23:45): Priced ~36h before delivery → high forecast uncertainty → risk premiumEarly intervals (00:00): Priced ~12h before delivery → better forecasts → lower risk buffer This explains why absolute prices jump at midnight despite minimal demand changes. User-Facing Solution (Nov 2025): Added per-period day volatility attributes to detect when classification changes are meaningful: day_volatility_%: Percentage spread (span/avg × 100)day_price_min, day_price_max, day_price_span: Daily price range (ct/øre) Automations can check volatility before acting: condition: - condition: template value_template: > {{ state_attr('binary_sensor.tibber_home_best_price_period', 'day_volatility_%') | float(0) > 15 }} Low volatility (< 15%) means classification changes are less economically significant. Alternative Approaches Rejected: Use period start day for all intervals Problem: Mathematically incorrect - lends cheap day's criteria to expensive dayRejected: Violates relative evaluation principle Adjust flex/distance at midnight Problem: Complex, unpredictable, hides market realityRejected: Users should understand price context, not have it hidden Split at midnight always Problem: Artificially fragments natural periodsRejected: Worse user experience Use next day's reference after midnight Problem: Period criteria inconsistent across durationRejected: Confusing and unpredictable Status: Per-day evaluation is intentional design prioritizing mathematical correctness. 
See Also: User documentation: docs/user/period-calculation.md → "Midnight Price Classification Changes"Implementation: coordinator/period_handlers/period_building.py (line ~126: ref_date = date_key)Attributes: coordinator/period_handlers/period_statistics.py (day volatility calculation) ","version":"Next 🚧","tagName":"h3"},{"title":"References","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#references","content":" User Documentation: Period CalculationArchitecture OverviewCaching StrategyAGENTS.md - AI assistant memory (implementation patterns) ","version":"Next 🚧","tagName":"h2"},{"title":"Changelog","type":1,"pageTitle":"Period Calculation Theory","url":"/hass.tibber_prices/developer/period-calculation-theory#changelog","content":" 2025-11-19: Initial documentation of Flex/Distance interaction and Relaxation strategy fixes ","version":"Next 🚧","tagName":"h2"},{"title":"Timer Architecture","type":0,"sectionRef":"#","url":"/hass.tibber_prices/developer/timer-architecture","content":"","keywords":"","version":"Next 🚧"},{"title":"Overview","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#overview","content":" The integration uses three independent timer mechanisms for different purposes: Timer\tType\tInterval\tPurpose\tTrigger MethodTimer #1\tHA built-in\t15 minutes\tAPI data updates\tDataUpdateCoordinator Timer #2\tCustom\t:00, :15, :30, :45\tEntity state refresh\tasync_track_utc_time_change() Timer #3\tCustom\tEvery minute\tCountdown/progress\tasync_track_utc_time_change() Key principle: Timer #1 (HA) controls data fetching, Timer #2 controls entity updates, Timer #3 controls timing displays. 
","version":"Next 🚧","tagName":"h2"},{"title":"Timer #1: DataUpdateCoordinator (HA Built-in)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-1-dataupdatecoordinator-ha-built-in","content":" File: coordinator/core.py → TibberPricesDataUpdateCoordinator Type: Home Assistant's built-in DataUpdateCoordinator with UPDATE_INTERVAL = 15 minutes What it is: HA provides this timer system automatically when you inherit from DataUpdateCoordinatorTriggers _async_update_data() method every 15 minutesNot synchronized to clock boundaries (each installation has different start time) Purpose: Check if fresh API data is needed, fetch if necessary What it does: async def _async_update_data(self) -> TibberPricesData: # Step 1: Check midnight turnover FIRST (prevents race with Timer #2) if self._check_midnight_turnover_needed(dt_util.now()): await self._perform_midnight_data_rotation(dt_util.now()) # Notify ALL entities after midnight turnover return self.data # Early return # Step 2: Check if we need tomorrow data (after 13:00) if self._should_update_price_data() == "tomorrow_check": await self._fetch_and_update_data() # Fetch from API return self.data # Step 3: Use cached data (fast path - most common) return self.data Load Distribution: Each HA installation starts Timer #1 at different times → natural distributionTomorrow data check adds 0-30s random delay → prevents "thundering herd" on Tibber APIResult: API load spread over ~30 minutes instead of all at once Midnight Coordination: Atomic check: _check_midnight_turnover_needed(now) compares dates only (no side effects)If midnight turnover needed → performs it and returns earlyTimer #2 will see turnover already done and skip gracefully Why we use HA's timer: Automatic restart after HA restartBuilt-in retry logic for temporary failuresStandard HA integration patternHandles backpressure (won't queue up if previous update still running) ","version":"Next 
🚧","tagName":"h2"},{"title":"Timer #2: Quarter-Hour Refresh (Custom)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-2-quarter-hour-refresh-custom","content":" File: coordinator/listeners.py → ListenerManager.schedule_quarter_hour_refresh() Type: Custom timer using async_track_utc_time_change(minute=[0, 15, 30, 45], second=0) Purpose: Update time-sensitive entity states at interval boundaries without waiting for API poll Problem it solves: Timer #1 runs every 15 minutes but NOT synchronized to clock (:03, :18, :33, :48)Current price changes at :00, :15, :30, :45 → entities would show stale data for up to 15 minutesExample: 14:00 new price, but Timer #1 ran at 13:58 → next update at 14:13 → users see old price until 14:13 What it does: async def _handle_quarter_hour_refresh(self, now: datetime) -> None: # Step 1: Check midnight turnover (coordinates with Timer #1) if self._check_midnight_turnover_needed(now): # Timer #1 might have already done this → atomic check handles it await self._perform_midnight_data_rotation(now) # Notify ALL entities after midnight turnover return # Step 2: Normal quarter-hour refresh (most common path) # Only notify time-sensitive entities (current_interval_price, etc.) 
self._listener_manager.async_update_time_sensitive_listeners() Smart Boundary Tolerance: Uses round_to_nearest_quarter_hour() with ±2 second toleranceHA may schedule timer at 14:59:58 → rounds to 15:00:00 (shows new interval)HA restart at 14:59:30 → stays at 14:45:00 (shows current interval)See Architecture for details Absolute Time Scheduling: async_track_utc_time_change() plans for all future boundaries (15:00, 15:15, 15:30, ...)NOT relative delays ("in 15 minutes")If triggered at 14:59:58 → next trigger is 15:15:00, NOT 15:00:00 (prevents double updates) Which entities listen: All sensors that depend on "current interval" (e.g., current_interval_price, next_interval_price)Binary sensors that check "is now in period?" (e.g., best_price_period_active)~50-60 entities out of 120+ total Why custom timer: HA's built-in coordinator doesn't support exact boundary timingWe need absolute time triggers, not periodic intervalsAllows fast entity updates without expensive data transformation ","version":"Next 🚧","tagName":"h2"},{"title":"Timer #3: Minute Refresh (Custom)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-3-minute-refresh-custom","content":" File: coordinator/listeners.py → ListenerManager.schedule_minute_refresh() Type: Custom timer using async_track_utc_time_change(second=0) (every minute) Purpose: Update countdown and progress sensors for smooth UX What it does: async def _handle_minute_refresh(self, now: datetime) -> None: # Only notify minute-update entities # No data fetching, no transformation, no midnight handling self._listener_manager.async_update_minute_listeners() Which entities listen: best_price_remaining_minutes - Countdown timerpeak_price_remaining_minutes - Countdown timerbest_price_progress - Progress bar (0-100%)peak_price_progress - Progress bar (0-100%)~10 entities total Why custom timer: Users want smooth countdowns (not jumping 15 minutes at a time)Progress bars need minute-by-minute 
updatesVery lightweight (no data processing, just state recalculation) Why NOT every second: Minute precision sufficient for countdown UXReduces CPU load (60× fewer updates than seconds)Home Assistant best practice (avoid sub-minute updates) ","version":"Next 🚧","tagName":"h2"},{"title":"Listener Pattern (Python/HA Terminology)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#listener-pattern-pythonha-terminology","content":" Your question: "Are timers actually 'listeners'?" Answer: In Home Assistant terminology: Timer = The mechanism that triggers at specific times (async_track_utc_time_change)Listener = A callback function that gets called when timer triggersObserver Pattern = Entities register callbacks, coordinator notifies them How it works: # Entity registers a listener callback class TibberPricesSensor(CoordinatorEntity): async def async_added_to_hass(self): # Register this entity's update callback self._remove_listener = self.coordinator.async_add_time_sensitive_listener( self._handle_coordinator_update ) # Coordinator maintains list of listeners class ListenerManager: def __init__(self): self._time_sensitive_listeners = [] # List of callbacks def async_add_time_sensitive_listener(self, callback): self._time_sensitive_listeners.append(callback) def async_update_time_sensitive_listeners(self): # Timer triggered → notify all listeners for callback in self._time_sensitive_listeners: callback() # Entity updates itself Why this pattern: Decouples timer logic from entity logicOne timer can notify many entities efficientlyEntities can unregister when removed (cleanup)Standard HA pattern for coordinator-based integrations ","version":"Next 🚧","tagName":"h2"},{"title":"Timer Coordination Scenarios","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-coordination-scenarios","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Scenario 1: Normal
Operation (No Midnight)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#scenario-1-normal-operation-no-midnight","content":" 14:00:00 → Timer #2 triggers → Update time-sensitive entities (current price changed) → 60 entities updated (~5ms) 14:03:12 → Timer #1 triggers (HA's 15-min cycle) → Check if tomorrow data needed (no, still cached) → Return cached data (fast path, ~2ms) 14:15:00 → Timer #2 triggers → Update time-sensitive entities → 60 entities updated (~5ms) 14:16:00 → Timer #3 triggers → Update countdown/progress entities → 10 entities updated (~1ms) Key observation: Timer #1 and Timer #2 run independently, no conflicts. ","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 2: Midnight Turnover","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#scenario-2-midnight-turnover","content":" 23:45:12 → Timer #1 triggers → Check midnight: current_date=2025-11-17, last_check=2025-11-17 → No turnover needed → Return cached data 00:00:00 → Timer #2 triggers FIRST (synchronized to midnight) → Check midnight: current_date=2025-11-18, last_check=2025-11-17 → Turnover needed! Perform rotation, save cache → _last_midnight_check = 2025-11-18 → Notify ALL entities 00:03:12 → Timer #1 triggers (its regular cycle) → Check midnight: current_date=2025-11-18, last_check=2025-11-18 → Turnover already done → skip → Return existing data (fast path) Key observation: Atomic date comparison prevents double-turnover, whoever runs first wins. 
","version":"Next 🚧","tagName":"h3"},{"title":"Scenario 3: Tomorrow Data Check (After 13:00)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#scenario-3-tomorrow-data-check-after-1300","content":" 13:00:00 → Timer #2 triggers → Normal quarter-hour refresh → Update time-sensitive entities 13:03:12 → Timer #1 triggers → Check tomorrow data: missing or invalid → Fetch from Tibber API (~300ms) → Transform data (~200ms) → Calculate periods (~100ms) → Notify ALL entities (new data available) 13:15:00 → Timer #2 triggers → Normal quarter-hour refresh (uses newly fetched data) → Update time-sensitive entities Key observation: Timer #1 does expensive work (API + transform), Timer #2 does cheap work (entity notify). ","version":"Next 🚧","tagName":"h3"},{"title":"Why We Keep HA's Timer (Timer #1)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#why-we-keep-has-timer-timer-1","content":" Your question: "why do we keep using the HA timer anyway, since it triggers updates at times we don't control?" Answer: You're correct that it's not synchronized, but that's actually intentional: ","version":"Next 🚧","tagName":"h2"},{"title":"Reason 1: Load Distribution on Tibber API","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#reason-1-load-distribution-on-tibber-api","content":" If all installations used synchronized timers: ❌ Everyone fetches at 13:00:00 → Tibber API overload❌ Everyone fetches at 14:00:00 → Tibber API overload❌ "Thundering herd" problem With HA's unsynchronized timer: ✅ Installation A: 13:03:12, 13:18:12, 13:33:12, ...✅ Installation B: 13:07:45, 13:22:45, 13:37:45, ...✅ Installation C: 13:11:28, 13:26:28, 13:41:28, ...✅ Natural distribution over ~30 minutes✅ Plus: Random 0-30s delay on tomorrow checks Result: API load spread evenly, no spikes.
","version":"Next 🚧","tagName":"h3"},{"title":"Reason 2: What Timer #1 Actually Checks","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#reason-2-what-timer-1-actually-checks","content":" Timer #1 does NOT blindly update. It checks: def _should_update_price_data(self) -> str: # Check 1: Do we have tomorrow data? (only relevant after ~13:00) if tomorrow_missing or tomorrow_invalid: return "tomorrow_check" # Fetch needed # Check 2: Is cache still valid? if cache_valid: return "cached" # No fetch needed (most common!) # Check 3: Has enough time passed? if time_since_last_update < threshold: return "cached" # Too soon, skip fetch return "update_needed" # Rare case Most Timer #1 cycles: Fast path (~2ms), no API call, just returns cached data. API fetch only when: Tomorrow data missing/invalid (after 13:00)Cache expired (midnight turnover)Explicit user refresh ","version":"Next 🚧","tagName":"h3"},{"title":"Reason 3: HA Integration Best Practices","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#reason-3-ha-integration-best-practices","content":" ✅ Standard HA pattern: DataUpdateCoordinator is recommended by HA docs✅ Automatic retry logic for temporary API failures✅ Backpressure handling (won't queue updates if previous still running)✅ Developer tools integration (users can manually trigger refresh)✅ Diagnostics integration (shows last update time, success/failure) ","version":"Next 🚧","tagName":"h3"},{"title":"What We DO Synchronize","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#what-we-do-synchronize","content":" ✅ Timer #2: Entity state updates at exact boundaries (user-visible)✅ Timer #3: Countdown/progress at exact minutes (user-visible)❌ Timer #1: API fetch timing (invisible to user, distribution wanted) ","version":"Next 🚧","tagName":"h3"},{"title":"Performance Characteristics","type":1,"pageTitle":"Timer 
Architecture","url":"/hass.tibber_prices/developer/timer-architecture#performance-characteristics","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Timer #1 (DataUpdateCoordinator)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-1-dataupdatecoordinator","content":" Triggers: Every 15 minutes (unsynchronized)Fast path: ~2ms (cache check, return existing data)Slow path: ~600ms (API fetch + transform + calculate)Frequency: ~96 times/dayAPI calls: ~1-2 times/day (cached otherwise) ","version":"Next 🚧","tagName":"h3"},{"title":"Timer #2 (Quarter-Hour Refresh)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-2-quarter-hour-refresh","content":" Triggers: 96 times/day (exact boundaries)Processing: ~5ms (notify 60 entities)No API calls: Uses cached/transformed dataNo transformation: Just entity state updates ","version":"Next 🚧","tagName":"h3"},{"title":"Timer #3 (Minute Refresh)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#timer-3-minute-refresh","content":" Triggers: 1440 times/day (every minute)Processing: ~1ms (notify 10 entities)No API calls: No data processing at allLightweight: Just countdown math Total CPU budget: ~15 seconds/day for all timers combined. 
","version":"Next 🚧","tagName":"h3"},{"title":"Debugging Timer Issues","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#debugging-timer-issues","content":" ","version":"Next 🚧","tagName":"h2"},{"title":"Check Timer #1 (HA Coordinator)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#check-timer-1-ha-coordinator","content":" # Enable debug logging _LOGGER.setLevel(logging.DEBUG) # Watch for these log messages: "Fetching data from API (reason: tomorrow_check)" # API call "Using cached data (no update needed)" # Fast path "Midnight turnover detected (Timer #1)" # Turnover ","version":"Next 🚧","tagName":"h3"},{"title":"Check Timer #2 (Quarter-Hour)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#check-timer-2-quarter-hour","content":" # Watch coordinator logs: "Updated 60 time-sensitive entities at quarter-hour boundary" # Normal "Midnight turnover detected (Timer #2)" # Turnover ","version":"Next 🚧","tagName":"h3"},{"title":"Check Timer #3 (Minute)","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#check-timer-3-minute","content":" # Watch coordinator logs: "Updated 10 minute-update entities" # Every minute ","version":"Next 🚧","tagName":"h3"},{"title":"Common Issues","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#common-issues","content":" Timer #2 not triggering: Check: schedule_quarter_hour_refresh() called in __init__?Check: _quarter_hour_timer_cancel properly stored? Double updates at midnight: Should NOT happen (atomic coordination)Check: Both timers use same date comparison logic? API overload: Check: Random delay working? (0-30s jitter on tomorrow check)Check: Cache validation logic correct? 
","version":"Next 🚧","tagName":"h3"},{"title":"Related Documentation","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#related-documentation","content":" Architecture - Overall system design, data flowCaching Strategy - Cache lifetimes, invalidation, midnight turnoverAGENTS.md - Complete reference for AI development ","version":"Next 🚧","tagName":"h2"},{"title":"Summary","type":1,"pageTitle":"Timer Architecture","url":"/hass.tibber_prices/developer/timer-architecture#summary","content":" Three independent timers: Timer #1 (HA built-in, 15 min, unsynchronized) → Data fetching (when needed)Timer #2 (Custom, :00/:15/:30/:45) → Entity state updates (always)Timer #3 (Custom, every minute) → Countdown/progress (always) Key insights: Timer #1 unsynchronized = good (load distribution on API)Timer #2 synchronized = good (user sees correct data immediately)Timer #3 synchronized = good (smooth countdown UX)All three coordinate gracefully (atomic midnight checks, no conflicts) "Listener" terminology: Timer = mechanism that triggersListener = callback that gets calledObserver pattern = entities register, coordinator notifies ","version":"Next 🚧","tagName":"h2"}],"options":{"id":"default"}}