PROJECT

Current OS

ACTIVE

AI-powered personal operating system

Next.js · Supabase · TypeScript · Claude API

ABOUT

Current OS is an AI-powered personal operating system — a Next.js web app that coordinates the moving parts of daily life. Built on Supabase with a TypeScript frontend, it uses Claude to bring intelligence to tasks most tools treat as form fields.

The first module is Shed, a home project tracker that turns natural-language descriptions into structured, actionable records. Tell it "the basement has been leaking near the south wall" and it figures out the category, urgency, and next steps — no dropdowns, no form filling.

The Intelligence Foundation connects to your calendar and applies lifecycle management to events — correcting stale data, flagging conflicts, and surfacing what actually needs attention versus what can wait. Predictive scheduling is in progress: an AI-driven optimization layer that proposes schedule changes based on patterns, priorities, and context.

FEATURES

SHIPPED

  • Shed — AI-powered home project tracker

    Natural-language input → structured records with category, urgency, and next steps via Claude

    MAR 2026
  • Intelligence Foundation — event lifecycle pipeline

    Corrects stale calendar events, flags conflicts, surfaces what needs attention

    MAR 2026
  • Calendar sync — Google Calendar integration

    Two-way sync between the app and Google Calendar

    MAR 2026

IN PROGRESS

  • Predictive scheduling — AI-driven schedule optimization

    Proposes schedule changes based on patterns, priorities, and context

CHANGELOG

CURRENT-OS · brainstorm · naming · bug

WeekWidget fix + Bud brainstorm — inbox intelligence named and scoped

Features

  • Bud vision brief complete — inbox intelligence system powered by n8n on Taproot with six handler types: purchase, coupon, delivery, appointment, reference, triage
  • Cross-agent orchestration scoped — Bud feeds Current OS (tasks, radar, triage), Shed (project materials), Research API (product intelligence), Google Calendar, and a financial dashboard
  • Name locked: Bud — dormant potential waiting to open, "nip it in the bud" for early problem detection, BUD inside BUDGET, buddy personality
  • Standalone project — own repo, own LXC on Taproot, financial dashboard linked from Current OS under a broader domain TBD

Bug Fixes

  • ISSUE-011: WeekWidget "This Week" card now auto-focuses when Work toggle activates — GravityWatcher gained a workVisible transition watcher that calls focusCard("week") on the false → true edge
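The false → true edge behavior can be sketched as a small edge-triggered watcher. This is a minimal illustration of the pattern, not the actual GravityWatcher API; `makeEdgeWatcher` and the recorded call string are hypothetical names.

```typescript
// Minimal sketch of an edge-triggered watcher: fires only on a
// false → true transition, not on every observation of `true`.
// makeEdgeWatcher is an illustrative name, not the real API.
function makeEdgeWatcher(onRisingEdge: () => void) {
  let previous = false;
  return (current: boolean) => {
    if (!previous && current) onRisingEdge(); // false → true edge only
    previous = current;
  };
}

// Usage: wire the watcher to the workVisible flag.
const calls: string[] = [];
const watchWorkVisible = makeEdgeWatcher(() => calls.push("focusCard(week)"));
watchWorkVisible(false); // no edge
watchWorkVisible(true);  // rising edge, focus fires
watchWorkVisible(true);  // still true, no re-fire
```

The key property is that a sustained `true` never re-triggers the focus, only the transition does.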

Lessons

  • Brainstorms should surface established self-hostable tools (n8n, Node-RED) before custom-build options — n8n was the obvious fit for email orchestration but had to be raised by the user
  • Naming requires the design brief as input — two rounds failed before consulting the redwood forest ecosystem aesthetic
  • Humble name for sophisticated tech (Shed, Bark, Bud) is the Understory Labs signature — the contrast IS the brand

TODO

  • /plan on Bud — purchase handler first, end-to-end through n8n
  • Cross-agent protocol design — how Bud, Shed, and Research API communicate
  • Confirm ISSUE-011 fix after testing
CURRENT-OS · feature · ai · infrastructure

Product Intelligence — PI-1 through PI-4 built (retrospective)

Features

  • Product detection lands in intake engine — a product_assembly_detected inference fires when the AI identifies a specific kit or product, and a readyToResearch flag signals the UI to offer research before drafting
  • Research service deployed on Taproot — research.rootstack.dev live, returns ManualsLib results + YouTube search URL + DuckDuckGo fallback for any product query
  • Resource approval UI complete — "Resources Found" section in observations panel with type badges, checkboxes, an "Add your own URL" input, and three explicit degradation paths when Taproot is unreachable
  • Ingest pipeline written — /ingest endpoint downloads PDFs and HTML, extracts text, chunks by step-pattern → headings → 500-token windows, stores to ZFS cache under a temp project key
  • Per-resource ingest status in intake overlay — loading / done (chunk count) / image-only-warning / error badges fire immediately on resource approval, non-blocking
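The cascading chunk order (numbered steps, then headings, then fixed windows) can be sketched as below. This is an illustrative reconstruction from the description, not the actual src/ingest.js code; it splits on words rather than tokens, and the regexes are assumptions.

```typescript
// Illustrative sketch of the cascading chunking order: numbered
// assembly steps first, then heading boundaries, then fixed-size
// windows as a last resort. Splits on words here; the real pipeline
// uses ~500-token windows.
function chunkDocument(text: string, windowSize = 500): string[] {
  // 1. Numbered step pattern ("Step 1." / "2.") — assembly manuals are full of them.
  const steps = text.split(/\n(?=(?:Step\s+)?\d+\.\s)/).filter(s => s.trim());
  if (steps.length > 1) return steps;

  // 2. Generic heading detection as a fallback.
  const sections = text.split(/\n(?=#+\s)/).filter(s => s.trim());
  if (sections.length > 1) return sections;

  // 3. Fixed windows of ~windowSize words when no structure is found.
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += windowSize) {
    chunks.push(words.slice(i, i + windowSize).join(" "));
  }
  return chunks;
}
```

Trying the most specific structure first is the point: a manual with numbered steps never falls through to the blunt window splitter.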

Bug Fixes

  • Research fetch moved out of useEffect — cleanup function was aborting the in-flight request the moment researchStatus changed from idle to loading, causing an immediate AbortError
  • @vercel/node import removed from api/research.ts — Vercel infers the runtime; explicit import broke the build
  • Type assertion added for manual resource entry — TypeScript couldn't narrow the type field on user-pasted URLs

Infrastructure

  • api/research.ts — Vercel proxy with 10s timeout, forwards to TAPROOT_RESEARCH_URL with x-api-key auth; RESEARCH_API_KEY never exposed to the browser
  • api/ingest.ts — second Vercel proxy, 25s timeout for PDF downloads
  • project_resources.source column added — distinguishes ai-discovered from manual resources; feeds the future manual library
  • src/cache.js, src/ingest.js, src/retrieve.js written in ~/Projects/homelab/services/research/ — BM25-lite keyword retrieval, ZFS-backed chunk storage
  • PI-4 not yet deployed — ZFS bind-mount into docker-host (requires CT stop) and service rebuild are the next physical steps
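The "BM25-lite" retrieval mentioned above can be sketched as a keyword scorer with BM25's term-frequency saturation and IDF weighting, minus full length normalization. This is a hedged reconstruction of the idea; the real src/retrieve.js may differ in every detail.

```typescript
// Sketch of a BM25-lite keyword scorer over stored chunks: IDF-weighted
// term frequency with BM25's saturation curve, no document-length
// normalization. Illustrative only, not the actual retrieval code.
function bm25Lite(query: string, chunks: string[], k1 = 1.5): number[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  const tokenized = chunks.map(c => c.toLowerCase().split(/\W+/).filter(Boolean));
  return tokenized.map(tokens => {
    let score = 0;
    for (const term of terms) {
      const tf = tokens.filter(t => t === term).length;
      if (tf === 0) continue;
      // Inverse document frequency: rare terms weigh more.
      const df = tokenized.filter(ts => ts.includes(term)).length;
      const idf = Math.log(1 + (chunks.length - df + 0.5) / (df + 0.5));
      // Saturation: repeated terms help, with diminishing returns.
      score += idf * ((tf * (k1 + 1)) / (tf + k1));
    }
    return score;
  });
}

// Usage: score chunks for a query; higher is more relevant.
const scores = bm25Lite("doorbell wiring", [
  "Connect the doorbell wiring to the transformer terminals.",
  "Charge the battery before first use.",
]);
```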

Lessons

  • Chunking strategy order matters — trying numbered steps first (assembly manuals are full of them) before generic heading detection before token windows produces far better chunks than the reverse
  • ingestProjectKey needs to be a temp UUID minted at resource-approval time, not the Supabase project ID — the project doesn't exist yet when ingestion kicks off; PI-5 will wire the linkage
  • Explicit degradation beats silent fallback — presenting four named choices when Taproot is unreachable is only slightly more code and completely changes the user's ability to recover

TODO

  • Deploy PI-4: ZFS dataset on Proxmox host → bind-mount into CT 100 → rebuild research service → push Vercel changes
  • Validate PI-4 checkpoint: ingest a real Ring doorbell PDF, confirm /retrieve returns relevant sections
  • PI-5: inject approved chunks into api/project-intake.ts draft prompt using ingestProjectKey
CURRENT-OS · bug · infrastructure

Fix project intake photo 413 error

Bug Fixes

  • Compressed project intake photos client-side (max 1200px, JPEG 85%) before base64 encoding — full-res phone photos exceeded Vercel's 4.5MB serverless body limit, returning HTTP 413.
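The downscale math behind the fix is simple to show. The function name below is illustrative; in the browser, the resized image is then drawn to a canvas and re-encoded as JPEG.

```typescript
// Pure dimension math for the client-side downscale: longest edge
// capped at 1200px, aspect ratio preserved. fitWithin is an
// illustrative name, not the actual helper in the app.
function fitWithin(width: number, height: number, maxEdge = 1200) {
  const scale = Math.min(1, maxEdge / Math.max(width, height));
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

// In the browser, the image is drawn at the reduced size and re-encoded:
//   canvas.toDataURL("image/jpeg", 0.85)
// which keeps the base64 payload well under Vercel's 4.5MB body limit.
const example = fitWithin(4032, 3024); // a typical full-res phone photo
```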

Infrastructure

  • Linked .vercel project config locally — pre-push deploy hook now works for life-automation on this machine.

TODO

  • api/project-intake.ts lines 426–444: pre-existing TypeScript errors — property access on unknown type from JSON parse. Not blocking (build deploys), but should be fixed.
CURRENT-OS · feature · refactor · phase

Gravity Layout — self-sizing cards replace CSS grid

Features

  • Gravity layout system complete — cards size themselves by focus state, dashboard adapts to any height without manual resizing
  • GravityContext: three card states (focused / resting / collapsed), max 2 focused per column, oldest-first demotion, 60s auto-focus cooldown, localStorage persistence
  • GravityCard render-prop pattern: widgets receive variant: 'focused' | 'resting' and render purpose-built compact views at rest — not shrunken full views
  • GravityColumn: narrow prop collapses dual-column layout to a single scrollable column below 900px
  • NowPlaying pinned at top of right column — always visible, no title bar, no collapse
  • Mode system simplified: "online" | "offline" + workVisible boolean replaces the three-mode system
  • Work toggle: header pill swaps accent to teal without changing the environment — data-mode="online-work" inherits pine-forest base, overrides accent only
  • GravityWatcher: auto-focus signals wire morning→brief, music starts→lyrics, item captured→tasks
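The focus-limit rule (max 2 focused per column, oldest-first demotion) can be sketched as a pure function. The types and names below are illustrative, not the actual GravityContext API.

```typescript
// Sketch of the focus rule: focusing a card may demote the
// oldest-focused card in the column back to "resting" so at most
// maxFocused cards stay focused. Illustrative shapes only.
type CardState = "focused" | "resting" | "collapsed";

function focusCard(
  column: { id: string; state: CardState; focusedAt?: number }[],
  id: string,
  now: number,
  maxFocused = 2,
) {
  const next = column.map(c =>
    c.id === id ? { ...c, state: "focused" as CardState, focusedAt: now } : { ...c },
  );
  // Demote oldest-focused cards until the per-column limit holds.
  const focused = next
    .filter(c => c.state === "focused")
    .sort((a, b) => (a.focusedAt ?? 0) - (b.focusedAt ?? 0));
  for (const extra of focused.slice(0, Math.max(0, focused.length - maxFocused))) {
    extra.state = "resting";
  }
  return next;
}
```

Keeping this as a pure state transition is what makes localStorage persistence and the auto-focus cooldown easy to layer on top.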

Bug Fixes

  • LyricsGravityCard and QueueGravityCard never called registerCard, so focusCard("lyrics") was a silent no-op; persistence and focus limits were broken for both

Infrastructure

  • DashboardShell reduced from ~1094 lines to ~400 — all grid infrastructure (MODE_LAYOUTS, getRows, GridResizeOverlay, expandedCards, rowOverrides) removed
  • /gravity-test route added for layout testing without auth

Lessons

  • Wrappers around non-GravityCard components need explicit registerCard in a useEffect — skipping it is a silent failure with no error, just no behavior
  • Keeping the environment (pine-forest base) while swapping only the accent makes the work toggle feel like a lens over personal space, not a separate room
  • GravityProvider must sit outside key={mode} — state reset on mode switch is the wrong default for a layout system
CURRENT-OS · bug · feature

Work block reservation — weekday personal time now realistic

Features

  • /start skill now surfaces the full unworked feature backlog at session open — deferred candidates, wishlist items, and open design questions pulled from memory and design-notes
  • Backlog section grouped into "Ready to plan" and "Wishlist" with a pointer to /scope for details

Bug Fixes

  • Online mode on weekdays no longer reports 8am–5pm as personal free time — reserveWorkBlock parameter added to computeDayContext in the context engine
  • DailyBriefWidget, RadarWidget, and the AI context string all respect the work block — eligible items, day weight, and suggestions now reflect morning/evening availability only
  • RadarWidget now mode-aware via useMode() — eligibility recomputes on mode switch
  • ISSUE-008 closed — all three sub-items resolved across two sessions

Lessons

  • The context engine grid (8am–6pm) only sees 1 hour of personal free time after a 9-hour work block — the heuristic naturally suppresses discretionary items on weekdays, which is the right behavior
  • A default parameter (reserveWorkBlock = false) kept the change backward-compatible without touching every existing call site
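The 1-hour arithmetic and the backward-compatible default can be sketched together. Names mirror the description above (reserveWorkBlock, computeDayContext's grid) but the function itself is illustrative.

```typescript
// Hedged sketch of the reserveWorkBlock idea: subtract a weekday
// 8am–5pm work block from the 8am–6pm context grid. Illustrative
// function, not the real computeDayContext.
function personalFreeHours(
  gridStart: number,        // 8  (8am)
  gridEnd: number,          // 18 (6pm)
  isWeekday: boolean,
  reserveWorkBlock = false, // default keeps existing call sites unchanged
): number {
  const total = gridEnd - gridStart;
  if (!reserveWorkBlock || !isWeekday) return total;
  // Overlap of the 8am–5pm work block with the grid.
  const workOverlap = Math.max(0, Math.min(17, gridEnd) - Math.max(8, gridStart));
  return total - workOverlap;
}

// A weekday with the block reserved leaves exactly 1 personal hour (5–6pm).
const weekdayFree = personalFreeHours(8, 18, true, true);
```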
CURRENT-OS · bug · infrastructure

Mode separation, RLS cleanup, issue sweep

Bug Fixes

  • ISSUE-006 confirmed resolved — committed items not surfacing in TaskListWidget was caused by RLS infinite recursion on the projects SELECT policy, fixed in a prior session
  • ISSUE-007 confirmed resolved — work tasks disappearing after mode switch was the same root cause: fetchTasks joins items → projects(title), triggering the same recursion; optimistic insert appeared to succeed but every re-query returned null, making persistence look broken
  • ISSUE-008 (sub-problems 1 + 3) — DailyBriefWidget lacked mode awareness entirely; weekday tasks query had no mode filter, leaking work tasks into the Online brief
  • Brief cache key was current-os-facts-{date} — shared across modes; switching Online → Work reused cached Online facts; key is now current-os-facts-{mode}-{date}
  • "Work availability" label in the AI context string renamed to "Available time" — was mode-biased in Online brief context

Infrastructure

  • DailyBriefWidget now uses useMode() — tasks query always filters by mode, AI context hint is mode + weekend aware across four combinations
  • ISSUE-009 resolved — four "Project member reads X" SELECT policies in live DB updated via Supabase SQL Editor to use is_project_collaborator() instead of inline subquery; migration file synced to match

Lessons

  • A single circular RLS policy (projects → project_collaborators → projects) silently killed every query that joined to projects — the join itself was the trigger, not the query result; two distinct-looking bugs shared one root cause
  • Mode awareness needs to be explicit at every data boundary — a missing useMode() import left the brief entirely uninformed about which context it was writing for
  • DDL statements in Supabase (DROP POLICY / CREATE POLICY) return "no rows" on success — expected behavior, not an error

TODO

  • ISSUE-008 sub-problem 2 remains: planning overlay work block reservation (8am–5pm M–F in Online mode) — requires reserveWorkBlock flag in computeDayContext so freeHours reflects evening/morning personal time on weekdays
CURRENT-OS · bug · infrastructure

RLS recursion root cause found and fixed — commit-to-today unblocked

Bug Fixes

  • Half-screen scroll restored — overflow-hidden on <main> was clipping the stage at ~540px; replaced with overflow-y-auto and added minHeight: 640px to the grid wrapper so fr rows have a usable definite height
  • Stale closure in TaskListWidget items-changed handler fixed — handler was capturing a stale loadTasks reference; useRef + useCallback pattern ensures the latest version is always called
  • RLS infinite recursion (42P17) resolved — projects SELECT policy queried project_collaborators, whose policy queried projects, creating a deadlock loop on any query joining projects (e.g., items → projects(title))
  • Fix: is_project_collaborator() SECURITY DEFINER function queries project_collaborators directly, bypassing its RLS and breaking the cycle; projects SELECT policy updated to use it
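The shape of the fix can be sketched in SQL. Table and column names here are assumed from the description (project_id, user_id, auth.uid()); the actual policy text may differ.

```sql
-- Sketch of the fix shape (column names assumed): a SECURITY DEFINER
-- function reads project_collaborators with the function owner's
-- privileges, so its query does not re-enter RLS and the cycle breaks.
create or replace function is_project_collaborator(p_project_id uuid)
returns boolean
language sql
security definer
set search_path = public
as $$
  select exists (
    select 1 from project_collaborators
    where project_id = p_project_id
      and user_id = auth.uid()
  );
$$;

-- The projects SELECT policy calls the function instead of an inline
-- subquery that would re-trigger project_collaborators RLS.
create policy "Members read projects" on projects
  for select using (is_project_collaborator(id));
```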

Infrastructure

  • Diagnostic logging added to commitToToday and TaskListWidget committed items SELECT to surface silent failures during investigation
  • supabase/schema.sql synced to match live RLS fix — is_project_collaborator() function and updated project table policies now reflected in source of truth
  • Committed items moved above regular tasks in render order — visible without scrolling

Lessons

  • RLS policies that reference each other's tables form recursion cycles invisible until a cross-table query hits them — SECURITY DEFINER functions break the loop by querying at the function owner's privilege level, not the caller's
  • Silent count=0 failures are harder to diagnose than errors — adding a read-back SELECT after an update immediately surfaces whether the write landed

TODO

  • Confirm ISSUE-006 fully resolved — RLS fix is applied, but committed items behavior not yet user-verified in live session
  • ISSUE-007: work tasks disappear after mode round-trip
  • ISSUE-008: work/personal data leaking between modes in daily brief + planning overlay
CURRENT-OS · feature · ai · projects

Shed — Home Projects feature shipped

Features

  • Shed (Home Projects) feature complete — AI-powered home project tracker inside Current OS
  • Project intake form with scope, phase, materials, and priority fields
  • Projects overlay renders active projects with status and last-updated indicators
  • AI scheduling integration — projects surface in prioritization context

Bug Fixes

  • Project intake overlay z-index conflict with calendar resolved
  • Phase label display corrected for multi-phase projects

Lessons

  • Keeping features namespaced (Shed, not just "projects") helps maintain mental boundaries in a large app
  • AI-aware data models pay off immediately — fields added for AI context during intake proved useful in first scheduling run
CURRENT-OS · SHIPPED · feature · ai · projects

Shed — AI home project tracker shipped

Features

  • Shed ships — AI-powered home project tracker operational inside Current OS
  • Natural-language intake: describe a project in plain terms, Claude structures it into category, phase, materials, and priority
  • Projects overlay surfaces active work with status and last-updated indicators — no digging through lists
  • AI scheduling integration active from day one — projects surface in the prioritization engine context at intake

Lessons

  • Naming matters: "Shed" (not just "projects") creates a mental namespace that keeps feature scope clear inside a large app
  • Building for AI context at intake pays off immediately — fields added for Claude's use proved useful in the first scheduling run before the feature shipped
  • The AI-aware data model pattern: design the schema around what AI needs to be useful, not only what humans need to enter
CURRENT-OS · SHIPPED · feature · infrastructure

Calendar sync — Google Calendar integration shipped

Features

  • Calendar sync ships — two-way Google Calendar integration live in Current OS
  • Events read from Google Calendar at sync time and written into the Supabase events table
  • Lifecycle pipeline picks up new events on sync — normalization and correction run automatically on ingest
  • Changes made in Current OS propagate back to Google Calendar — source of truth stays synchronized

Lessons

  • Two-way sync requires deciding which side wins on conflict — making Current OS the write layer (with Google as display) simplified the mental model considerably
  • Sync as an event trigger (not a cron job) means the pipeline runs when data arrives, not on a schedule indifferent to activity
CURRENT-OS · SHIPPED · feature · ai · infrastructure

Intelligence Foundation — AI scheduling layer shipped

Features

  • Intelligence Foundation ships — AI scheduling layer operational, event lifecycle pipeline live
  • Event correction pipeline: Claude reviews raw calendar events and normalizes titles, durations, and categories before they reach the scheduling engine
  • Lifecycle tracking: events carry state (raw, normalized, scheduled, completed) — full audit trail from ingest to action
  • Normalization runs on ingest — events are clean before they hit the scheduler, not after

Lessons

  • AI as suggestion layer (not replacement) keeps corrections reversible — original data is preserved, AI output sits alongside it
  • lifecycle_state as an enum column gives clearer audit trails than boolean flags — state machines beat is_processed columns
  • Defining "done" for an event is the hardest design decision in an AI pipeline — encoding it as a state machine forced that decision early
CURRENT-OS · infrastructure · ai · phase

Intelligence Foundation — corrections, lifecycle, normalization

Features

  • Intelligence Foundation phase complete — AI scheduling layer operational
  • Event correction pipeline: Claude reviews raw calendar events and normalizes titles, durations, and categories
  • Lifecycle tracking: events now carry state (raw, normalized, scheduled, completed)
  • Normalization runs on ingest — events are clean before they hit the scheduling engine

Infrastructure

  • Supabase schema extended with lifecycle_state and normalized_at columns
  • Correction pipeline runs as a server action triggered by calendar sync
  • Idempotent design — re-running normalization on already-normalized events is safe
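The lifecycle states named above (raw, normalized, scheduled, completed) suggest a small state machine, and the idempotency property falls out of it naturally. This is an illustrative sketch inferred from the description, not the actual pipeline code.

```typescript
// Sketch of lifecycle_state as an explicit state machine. Legal
// transitions are encoded once; re-running normalization is safe
// because re-entering the current state is a no-op. Illustrative only.
type LifecycleState = "raw" | "normalized" | "scheduled" | "completed";

const transitions: Record<LifecycleState, LifecycleState[]> = {
  raw: ["normalized"],
  normalized: ["scheduled"],
  scheduled: ["completed"],
  completed: [],
};

function advance(current: LifecycleState, next: LifecycleState): LifecycleState {
  if (current === next) return current; // idempotent re-run is safe
  if (!transitions[current].includes(next)) {
    throw new Error(`illegal transition ${current} -> ${next}`);
  }
  return next;
}
```

Compared with an is_processed boolean, the transition table both documents the pipeline and rejects out-of-order writes.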

Lessons

  • Treating AI output as a suggestion layer (not a replacement for original data) made corrections reversible
  • lifecycle_state as an enum column gave clearer audit trails than boolean flags
  • The hardest part of AI pipeline design is deciding what "done" looks like for a given event — encoding that as a state machine helped