Compare commits

8 Commits

| SHA1 |
|---|
| 96039f6530 |
| 6c5f95ad26 |
| fafba0f01b |
| 3c95fa97cd |
| dbca0548b1 |
| 9b2c1e5ceb |
| 1d43adcfa0 |
| a6c133319a |

283  README.md
@@ -1,255 +1,118 @@
# feedkit

`feedkit` provides domain-agnostic plumbing for feed-processing daemons.

A daemon built on feedkit typically:

- ingests upstream input (polling APIs or consuming streams)
- emits domain-agnostic `event.Event` values
- applies optional processing (normalization, dedupe, policy)
- routes events to sinks (stdout, NATS, files, databases, etc.)

feedkit is designed to be reused by many concrete daemons (e.g. `weatherfeeder`,
`newsfeeder`, `rssfeeder`) without embedding *any* domain-specific logic.

## Philosophy

feedkit is not a framework. It provides small composable packages and leaves
lifecycle, domain schemas, and domain-specific validation in your daemon.

It does **not**:

- define domain schemas
- enforce allowed event kinds
- hide control flow behind inversion-of-control magic
- own your application lifecycle

Instead, it provides small, composable primitives that concrete daemons wire
together explicitly. The goal is clarity, predictability, and long-term
maintainability.

## Conceptual pipeline

Collect -> Process (optional stages, including normalize) -> Route -> Emit

In feedkit terms:

| Stage | Package(s) |
|---|---|
| Collect | `sources`, `scheduler` |
| Process | `pipeline`, `processors`, `normalize` (optional stage) |
| Route | `dispatch` |
| Emit | `sinks` |
| Configure | `config` |

## Core packages

### `config`

Loads YAML config with strict decoding (`KnownFields(true)`) and domain-agnostic
validation (shape, required fields, references).

`SourceConfig` supports both source modes:

- `mode: poll` requires `every`
- `mode: stream` forbids `every`
- omitted `mode` means auto (inferred from the registered driver type)

It also supports optional expected source kinds:

- `kinds: ["observation", "alert"]` (preferred)
- `kind: "observation"` (legacy fallback)

Driver-specific settings live in a flexible `Params map[string]any` with typed
helpers.

Key types: `config.Config`, `config.SourceConfig`, `config.SinkConfig`,
`config.Load(path)`.
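A config exercising these rules might look like the following sketch (driver names, params, and kinds here are illustrative placeholders, not built into feedkit):

```yaml
sources:
  - name: openmeteo
    driver: openmeteo_observation   # hypothetical poll driver
    mode: poll                      # poll => "every" is required
    every: 15m
    kinds: ["observation"]
    params:
      station_id: "ABC123"          # driver-specific knobs live in params

  - name: firehose
    driver: nats_stream             # hypothetical stream driver
    mode: stream                    # stream => "every" must be omitted

sinks:
  - name: out
    driver: stdout

routes:
  - sink: out                       # kinds omitted => route matches ALL kinds
```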

### `event`

Defines the domain-agnostic event envelope (`event.Event`) used across the
system. The envelope includes:

- a stable ID
- a kind (stringly typed, domain-defined)
- the source name
- timestamps (`EmittedAt`, optional `EffectiveAt`)
- an optional `Schema` for payload versioning
- an opaque `Payload`

Key types: `event.Event`, `event.Kind`, `event.ParseKind`, `event.Event.Validate`.

feedkit infrastructure never inspects `Payload`.

### `sources`

Defines the source interfaces and a driver registry:

```go
type Input interface {
	Name() string
}

type PollSource interface {
	Input
	Poll(ctx context.Context) ([]event.Event, error)
}

type StreamSource interface {
	Input
	Run(ctx context.Context, out chan<- event.Event) error
}
```

The registry (`sources.Registry`) lets daemons register drivers
(e.g. `openweather_observation`, `rss_feed`) without switch statements.

Notes:

- a poll can emit `0..N` events
- stream sources emit events continuously
- a single source may emit multiple event kinds
- driver implementations live in downstream daemons and are registered via `sources.Registry`

### `scheduler`

Runs one goroutine per source job:

- poll sources: cadence driven (`every` + jitter)
- stream sources: continuous run loop

Features: per-source intervals, deterministic jitter (avoids thundering herd),
an immediate poll at startup, and context-aware shutdown.

Key types: `scheduler.Scheduler`, `scheduler.Job`.

### `pipeline`

Optional processing chain between collection and dispatch. Processors can
transform, drop, or reject events.

```go
type Processor interface {
	Process(ctx context.Context, in event.Event) (*event.Event, error)
}
```

`pipeline.Pipeline` is fully implemented; placeholder files exist for planned
`dedupe` and `ratelimit` processors.
### `processors`

Defines the generic processor interface and a named-driver registry used by
daemons to build ordered processor chains.

### `normalize`

Concrete normalization processor implementation. Typical use: sources emit raw
payload events, then a normalize stage maps them to canonical schemas.

### `dispatch`

Compiles routes and fans out events to sinks with per-sink queue/worker
isolation. Features:

- compiled routing rules
- per-sink buffered queues
- bounded enqueue timeouts and per-consume timeouts
- sink panic isolation
- context-aware shutdown

Key types: `dispatch.Dispatcher`, `dispatch.Route`, `dispatch.Fanout`.
### `sinks`

Defines the sink interface and a sink registry. Built-ins include `stdout` and
`nats`, with additional sink implementations at varying maturity. All sinks are
required to respect context cancellation.

```go
type Sink interface {
	Name() string
	Consume(ctx context.Context, e event.Event) error
}
```

## Typical wiring

1. Load config.
2. Register/build sources from `cfg.Sources`.
3. Register/build sinks from `cfg.Sinks`.
4. Compile routes.
5. Start scheduler (`sources -> bus`).
6. Start dispatcher (`bus -> pipeline -> sinks`).
## Non-goals

feedkit intentionally does not:

- define domain payload schemas
- enforce domain-specific event kinds
- own application lifecycle
- prescribe observability stack choices

Those concerns belong in concrete implementations.

## See also

- NAMING.md — repository and daemon naming conventions
- event/doc.go — detailed event semantics
- **Concrete example:** weatherfeeder (reference implementation)

---
@@ -21,20 +21,56 @@ type Config struct {
```go
	Routes []RouteConfig `yaml:"routes"`
}

// SourceMode selects how a source receives upstream input.
//
// Empty mode means "auto": feedkit infers mode from the registered driver type.
type SourceMode string

const (
	SourceModeAuto   SourceMode = ""
	SourceModePoll   SourceMode = "poll"
	SourceModeStream SourceMode = "stream"
)

// Normalize lowercases and trims the mode.
func (m SourceMode) Normalize() SourceMode {
	switch strings.ToLower(strings.TrimSpace(string(m))) {
	case "":
		return SourceModeAuto
	case string(SourceModePoll):
		return SourceModePoll
	case string(SourceModeStream):
		return SourceModeStream
	default:
		return SourceMode(strings.ToLower(strings.TrimSpace(string(m))))
	}
}

// SourceConfig describes one input source.
//
// This is intentionally generic:
// - driver-specific knobs belong in Params.
// - mode controls polling vs streaming behavior.
// - expected emitted kinds are optional and domain-defined.
type SourceConfig struct {
	Name   string `yaml:"name"`
	Driver string `yaml:"driver"` // e.g. "openmeteo_observation", "rss_feed", etc.

	// Mode is optional:
	// - "poll": Every must be set (>0)
	// - "stream": Every must be omitted/zero
	// - empty: infer from driver registration type (poll vs stream)
	Mode SourceMode `yaml:"mode"`

	// Every is the poll cadence for poll-mode sources ("15m", "1m", etc.).
	Every Duration `yaml:"every"`

	// Kinds is optional and domain-defined.
	// If set, it describes the expected emitted event kinds for this source.
	Kinds []string `yaml:"kinds"`

	// Kind is the legacy singular form. Prefer "kinds".
	// If both kind and kinds are set, validation fails.
	Kind string `yaml:"kind"`

	// Params are driver-specific settings (URL, headers, station IDs, API keys, etc.).
```
@@ -42,6 +78,26 @@ type SourceConfig struct {
```go
	Params map[string]any `yaml:"params"`
}

// ExpectedKinds returns normalized expected kinds from config.
// "kinds" takes precedence; "kind" is used as a legacy fallback.
func (cfg SourceConfig) ExpectedKinds() []string {
	if len(cfg.Kinds) > 0 {
		out := make([]string, 0, len(cfg.Kinds))
		for _, k := range cfg.Kinds {
			k = strings.TrimSpace(k)
			if k == "" {
				continue
			}
			out = append(out, k)
		}
		return out
	}
	if k := strings.TrimSpace(cfg.Kind); k != "" {
		return []string{k}
	}
	return nil
}
```
```go
// SinkConfig describes one output sink adapter.
type SinkConfig struct {
	Name string `yaml:"name"`
```

@@ -54,7 +110,10 @@ type RouteConfig struct {
```go
	Sink string `yaml:"sink"` // sink name

	// Kinds is domain-defined. feedkit only enforces that each entry is non-empty.
	//
	// If Kinds is omitted or empty, the route matches ALL kinds.
	// This is useful when you want explicit per-sink routing rules even when a
	// particular sink should receive everything.
	Kinds []string `yaml:"kinds"`
}
```
config/config_test.go (new file, 56 lines)
@@ -0,0 +1,56 @@
```go
package config

import (
	"reflect"
	"testing"
)

func TestSourceConfigExpectedKinds(t *testing.T) {
	tests := []struct {
		name string
		cfg  SourceConfig
		want []string
	}{
		{
			name: "plural kinds preferred",
			cfg: SourceConfig{
				Kinds: []string{" observation ", "forecast"},
				Kind:  "alert",
			},
			want: []string{"observation", "forecast"},
		},
		{
			name: "legacy singular fallback",
			cfg: SourceConfig{
				Kind: " alert ",
			},
			want: []string{"alert"},
		},
		{
			name: "empty kinds",
			cfg:  SourceConfig{},
			want: nil,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := tt.cfg.ExpectedKinds()
			if !reflect.DeepEqual(got, tt.want) {
				t.Fatalf("ExpectedKinds() = %#v, want %#v", got, tt.want)
			}
		})
	}
}

func TestSourceModeNormalize(t *testing.T) {
	if got := SourceMode(" Poll ").Normalize(); got != SourceModePoll {
		t.Fatalf("Normalize poll = %q, want %q", got, SourceModePoll)
	}
	if got := SourceMode("STREAM").Normalize(); got != SourceModeStream {
		t.Fatalf("Normalize stream = %q, want %q", got, SourceModeStream)
	}
	if got := SourceMode("").Normalize(); got != SourceModeAuto {
		t.Fatalf("Normalize auto = %q, want %q", got, SourceModeAuto)
	}
}
```
@@ -83,15 +83,41 @@ func (c *Config) Validate() error {
```go
		m.Add(fieldErr(path+".driver", "is required (e.g. openmeteo_observation, rss_feed, ...)"))
	}

	// Mode
	mode := s.Mode.Normalize()
	if s.Mode != SourceModeAuto && mode != SourceModePoll && mode != SourceModeStream {
		m.Add(fieldErr(path+".mode", `must be one of: "poll", "stream" (or omit for auto)`))
	}

	// Every
	if s.Every.Duration < 0 {
		m.Add(fieldErr(path+".every", "is optional, but must be a positive duration (e.g. 15m, 1m, 30s) if provided"))
	} else {
		switch mode {
		case SourceModePoll:
			if s.Every.Duration <= 0 {
				m.Add(fieldErr(path+".every", `is required when mode="poll" (e.g. 15m, 1m, 30s)`))
			}
		case SourceModeStream:
			if s.Every.Duration > 0 {
				m.Add(fieldErr(path+".every", `must be omitted when mode="stream"`))
			}
		}
	}

	// Kind/Kinds (optional)
	if s.Kind != "" && len(s.Kinds) > 0 {
		m.Add(fieldErr(path+".kind", `cannot be set when "kinds" is provided (use only "kinds")`))
	}
	if s.Kind != "" && strings.TrimSpace(s.Kind) == "" {
		m.Add(fieldErr(path+".kind", "cannot be blank (omit it entirely, or provide a non-empty string)"))
	}
	for j, k := range s.Kinds {
		kpath := fmt.Sprintf("%s.kinds[%d]", path, j)
		if strings.TrimSpace(k) == "" {
			m.Add(fieldErr(kpath, "kind cannot be empty"))
		}
	}

	// Params can be nil; that's fine.
}
```
@@ -133,16 +159,12 @@ func (c *Config) Validate() error {
```go
		m.Add(fieldErr(path+".sink", fmt.Sprintf("references unknown sink %q (define it under sinks:)", r.Sink)))
	}

	// Kinds is optional. If omitted or empty, the route matches ALL kinds.
	// If provided, each entry must be non-empty.
	for j, k := range r.Kinds {
		kpath := fmt.Sprintf("%s.kinds[%d]", path, j)
		if strings.TrimSpace(k) == "" {
			m.Add(fieldErr(kpath, "kind cannot be empty"))
		}
	}
}
```
config/validate_test.go (new file, 164 lines)
@@ -0,0 +1,164 @@
```go
package config

import (
	"strings"
	"testing"
	"time"
)

func TestValidate_RouteKindsEmptyIsAllowed(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []RouteConfig{
			{Sink: "sink1", Kinds: nil},        // omitted
			{Sink: "sink1", Kinds: []string{}}, // explicit empty
		},
	}

	if err := cfg.Validate(); err != nil {
		t.Fatalf("expected no error, got: %v", err)
	}
}

func TestValidate_RouteKindsRejectsBlankEntries(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []RouteConfig{
			{Sink: "sink1", Kinds: []string{"observation", " ", "alert"}},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), "routes[0].kinds[1]") {
		t.Fatalf("expected error to mention blank kind entry, got: %v", err)
	}
}

func TestValidate_SourceModePollRequiresEvery(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{Name: "src1", Driver: "driver1", Mode: SourceModePoll},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), `sources[0].every`) {
		t.Fatalf("expected error to mention sources[0].every, got: %v", err)
	}
}

func TestValidate_SourceModeStreamRejectsEvery(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{
				Name:   "src1",
				Driver: "driver1",
				Mode:   SourceModeStream,
				Every:  Duration{Duration: time.Minute},
			},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), `sources[0].every`) {
		t.Fatalf("expected error to mention sources[0].every, got: %v", err)
	}
}

func TestValidate_SourceModeRejectsUnknownValue(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{
				Name:   "src1",
				Driver: "driver1",
				Mode:   SourceMode("batch"),
				Every:  Duration{Duration: time.Minute},
			},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), `sources[0].mode`) {
		t.Fatalf("expected error to mention sources[0].mode, got: %v", err)
	}
}

func TestValidate_SourceKindAndKindsConflict(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{
				Name:   "src1",
				Driver: "driver1",
				Every:  Duration{Duration: time.Minute},
				Kind:   "observation",
				Kinds:  []string{"forecast"},
			},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), `sources[0].kind`) {
		t.Fatalf("expected error to mention sources[0].kind, got: %v", err)
	}
}

func TestValidate_SourceKindsRejectBlankEntries(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{
				Name:   "src1",
				Driver: "driver1",
				Every:  Duration{Duration: time.Minute},
				Kinds:  []string{"observation", " "},
			},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), `sources[0].kinds[1]`) {
		t.Fatalf("expected error to mention sources[0].kinds[1], got: %v", err)
	}
}
```
@@ -13,11 +13,13 @@ import (
```go
// Behavior:
// - If cfg.Routes is empty, we default to "all sinks receive all kinds".
//   (Implemented as one Route per sink with Kinds == nil.)
// - If a specific route's kinds: is omitted or empty, that route matches ALL kinds.
//   (Also compiled as Kinds == nil.)
// - Kind strings are normalized via event.ParseKind (lowercase + trim).
//
// Note: config.Validate() ensures route.sink references a known sink and rejects
// blank kind entries. We re-check a few invariants here anyway so CompileRoutes
// is safe to call even if a daemon chooses not to call Validate().
func CompileRoutes(cfg *config.Config) ([]Route, error) {
	if cfg == nil {
		return nil, fmt.Errorf("dispatch.CompileRoutes: cfg is nil")
```
@@ -27,14 +29,13 @@ func CompileRoutes(cfg *config.Config) ([]Route, error) {
```go
		return nil, fmt.Errorf("dispatch.CompileRoutes: cfg has no sinks")
	}

	// Build a quick lookup of sink names (exact match; no normalization).
	sinkNames := make(map[string]bool, len(cfg.Sinks))
	for i, s := range cfg.Sinks {
		if strings.TrimSpace(s.Name) == "" {
			return nil, fmt.Errorf("dispatch.CompileRoutes: sinks[%d].name is empty", i)
		}
		sinkNames[s.Name] = true
	}

	// Default routing: everything to every sink.
```
@@ -52,16 +53,21 @@ func CompileRoutes(cfg *config.Config) ([]Route, error) {
```go
	out := make([]Route, 0, len(cfg.Routes))

	for i, r := range cfg.Routes {
		sink := r.Sink
		if strings.TrimSpace(sink) == "" {
			return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink is required", i)
		}
		if !sinkNames[sink] {
			return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink references unknown sink %q", i, sink)
		}

		// If kinds is omitted/empty, this route matches all kinds.
		if len(r.Kinds) == 0 {
			out = append(out, Route{
				SinkName: sink,
				Kinds:    nil,
			})
			continue
		}

		kinds := make(map[event.Kind]bool, len(r.Kinds))
```
|||||||
dispatch/routes_test.go (new file, 67 lines)

package dispatch

import (
	"testing"

	"gitea.maximumdirect.net/ejr/feedkit/config"
)

func TestCompileRoutes_DefaultIsAllSinksAllKinds(t *testing.T) {
	cfg := &config.Config{
		Sinks: []config.SinkConfig{
			{Name: "a", Driver: "stdout"},
			{Name: "b", Driver: "stdout"},
		},
		// Routes omitted => default
	}

	routes, err := CompileRoutes(cfg)
	if err != nil {
		t.Fatalf("CompileRoutes error: %v", err)
	}
	if len(routes) != 2 {
		t.Fatalf("expected 2 routes, got %d", len(routes))
	}

	// Order should match cfg.Sinks order (deterministic).
	if routes[0].SinkName != "a" || routes[1].SinkName != "b" {
		t.Fatalf("unexpected route order: %+v", routes)
	}

	for _, r := range routes {
		if len(r.Kinds) != 0 {
			t.Fatalf("expected nil/empty kinds for default routes, got: %+v", r.Kinds)
		}
	}
}

func TestCompileRoutes_EmptyKindsMeansAllKinds(t *testing.T) {
	cfg := &config.Config{
		Sinks: []config.SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []config.RouteConfig{
			{Sink: "sink1"},                    // omitted kinds
			{Sink: "sink1", Kinds: nil},        // explicit nil
			{Sink: "sink1", Kinds: []string{}}, // explicit empty
		},
	}

	routes, err := CompileRoutes(cfg)
	if err != nil {
		t.Fatalf("CompileRoutes error: %v", err)
	}

	if len(routes) != 3 {
		t.Fatalf("expected 3 routes, got %d", len(routes))
	}

	for i, r := range routes {
		if r.SinkName != "sink1" {
			t.Fatalf("route[%d] unexpected sink: %q", i, r.SinkName)
		}
		if len(r.Kinds) != 0 {
			t.Fatalf("route[%d] expected nil/empty kinds (match all), got: %+v", i, r.Kinds)
		}
	}
}
doc.go (195 changed lines)
@@ -1,183 +1,110 @@
-// Package feedkit provides domain-agnostic plumbing for "feed processing daemons".
+// Package feedkit provides domain-agnostic plumbing for feed-processing daemons.
 //
-// A feed daemon polls one or more upstream providers (HTTP APIs, RSS, etc.),
-// converts upstream items into a normalized internal representation, applies
-// lightweight policy (dedupe/rate-limit/filters), and emits events to one or
-// more sinks (stdout, files, Postgres, brokers, ...).
+// A feed daemon ingests upstream input, turns it into event.Event values, applies
+// optional processing, and emits to sinks.
 //
-// feedkit is intentionally NOT a framework. It supplies small, composable
-// primitives that concrete daemons wire together in main.go (or via a small
-// optional Runner helper, see "Future additions").
+// Conceptual flow:
 //
-// Conceptual pipeline
-//
-//	Collect → Normalize → Filter/Policy → Persist/Emit → Signal
-//
-// In feedkit today, that maps to:
-//
-//	Collect:   sources.Source + scheduler.Scheduler
-//	Normalize: (today: domain code typically does this inside Source.Poll;
-//	           future: a normalization Processor is a good fit)
-//	Policy:    pipeline.Pipeline (Processor chain; dedupe/ratelimit are planned)
-//	Emit:      dispatch.Dispatcher + dispatch.Fanout
-//	Sinks:     sinks.Sink (+ sinks.Registry to build from config)
-//	Config:    config.Load + config.Config validation
+//	Collect -> Process (optional stages, including normalize) -> Route -> Emit
+//
+// In feedkit this maps to:
+//
+//	Collect: sources + scheduler
+//	Process: pipeline + processors + normalize (optional stage)
+//	Route:   dispatch
+//	Emit:    sinks
+//	Config:  config
+//
+// feedkit intentionally does not define domain payload schemas or domain-specific
+// validation rules. Those belong in each concrete daemon.
 //
-// Public packages (API surface)
+// Public packages
 //
 // - config
-//	YAML configuration types and loader/validator.
+//	YAML config loading/validation (strict decode + domain-agnostic checks).
 //
-//	- config.Load(path) (*config.Config, error)
-//
-//	- config.Config: Sources, Sinks, Routes
-//
-//	- config.SourceConfig / SinkConfig include Params map[string]any
-//	  with convenience helpers like:
-//
-//	  - ParamString / ParamStringDefault
-//
-//	  - ParamBool / ParamBoolDefault
-//
-//	  - ParamInt / ParamIntDefault
-//
-//	  - ParamDuration / ParamDurationDefault
-//
-//	  - ParamStringSlice
+//	SourceConfig supports both polling and streaming sources:
+//
+//	- mode: "poll" | "stream" | omitted (auto by driver type)
+//
+//	- every: poll interval (required for mode="poll")
+//
+//	- kinds: optional expected emitted kinds
+//
+//	- kind: legacy singular fallback
+//
+//	- params: driver-specific settings
 //
 // - event
-//	Domain-agnostic event envelope moved through the system.
-//
-//	- event.Event includes ID, Kind, Source, timestamps, Schema, Payload
-//
-//	- event.Kind is stringly typed; event.ParseKind normalizes/validates.
+//	Domain-agnostic event envelope (ID, Kind, Source, EmittedAt, Schema, Payload).
 //
 // - sources
-//	Extension point for domain-specific polling jobs.
+//	Source abstractions and source-driver registry.
 //
-//	- sources.Source interface: Name(), Kind(), Poll(ctx)
+//	There are two source interfaces:
 //
-//	- sources.Registry lets daemons register driver factories and build
-//	  sources from config.SourceConfig.
+//	- PollSource: Poll(ctx) ([]event.Event, error)
+//
+//	- StreamSource: Run(ctx, out) error
+//
+//	Both share Input{Name()}. A source may emit 0..N events per poll/run step,
+//	and may emit multiple event kinds.
 //
 // - scheduler
-//	Runs sources on a cadence and publishes emitted events onto a channel.
+//	Runs one goroutine per job:
 //
-//	- scheduler.Scheduler{Jobs, Out, Logf}.Run(ctx)
+//	- PollSource jobs run on Every (+ jitter)
 //
-//	- scheduler.Job: {Source, Every, Jitter}
+//	- StreamSource jobs run continuously
 //
 // - pipeline
-//	Optional processing chain between scheduler and dispatch.
+//	Processor chain between scheduler and dispatch.
+//	Processors can transform, drop, or reject events.
 //
-//	- pipeline.Pipeline{Processors}.Process(ctx, event)
+// - processors
+//	Generic processor interface and named factory registry for wiring chains.
 //
-//	- pipeline.Processor can mutate, drop (return nil), or error.
-//
-//	- dedupe/ratelimit processors are placeholders (planned).
-//
-// - sinks
-//	Extension point for output adapters.
-//
-//	- sinks.Sink interface: Name(), Consume(ctx, event)
-//
-//	- sinks.Registry to register driver factories and build sinks from config
-//
-//	- sinks.RegisterBuiltins registers feedkit-provided sink drivers
-//	  (stdout/file/postgres/rabbitmq; some are currently stubs).
+// - normalize
+//	Concrete pipeline processor for raw->canonical mapping.
+//	If no normalizer matches, the event passes through unchanged by default.
 //
 // - dispatch
-//	Routes processed events to sinks, and isolates slow sinks via per-sink queues.
+//	Routes events to sinks and isolates slow sinks via per-sink queues/workers.
 //
-//	- dispatch.Dispatcher{In, Pipeline, Sinks, Routes, ...}.Run(ctx, logf)
-//
-//	- dispatch.Fanout: one buffered queue + worker goroutine per sink
-//
-//	- dispatch.CompileRoutes(*config.Config) compiles cfg.Routes into []dispatch.Route.
-//	  If routes: is omitted, it defaults to "all sinks receive all kinds".
-//
-// - logging
-//	Shared logger type used across feedkit packages.
-//
-//	- logging.Logf is a printf-style logger signature.
-//
-// Typical wiring (what a daemon does in main.go)
-//
-//  1. Load config (domain code may add domain-specific validation).
-//  2. Register and build sources from config.Sources using sources.Registry.
-//  3. Register and build sinks from config.Sinks using sinks.Registry.
-//  4. Compile routes (typically via dispatch.CompileRoutes).
-//  5. Create an event bus channel.
-//  6. Start scheduler (sources → bus).
-//  7. Start dispatcher (bus → pipeline → routes → sinks).
-//
-// A sketch:
+// - sinks
+//	Sink abstractions + sink registry.
+//
+// Typical wiring (daemon main.go)
+//
+//  1. Load config.
+//  2. Register source drivers and build sources from config.Sources.
+//  3. Register sink drivers and build sinks from config.Sinks.
+//  4. Compile routes.
+//  5. Start scheduler (sources -> bus) and dispatcher (bus -> pipeline -> sinks).
+//
+// Sketch:
 //
 //	cfg, _ := config.Load("config.yml")
-//
-//	// Build sources (domain registers its drivers).
 //	srcReg := sources.NewRegistry()
-//	// domain: srcReg.Register("openweather_observation", newOpenWeatherSource)
-//	// ...
+//	// domain registers poll/stream drivers...
 //
 //	var jobs []scheduler.Job
 //	for _, sc := range cfg.Sources {
-//		src, _ := srcReg.Build(sc)
-//		jobs = append(jobs, scheduler.Job{Source: src, Every: sc.Every.Duration})
+//		src, _ := srcReg.BuildInput(sc)
+//		jobs = append(jobs, scheduler.Job{
+//			Source: src,
+//			Every:  sc.Every.Duration,
+//		})
 //	}
 //
-//	// Build sinks (feedkit can register builtins).
-//	sinkReg := sinks.NewRegistry()
-//	sinks.RegisterBuiltins(sinkReg)
-//	builtSinks := map[string]sinks.Sink{}
-//	for _, sk := range cfg.Sinks {
-//		s, _ := sinkReg.Build(sk)
-//		builtSinks[sk.Name] = s
-//	}
-//
-//	// Compile routes.
-//	routes, _ := dispatch.CompileRoutes(cfg)
-//
-//	// Event bus.
 //	bus := make(chan event.Event, 256)
-//
-//	// Scheduler.
 //	s := &scheduler.Scheduler{Jobs: jobs, Out: bus, Logf: logf}
-//
-//	// Dispatcher.
-//	d := &dispatch.Dispatcher{
-//		In:       bus,
-//		Pipeline: &pipeline.Pipeline{Processors: nil},
-//		Sinks:    builtSinks,
-//		Routes:   routes,
-//	}
-//
-//	go s.Run(ctx)
-//	return d.Run(ctx, logf)
-//
-// Conventions (recommended, not required)
-//
-// - Event.ID should be stable for dedupe/storage (often "<provider>:<upstream-id>").
-// - Event.Kind should be lowercase ("observation", "alert", "article", ...).
-// - Event.Schema should identify the payload shape/version
-//   (e.g. "weather.observation.v1").
+//	// start dispatcher similarly...
 //
 // # Context and cancellation
 //
-// All blocking or I/O work should honor ctx.Done():
-//   - sources.Source.Poll should pass ctx to HTTP calls, etc.
-//   - sinks.Sink.Consume should honor ctx (Fanout timeouts only help if sinks cooperate).
-//
-// Future additions (likely)
-//
-// - A small Runner helper that performs the standard wiring (load config,
-//   build sources/sinks/routes, run scheduler+dispatcher, handle shutdown).
-// - A normalization hook (a Pipeline Processor + registry) that allows sources
-//   to emit "raw" payloads and defer normalization to a dedicated stage.
-//
-// # Non-goals
-//
-// feedkit does not define domain payload schemas, does not enforce domain kinds,
-// and does not embed domain-specific validation rules. Those live in each
-// concrete daemon/module (weatherfeeder, newsfeeder, ...).
+// All blocking work should honor context cancellation:
+//   - source polling/streaming I/O
+//   - sink consumption
+//   - any expensive processor work
 package feedkit
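The envelope conventions the package docs describe (a stable ID, a lowercase kind, a versioned schema string) can be sketched standalone. `Event` and `newObservation` below are local illustrations of those conventions, not the feedkit API:

```go
package main

import (
	"fmt"
	"time"
)

// Event is a local stand-in for feedkit's event.Event envelope
// (ID, Kind, Source, EmittedAt, Schema, Payload).
type Event struct {
	ID        string
	Kind      string
	Source    string
	EmittedAt time.Time
	Schema    string
	Payload   any
}

// newObservation follows the documented conventions: a stable ID of the form
// "<provider>:<upstream-id>" (useful for dedupe/storage), a lowercase kind,
// and a versioned schema string identifying the payload shape.
func newObservation(provider, upstreamID string, payload any) Event {
	return Event{
		ID:        provider + ":" + upstreamID,
		Kind:      "observation",
		Source:    provider,
		EmittedAt: time.Now().UTC(),
		Schema:    "weather.observation.v1",
		Payload:   payload,
	}
}

func main() {
	e := newObservation("openweather", "station-42", map[string]any{"temp_c": 21.5})
	fmt.Println(e.ID)     // openweather:station-42
	fmt.Println(e.Schema) // weather.observation.v1
}
```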
go.mod (13 changed lines)
@@ -2,4 +2,15 @@ module gitea.maximumdirect.net/ejr/feedkit
 
 go 1.22
 
-require gopkg.in/yaml.v3 v3.0.1
+require (
+	github.com/nats-io/nats.go v1.34.0
+	gopkg.in/yaml.v3 v3.0.1
+)
+
+require (
+	github.com/klauspost/compress v1.17.2 // indirect
+	github.com/nats-io/nkeys v0.4.7 // indirect
+	github.com/nats-io/nuid v1.0.1 // indirect
+	golang.org/x/crypto v0.18.0 // indirect
+	golang.org/x/sys v0.16.0 // indirect
+)
go.sum (13 changed lines)
@@ -1,3 +1,16 @@
+github.com/klauspost/compress v1.17.2 h1:RlWWUY/Dr4fL8qk9YG7DTZ7PDgME2V4csBXA8L/ixi4=
+github.com/klauspost/compress v1.17.2/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
+github.com/nats-io/nats.go v1.34.0 h1:fnxnPCNiwIG5w08rlMcEKTUw4AV/nKyGCOJE8TdhSPk=
+github.com/nats-io/nats.go v1.34.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
+github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
+github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
+github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
+github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
+golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc=
+golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
+golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
+golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
normalize/doc.go (new file, 17 lines)

// Package normalize provides a concrete normalization processor for feedkit pipelines.
//
// Motivation:
// Many daemons have sources that:
//  1. fetch raw upstream data (often JSON), and
//  2. transform it into a domain's normalized payload format.
//
// Doing both steps inside Source.Poll works, but tends to make sources large and
// encourages duplication (unit conversions, common mapping helpers, etc.).
//
// This package lets a source emit a "raw" event (e.g., Schema="raw.openweather.current.v1",
// Payload=json.RawMessage), and then a normalize.Processor can convert it into a
// normalized event (e.g., Schema="weather.observation.v1", Payload=WeatherObservation{}).
//
// Key property: normalization is optional.
// If no Normalizer matches an event, Processor passes it through unchanged by default.
package normalize
normalize/normalize.go (new file, 76 lines)

package normalize

import (
	"context"
	"fmt"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// Normalizer converts one event shape into another.
//
// A Normalizer is typically domain-owned code (weatherfeeder/newsfeeder/...)
// that knows how to interpret a specific upstream payload and produce a
// normalized payload.
//
// Normalizers are selected via Match(). The matching strategy is intentionally
// flexible: implementations may match on Schema, Kind, Source, or any other
// Event fields.
type Normalizer interface {
	// Match reports whether this normalizer applies to the given event.
	//
	// Common patterns:
	//   - match on e.Schema (recommended for versioning)
	//   - match on e.Source (useful if Schema is empty)
	//   - match on (e.Kind + e.Source), etc.
	Match(e event.Event) bool

	// Normalize transforms the incoming event into a new (or modified) event.
	//
	// Return values:
	//   - (out, nil) where out != nil: emit the normalized event
	//   - (nil, nil): drop the event (treat as policy drop)
	//   - (nil, err): fail the pipeline
	//
	// Note: If you simply want to pass the event through unchanged, return &in.
	Normalize(ctx context.Context, in event.Event) (*event.Event, error)
}

// Func is an ergonomic adapter that lets you define a Normalizer with functions.
//
// Example:
//
//	n := normalize.Func{
//		MatchFn: func(e event.Event) bool { return e.Schema == "raw.openweather.current.v1" },
//		NormalizeFn: func(ctx context.Context, in event.Event) (*event.Event, error) {
//			// ... map in.Payload -> normalized payload ...
//		},
//	}
type Func struct {
	MatchFn     func(e event.Event) bool
	NormalizeFn func(ctx context.Context, in event.Event) (*event.Event, error)

	// Optional: helps produce nicer panic/error messages if something goes wrong.
	Name string
}

func (f Func) Match(e event.Event) bool {
	if f.MatchFn == nil {
		return false
	}
	return f.MatchFn(e)
}

func (f Func) Normalize(ctx context.Context, in event.Event) (*event.Event, error) {
	if f.NormalizeFn == nil {
		return nil, fmt.Errorf("normalize.Func(%s): NormalizeFn is nil", f.safeName())
	}
	return f.NormalizeFn(ctx, in)
}

func (f Func) safeName() string {
	if f.Name == "" {
		return "<unnamed>"
	}
	return f.Name
}
normalize/processor.go (new file, 57 lines)

package normalize

import (
	"context"
	"fmt"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// Processor applies ordered normalization rules to pipeline events.
//
// Selection rule:
//   - iterate in Normalizers order
//   - the first Normalizer whose Match returns true is applied
//
// If no normalizer matches, the default behavior is pass-through.
type Processor struct {
	Normalizers []Normalizer

	// If true, events that do not match any normalizer cause an error.
	// Default is false (pass-through).
	RequireMatch bool
}

// NewProcessor constructs a normalization processor from an ordered normalizer list.
func NewProcessor(normalizers []Normalizer, requireMatch bool) Processor {
	return Processor{
		Normalizers:  append([]Normalizer(nil), normalizers...),
		RequireMatch: requireMatch,
	}
}

// Process implements processors.Processor.
func (p Processor) Process(ctx context.Context, in event.Event) (*event.Event, error) {
	for _, n := range p.Normalizers {
		if n == nil {
			continue
		}
		if !n.Match(in) {
			continue
		}

		out, err := n.Normalize(ctx, in)
		if err != nil {
			return nil, fmt.Errorf("normalize: normalizer failed: %w", err)
		}
		return out, nil
	}

	if p.RequireMatch {
		return nil, fmt.Errorf("normalize: no normalizer matched event (id=%s kind=%s source=%s schema=%q)",
			in.ID, in.Kind, in.Source, in.Schema)
	}

	out := in
	return &out, nil
}
normalize/processor_test.go (new file, 139 lines)

package normalize

import (
	"context"
	"errors"
	"strings"
	"testing"
	"time"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

func TestProcessorFirstMatchWins(t *testing.T) {
	var firstCalls, secondCalls int

	p := NewProcessor([]Normalizer{
		Func{
			MatchFn: func(event.Event) bool { return true },
			NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
				firstCalls++
				out := in
				out.Schema = "normalized.first.v1"
				return &out, nil
			},
		},
		Func{
			MatchFn: func(event.Event) bool { return true },
			NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
				secondCalls++
				out := in
				out.Schema = "normalized.second.v1"
				return &out, nil
			},
		},
	}, false)

	out, err := p.Process(context.Background(), testEvent())
	if err != nil {
		t.Fatalf("Process error: %v", err)
	}
	if out == nil {
		t.Fatalf("expected output event, got nil")
	}
	if out.Schema != "normalized.first.v1" {
		t.Fatalf("unexpected schema: %q", out.Schema)
	}
	if firstCalls != 1 {
		t.Fatalf("expected first normalizer called once, got %d", firstCalls)
	}
	if secondCalls != 0 {
		t.Fatalf("expected second normalizer skipped, got %d calls", secondCalls)
	}
}

func TestProcessorNoMatchPassThroughAndRequireMatch(t *testing.T) {
	in := testEvent()
	in.Schema = "raw.schema.v1"

	passThrough := NewProcessor([]Normalizer{
		Func{
			MatchFn: func(event.Event) bool { return false },
			NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
				out := in
				out.Schema = "should.not.run"
				return &out, nil
			},
		},
	}, false)

	out, err := passThrough.Process(context.Background(), in)
	if err != nil {
		t.Fatalf("pass-through Process error: %v", err)
	}
	if out == nil {
		t.Fatalf("expected pass-through output event, got nil")
	}
	if out.Schema != "raw.schema.v1" {
		t.Fatalf("expected unchanged schema, got %q", out.Schema)
	}

	required := NewProcessor(nil, true)
	_, err = required.Process(context.Background(), in)
	if err == nil {
		t.Fatalf("expected require-match error")
	}
	if !strings.Contains(err.Error(), "no normalizer matched") {
		t.Fatalf("unexpected error: %v", err)
	}
}

func TestProcessorDropAndErrorPropagation(t *testing.T) {
	t.Run("drop", func(t *testing.T) {
		p := NewProcessor([]Normalizer{
			Func{
				MatchFn: func(event.Event) bool { return true },
				NormalizeFn: func(context.Context, event.Event) (*event.Event, error) {
					return nil, nil
				},
			},
		}, false)

		out, err := p.Process(context.Background(), testEvent())
		if err != nil {
			t.Fatalf("Process error: %v", err)
		}
		if out != nil {
			t.Fatalf("expected nil output for dropped event, got %#v", out)
		}
	})

	t.Run("error", func(t *testing.T) {
		p := NewProcessor([]Normalizer{
			Func{
				MatchFn: func(event.Event) bool { return true },
				NormalizeFn: func(context.Context, event.Event) (*event.Event, error) {
					return nil, errors.New("map failed")
				},
			},
		}, false)

		_, err := p.Process(context.Background(), testEvent())
		if err == nil {
			t.Fatalf("expected error")
		}
		if !strings.Contains(err.Error(), "normalizer failed") {
			t.Fatalf("unexpected error: %v", err)
		}
	})
}

func testEvent() event.Event {
	return event.Event{
		ID:        "evt-normalize-1",
		Kind:      event.Kind("observation"),
		Source:    "source-1",
		EmittedAt: time.Now().UTC(),
		Payload:   map[string]any{"x": 1},
	}
}
pipeline/pipeline.go
@@ -5,15 +5,11 @@ import (
 	"fmt"
 
 	"gitea.maximumdirect.net/ejr/feedkit/event"
+	"gitea.maximumdirect.net/ejr/feedkit/processors"
 )
 
-// Processor can mutate/drop events (dedupe, rate-limit, normalization tweaks).
-type Processor interface {
-	Process(ctx context.Context, in event.Event) (out *event.Event, err error)
-}
-
 type Pipeline struct {
-	Processors []Processor
+	Processors []processors.Processor
 }
 
 func (p *Pipeline) Process(ctx context.Context, e event.Event) (*event.Event, error) {
pipeline/pipeline_test.go (new file, 115 lines)

package pipeline

import (
	"context"
	"fmt"
	"strings"
	"testing"
	"time"

	"gitea.maximumdirect.net/ejr/feedkit/event"
	"gitea.maximumdirect.net/ejr/feedkit/processors"
)

type procFunc func(context.Context, event.Event) (*event.Event, error)

func (f procFunc) Process(ctx context.Context, in event.Event) (*event.Event, error) {
	return f(ctx, in)
}

func TestPipelineProcessSequentialOrder(t *testing.T) {
	var gotOrder []string

	p := &Pipeline{
		Processors: []processors.Processor{
			procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
				gotOrder = append(gotOrder, "first")
				out := in
				out.Schema = "stage.one.v1"
				return &out, nil
			}),
			procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
				gotOrder = append(gotOrder, "second")
				if in.Schema != "stage.one.v1" {
					return nil, fmt.Errorf("expected schema from first stage, got %q", in.Schema)
				}
				out := in
				out.Schema = "stage.two.v1"
				return &out, nil
			}),
		},
	}

	out, err := p.Process(context.Background(), validEvent())
	if err != nil {
		t.Fatalf("Process error: %v", err)
	}
	if out == nil {
		t.Fatalf("expected output event, got nil")
	}
	if out.Schema != "stage.two.v1" {
		t.Fatalf("unexpected output schema: %q", out.Schema)
	}
	if strings.Join(gotOrder, ",") != "first,second" {
		t.Fatalf("unexpected processor order: %v", gotOrder)
	}
}

func TestPipelineProcessInvalidInput(t *testing.T) {
	p := &Pipeline{}
	_, err := p.Process(context.Background(), event.Event{})
	if err == nil {
		t.Fatalf("expected input validation error")
	}
	if !strings.Contains(err.Error(), "invalid input event") {
		t.Fatalf("unexpected error: %v", err)
	}
}

func TestPipelineProcessDrop(t *testing.T) {
	p := &Pipeline{
		Processors: []processors.Processor{
			procFunc(func(context.Context, event.Event) (*event.Event, error) {
				return nil, nil
			}),
		},
	}

	out, err := p.Process(context.Background(), validEvent())
	if err != nil {
		t.Fatalf("Process error: %v", err)
	}
	if out != nil {
		t.Fatalf("expected nil output for dropped event, got %#v", out)
	}
}

func TestPipelineProcessInvalidOutput(t *testing.T) {
	p := &Pipeline{
		Processors: []processors.Processor{
			procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
				out := in
				out.Payload = nil
				return &out, nil
			}),
		},
	}

	_, err := p.Process(context.Background(), validEvent())
	if err == nil {
		t.Fatalf("expected output validation error")
	}
	if !strings.Contains(err.Error(), "invalid output event") {
		t.Fatalf("unexpected error: %v", err)
	}
}

func validEvent() event.Event {
	return event.Event{
		ID:        "evt-1",
		Kind:      event.Kind("observation"),
		Source:    "source-1",
		EmittedAt: time.Now().UTC(),
		Payload:   map[string]any{"ok": true},
	}
}
processors/doc.go (Normal file, 22 lines)
@@ -0,0 +1,22 @@
// Package processors defines feedkit's generic processor abstraction and registry.
//
// Processors are optional pipeline stages that can transform, drop, or reject
// events before dispatch to sinks.
//
// Registry provides name-based construction so daemons can assemble processor
// chains without embedding switch statements in wiring code.
//
// Example:
//
//	reg := processors.NewRegistry()
//	reg.Register("normalize", func() (processors.Processor, error) {
//		return normalize.NewProcessor(myNormalizers, false), nil
//	})
//
//	chain, err := reg.BuildChain([]string{"normalize"})
//	if err != nil {
//		// handle wiring error
//	}
//
//	p := &pipeline.Pipeline{Processors: chain}
package processors
processors/processor.go (Normal file, 15 lines)
@@ -0,0 +1,15 @@
package processors

import (
	"context"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// Processor can mutate/drop events (dedupe, rate-limit, normalization tweaks).
type Processor interface {
	Process(ctx context.Context, in event.Event) (out *event.Event, err error)
}

// Factory constructs a configured Processor instance.
type Factory func() (Processor, error)
processors/registry.go (Normal file, 71 lines)
@@ -0,0 +1,71 @@
package processors

import (
	"fmt"
	"strings"
)

type Registry struct {
	byDriver map[string]Factory
}

func NewRegistry() *Registry {
	return &Registry{byDriver: map[string]Factory{}}
}

// Register associates a processor driver name with a factory.
//
// Register panics for empty driver names, nil factories, and duplicates.
func (r *Registry) Register(driver string, f Factory) {
	if r == nil {
		panic("processors.Registry.Register: registry cannot be nil")
	}
	driver = strings.TrimSpace(driver)
	if driver == "" {
		panic("processors.Registry.Register: driver cannot be empty")
	}
	if f == nil {
		panic(fmt.Sprintf("processors.Registry.Register: factory cannot be nil (driver=%q)", driver))
	}
	if r.byDriver == nil {
		r.byDriver = map[string]Factory{}
	}
	if _, exists := r.byDriver[driver]; exists {
		panic(fmt.Sprintf("processors.Registry.Register: driver %q already registered", driver))
	}
	r.byDriver[driver] = f
}

// Build constructs a Processor by driver name.
func (r *Registry) Build(driver string) (Processor, error) {
	if r == nil {
		return nil, fmt.Errorf("processors registry is nil")
	}
	driver = strings.TrimSpace(driver)
	f, ok := r.byDriver[driver]
	if !ok {
		return nil, fmt.Errorf("unknown processor driver: %q", driver)
	}

	p, err := f()
	if err != nil {
		return nil, fmt.Errorf("build processor %q: %w", driver, err)
	}
	if p == nil {
		return nil, fmt.Errorf("build processor %q: factory returned nil processor", driver)
	}
	return p, nil
}

// BuildChain constructs an ordered processor chain from a driver list.
func (r *Registry) BuildChain(drivers []string) ([]Processor, error) {
	out := make([]Processor, 0, len(drivers))
	for i, driver := range drivers {
		p, err := r.Build(driver)
		if err != nil {
			return nil, fmt.Errorf("build processor chain[%d] (%q): %w", i, strings.TrimSpace(driver), err)
		}
		out = append(out, p)
	}
	return out, nil
}
processors/registry_test.go (Normal file, 100 lines)
@@ -0,0 +1,100 @@
package processors

import (
	"context"
	"errors"
	"strings"
	"testing"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

type testProcessor struct {
	name string
}

func (p testProcessor) Process(context.Context, event.Event) (*event.Event, error) {
	return nil, nil
}

func TestRegistryRegisterValidation(t *testing.T) {
	t.Run("empty driver panics", func(t *testing.T) {
		r := NewRegistry()
		assertPanics(t, func() {
			r.Register(" ", func() (Processor, error) { return testProcessor{name: "x"}, nil })
		})
	})

	t.Run("nil factory panics", func(t *testing.T) {
		r := NewRegistry()
		assertPanics(t, func() {
			r.Register("normalize", nil)
		})
	})

	t.Run("duplicate driver panics", func(t *testing.T) {
		r := NewRegistry()
		r.Register("normalize", func() (Processor, error) { return testProcessor{name: "a"}, nil })
		assertPanics(t, func() {
			r.Register("normalize", func() (Processor, error) { return testProcessor{name: "b"}, nil })
		})
	})
}

func TestRegistryBuildUnknownDriver(t *testing.T) {
	r := NewRegistry()
	_, err := r.Build("does_not_exist")
	if err == nil {
		t.Fatalf("expected error for unknown driver")
	}
	if !strings.Contains(err.Error(), "unknown processor driver") {
		t.Fatalf("unexpected error: %v", err)
	}
}

func TestRegistryBuildChainPreservesOrder(t *testing.T) {
	r := NewRegistry()
	r.Register("first", func() (Processor, error) { return testProcessor{name: "first"}, nil })
	r.Register("second", func() (Processor, error) { return testProcessor{name: "second"}, nil })

	chain, err := r.BuildChain([]string{"first", "second"})
	if err != nil {
		t.Fatalf("BuildChain error: %v", err)
	}
	if len(chain) != 2 {
		t.Fatalf("expected 2 processors, got %d", len(chain))
	}

	p0, ok := chain[0].(testProcessor)
	if !ok || p0.name != "first" {
		t.Fatalf("unexpected chain[0]: %#v", chain[0])
	}
	p1, ok := chain[1].(testProcessor)
	if !ok || p1.name != "second" {
		t.Fatalf("unexpected chain[1]: %#v", chain[1])
	}
}

func TestRegistryBuildChainIndexedFailure(t *testing.T) {
	r := NewRegistry()
	r.Register("ok", func() (Processor, error) { return testProcessor{name: "ok"}, nil })
	r.Register("broken", func() (Processor, error) { return nil, errors.New("boom") })

	_, err := r.BuildChain([]string{"ok", "broken"})
	if err == nil {
		t.Fatalf("expected error")
	}
	if !strings.Contains(err.Error(), "chain[1]") {
		t.Fatalf("expected indexed error, got: %v", err)
	}
}

func assertPanics(t *testing.T, fn func()) {
	t.Helper()
	defer func() {
		if recover() == nil {
			t.Fatalf("expected panic")
		}
	}()
	fn()
}
@@ -17,15 +17,27 @@ import (
 // one function everywhere without type mismatch friction.
 type Logger = logging.Logf
+
+// Job describes one scheduler task.
+//
+// A Job may be backed by either:
+//   - a polling source (sources.PollSource): uses Every + jitter and calls Poll()
+//   - a stream source (sources.StreamSource): ignores Every and calls Run()
+//
+// Jitter behavior:
+//   - For polling sources: Jitter is applied at startup and before each poll tick.
+//   - For stream sources: Jitter is applied once at startup only (optional; useful
+//     to avoid reconnect storms when many instances start together).
 type Job struct {
-	Source sources.Source
+	Source sources.Input
 	Every  time.Duration

 	// Jitter is the maximum additional delay added before each poll.
 	// Example: if Every=15m and Jitter=30s, each poll will occur at:
 	//   tick time + random(0..30s)
 	//
-	// If Jitter == 0, we compute a default jitter based on Every.
+	// If Jitter == 0 for polling sources, we compute a default jitter based on Every.
+	//
+	// For stream sources, Jitter is treated as *startup jitter only*.
 	Jitter time.Duration
 }

@@ -35,8 +47,9 @@ type Scheduler struct {
 	Logf Logger
 }

-// Run starts one polling goroutine per job.
-// Each job runs on its own interval and emits 0..N events per poll.
+// Run starts one goroutine per job.
+// Poll jobs run on their own interval and emit 0..N events per poll.
+// Stream jobs run continuously and emit events as they arrive.
 func (s *Scheduler) Run(ctx context.Context) error {
 	if s.Out == nil {
 		return fmt.Errorf("scheduler.Run: Out channel is nil")

@@ -59,17 +72,48 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
 		s.logf("scheduler: job has nil source")
 		return
 	}
-	if job.Every <= 0 {
-		s.logf("scheduler: job %s has invalid interval", job.Source.Name())
-		return
-	}
+
+	// Stream sources: event-driven.
+	if ss, ok := job.Source.(sources.StreamSource); ok {
+		s.runStream(ctx, job, ss)
+		return
+	}
+
+	// Poll sources: time-based.
+	ps, ok := job.Source.(sources.PollSource)
+	if !ok {
+		s.logf("scheduler: source %T (%s) implements neither Poll() nor Run()", job.Source, job.Source.Name())
+		return
+	}
+	if job.Every <= 0 {
+		s.logf("scheduler: polling job %q missing/invalid interval (sources[].every)", ps.Name())
+		return
+	}
+
+	s.runPoller(ctx, job, ps)
+}
+
+func (s *Scheduler) runStream(ctx context.Context, job Job, src sources.StreamSource) {
+	// Optional startup jitter: helps avoid reconnect storms if many daemons start at once.
+	if job.Jitter > 0 {
+		rng := seededRNG(src.Name())
+		if !sleepJitter(ctx, rng, job.Jitter) {
+			return
+		}
+	}
+
+	// Stream sources should block until ctx cancel or fatal error.
+	if err := src.Run(ctx, s.Out); err != nil && ctx.Err() == nil {
+		s.logf("scheduler: stream source %q exited with error: %v", src.Name(), err)
+	}
+}
+
+func (s *Scheduler) runPoller(ctx context.Context, job Job, src sources.PollSource) {
 	// Compute jitter: either configured per job, or a sensible default.
 	jitter := effectiveJitter(job.Every, job.Jitter)

 	// Each worker gets its own RNG (safe + no lock contention).
-	seed := time.Now().UnixNano() ^ int64(hashStringFNV32a(job.Source.Name()))
-	rng := rand.New(rand.NewSource(seed))
+	rng := seededRNG(src.Name())

 	// Optional startup jitter: avoids all jobs firing at the exact moment the daemon starts.
 	if !sleepJitter(ctx, rng, jitter) {

@@ -77,7 +121,7 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
 	}

 	// Immediate poll at startup (after startup jitter).
-	s.pollOnce(ctx, job)
+	s.pollOnce(ctx, src)

 	t := time.NewTicker(job.Every)
 	defer t.Stop()

@@ -89,7 +133,7 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
 			if !sleepJitter(ctx, rng, jitter) {
 				return
 			}
-			s.pollOnce(ctx, job)
+			s.pollOnce(ctx, src)

 		case <-ctx.Done():
 			return

@@ -97,10 +141,10 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
 	}
 }

-func (s *Scheduler) pollOnce(ctx context.Context, job Job) {
-	events, err := job.Source.Poll(ctx)
+func (s *Scheduler) pollOnce(ctx context.Context, src sources.PollSource) {
+	events, err := src.Poll(ctx)
 	if err != nil {
-		s.logf("scheduler: poll failed (%s): %v", job.Source.Name(), err)
+		s.logf("scheduler: poll failed (%s): %v", src.Name(), err)
 		return
 	}

@@ -120,6 +164,13 @@ func (s *Scheduler) logf(format string, args ...any) {
 	s.Logf(format, args...)
 }

+// ---- helpers ----
+
+func seededRNG(name string) *rand.Rand {
+	seed := time.Now().UnixNano() ^ int64(hashStringFNV32a(name))
+	return rand.New(rand.NewSource(seed))
+}
+
 // effectiveJitter chooses a jitter value.
 // - If configuredMax > 0, use it (but clamp).
 // - Else default to min(every/10, 30s).
@@ -27,9 +27,9 @@ func RegisterBuiltins(r *Registry) {
 		return NewPostgresSinkFromConfig(cfg)
 	})

-	// RabbitMQ sink: publishes events to a broker for downstream consumers.
-	r.Register("rabbitmq", func(cfg config.SinkConfig) (Sink, error) {
-		return NewRabbitMQSinkFromConfig(cfg)
+	// NATS sink: publishes events to a broker for downstream consumers.
+	r.Register("nats", func(cfg config.SinkConfig) (Sink, error) {
+		return NewNATSSinkFromConfig(cfg)
 	})
 }
sinks/nats.go (Normal file, 97 lines)
@@ -0,0 +1,97 @@
package sinks

import (
	"context"
	"encoding/json"
	"fmt"
	"sync"
	"time"

	"gitea.maximumdirect.net/ejr/feedkit/config"
	"gitea.maximumdirect.net/ejr/feedkit/event"
	"github.com/nats-io/nats.go"
)

type NATSSink struct {
	name     string
	url      string
	exchange string

	mu   sync.Mutex
	conn *nats.Conn
}

func NewNATSSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
	url, err := requireStringParam(cfg, "url")
	if err != nil {
		return nil, err
	}
	ex, err := requireStringParam(cfg, "exchange")
	if err != nil {
		return nil, err
	}
	return &NATSSink{name: cfg.Name, url: url, exchange: ex}, nil
}

func (r *NATSSink) Name() string { return r.name }

func (r *NATSSink) Consume(ctx context.Context, e event.Event) error {
	// Boundary validation: if something upstream violated invariants,
	// surface it loudly rather than printing partial nonsense.
	if err := e.Validate(); err != nil {
		return fmt.Errorf("NATS sink: invalid event: %w", err)
	}

	if err := ctx.Err(); err != nil {
		return err
	}

	conn, err := r.connect(ctx)
	if err != nil {
		return fmt.Errorf("NATS sink: connect: %w", err)
	}

	b, err := json.Marshal(e)
	if err != nil {
		return fmt.Errorf("NATS sink: marshal event: %w", err)
	}

	if err := ctx.Err(); err != nil {
		return err
	}
	if err := conn.Publish(r.exchange, b); err != nil {
		return fmt.Errorf("NATS sink: publish: %w", err)
	}
	return nil
}

func (r *NATSSink) connect(ctx context.Context) (*nats.Conn, error) {
	if err := ctx.Err(); err != nil {
		return nil, err
	}

	r.mu.Lock()
	defer r.mu.Unlock()

	if r.conn != nil && r.conn.Status() != nats.CLOSED {
		return r.conn, nil
	}

	opts := []nats.Option{
		nats.Name(fmt.Sprintf("feedkit sink %s", r.name)),
	}
	if deadline, ok := ctx.Deadline(); ok {
		timeout := time.Until(deadline)
		if timeout <= 0 {
			return nil, ctx.Err()
		}
		opts = append(opts, nats.Timeout(timeout))
	}

	conn, err := nats.Connect(r.url, opts...)
	if err != nil {
		return nil, err
	}
	r.conn = conn
	return conn, nil
}
@@ -1,42 +0,0 @@
-package sinks
-
-import (
-	"context"
-	"fmt"
-
-	"gitea.maximumdirect.net/ejr/feedkit/config"
-	"gitea.maximumdirect.net/ejr/feedkit/event"
-)
-
-type RabbitMQSink struct {
-	name     string
-	url      string
-	exchange string
-}
-
-func NewRabbitMQSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
-	url, err := requireStringParam(cfg, "url")
-	if err != nil {
-		return nil, err
-	}
-	ex, err := requireStringParam(cfg, "exchange")
-	if err != nil {
-		return nil, err
-	}
-	return &RabbitMQSink{name: cfg.Name, url: url, exchange: ex}, nil
-}
-
-func (r *RabbitMQSink) Name() string { return r.name }
-
-func (r *RabbitMQSink) Consume(ctx context.Context, e event.Event) error {
-	_ = ctx
-
-	// Boundary validation: if something upstream violated invariants,
-	// surface it loudly rather than printing partial nonsense.
-	if err := e.Validate(); err != nil {
-		return fmt.Errorf("rabbitmq sink: invalid event: %w", err)
-	}
-
-	// TODO implement RabbitMQ publishing
-	return nil
-}
sources/doc.go (Normal file, 14 lines)
@@ -0,0 +1,14 @@
// Package sources defines feedkit's input-source abstraction.
//
// A source ingests upstream input and emits one or more event.Event values.
//
// feedkit supports two source modes:
//   - PollSource: scheduler invokes Poll on a cadence.
//   - StreamSource: source runs continuously and pushes events as input arrives.
//
// Source drivers are domain-specific and registered into Registry by driver name.
// Registry can then build configured sources from config.SourceConfig.
//
// A single source may emit 0..N events per poll or stream iteration, and those
// events may span multiple event kinds.
package sources
@@ -7,43 +7,130 @@ import (
 	"gitea.maximumdirect.net/ejr/feedkit/config"
 )

-// Factory constructs a configured Source instance from config.
+// PollFactory constructs a configured PollSource instance from config.
 //
 // This is how concrete daemons (weatherfeeder/newsfeeder/...) register their
 // domain-specific source drivers (Open-Meteo, NWS, RSS, etc.) while feedkit
 // remains domain-agnostic.
-type Factory func(cfg config.SourceConfig) (Source, error)
+type PollFactory func(cfg config.SourceConfig) (PollSource, error)
+
+type StreamFactory func(cfg config.SourceConfig) (StreamSource, error)
+
+// Factory is the legacy alias for poll source factories.
+type Factory = PollFactory

 type Registry struct {
-	byDriver map[string]Factory
+	byPollDriver   map[string]PollFactory
+	byStreamDriver map[string]StreamFactory
 }

 func NewRegistry() *Registry {
-	return &Registry{byDriver: map[string]Factory{}}
+	return &Registry{
+		byPollDriver:   map[string]PollFactory{},
+		byStreamDriver: map[string]StreamFactory{},
+	}
 }

 // Register associates a driver name (e.g. "openmeteo_observation") with a factory.
 //
 // The driver string is the "lookup key" used by config.sources[].driver.
-func (r *Registry) Register(driver string, f Factory) {
+func (r *Registry) Register(driver string, f PollFactory) {
+	r.RegisterPoll(driver, f)
+}
+
+// RegisterPoll associates a driver name with a polling-source factory.
+func (r *Registry) RegisterPoll(driver string, f PollFactory) {
 	driver = strings.TrimSpace(driver)
 	if driver == "" {
 		// Panic is appropriate here: registering an empty driver is always a programmer error,
 		// and it will lead to extremely confusing runtime behavior if allowed.
-		panic("sources.Registry.Register: driver cannot be empty")
+		panic("sources.Registry.RegisterPoll: driver cannot be empty")
 	}
 	if f == nil {
-		panic(fmt.Sprintf("sources.Registry.Register: factory cannot be nil (driver=%q)", driver))
+		panic(fmt.Sprintf("sources.Registry.RegisterPoll: factory cannot be nil (driver=%q)", driver))
 	}
-	r.byDriver[driver] = f
+	if _, exists := r.byStreamDriver[driver]; exists {
+		panic(fmt.Sprintf("sources.Registry.RegisterPoll: driver %q already registered as a stream source", driver))
+	}
+	if _, exists := r.byPollDriver[driver]; exists {
+		panic(fmt.Sprintf("sources.Registry.RegisterPoll: driver %q already registered as a polling source", driver))
+	}
+	r.byPollDriver[driver] = f
 }

-// Build constructs a Source from a SourceConfig by looking up cfg.Driver.
-func (r *Registry) Build(cfg config.SourceConfig) (Source, error) {
-	f, ok := r.byDriver[cfg.Driver]
+// RegisterStream is the StreamSource equivalent of Register.
+func (r *Registry) RegisterStream(driver string, f StreamFactory) {
+	driver = strings.TrimSpace(driver)
+	if driver == "" {
+		panic("sources.Registry.RegisterStream: driver cannot be empty")
+	}
+	if f == nil {
+		panic(fmt.Sprintf("sources.Registry.RegisterStream: factory cannot be nil (driver=%q)", driver))
+	}
+	if _, exists := r.byPollDriver[driver]; exists {
+		panic(fmt.Sprintf("sources.Registry.RegisterStream: driver %q already registered as a polling source", driver))
+	}
+	if _, exists := r.byStreamDriver[driver]; exists {
+		panic(fmt.Sprintf("sources.Registry.RegisterStream: driver %q already registered as a stream source", driver))
+	}
+	r.byStreamDriver[driver] = f
+}
+
+// Build constructs a polling source from a SourceConfig by looking up cfg.Driver.
+func (r *Registry) Build(cfg config.SourceConfig) (PollSource, error) {
+	return r.BuildPoll(cfg)
+}
+
+// BuildPoll constructs a polling source from a SourceConfig by looking up cfg.Driver.
+func (r *Registry) BuildPoll(cfg config.SourceConfig) (PollSource, error) {
+	driver := strings.TrimSpace(cfg.Driver)
+	if cfg.Mode.Normalize() == config.SourceModeStream {
+		return nil, fmt.Errorf("source %q mode=stream cannot be built as polling source", cfg.Name)
+	}
+
+	f, ok := r.byPollDriver[driver]
 	if !ok {
-		return nil, fmt.Errorf("unknown source driver: %q", cfg.Driver)
+		if _, streamExists := r.byStreamDriver[driver]; streamExists {
+			return nil, fmt.Errorf("source driver %q is stream-only; cannot build as polling source", driver)
+		}
+		return nil, fmt.Errorf("unknown source driver: %q", driver)
 	}
 	return f(cfg)
 }
+
+// BuildInput can return either a polling Source or a StreamSource.
+func (r *Registry) BuildInput(cfg config.SourceConfig) (Input, error) {
+	driver := strings.TrimSpace(cfg.Driver)
+	mode := cfg.Mode.Normalize()
+	if mode != config.SourceModeAuto && mode != config.SourceModePoll && mode != config.SourceModeStream {
+		return nil, fmt.Errorf("source %q has invalid mode %q (expected \"poll\" or \"stream\")", cfg.Name, cfg.Mode)
+	}
+
+	switch mode {
+	case config.SourceModePoll:
+		f, ok := r.byPollDriver[driver]
+		if !ok {
+			if _, streamExists := r.byStreamDriver[driver]; streamExists {
+				return nil, fmt.Errorf("source %q mode=poll conflicts with stream-only driver %q", cfg.Name, driver)
+			}
+			return nil, fmt.Errorf("unknown source driver: %q", driver)
+		}
+		return f(cfg)
+	case config.SourceModeStream:
+		f, ok := r.byStreamDriver[driver]
+		if !ok {
+			if _, pollExists := r.byPollDriver[driver]; pollExists {
+				return nil, fmt.Errorf("source %q mode=stream conflicts with polling driver %q", cfg.Name, driver)
+			}
+			return nil, fmt.Errorf("unknown source driver: %q", driver)
+		}
+		return f(cfg)
+	}
+
+	if f, ok := r.byStreamDriver[driver]; ok {
+		return f(cfg)
+	}
+	if f, ok := r.byPollDriver[driver]; ok {
+		return f(cfg)
+	}
+	return nil, fmt.Errorf("unknown source driver: %q", driver)
+}
sources/registry_test.go (Normal file, 84 lines)
@@ -0,0 +1,84 @@
package sources

import (
	"context"
	"strings"
	"testing"

	"gitea.maximumdirect.net/ejr/feedkit/config"
	"gitea.maximumdirect.net/ejr/feedkit/event"
)

type testPollSource struct{ name string }

func (s testPollSource) Name() string { return s.name }
func (s testPollSource) Poll(context.Context) ([]event.Event, error) {
	return nil, nil
}

type testStreamSource struct{ name string }

func (s testStreamSource) Name() string { return s.name }
func (s testStreamSource) Run(context.Context, chan<- event.Event) error {
	return nil
}

func TestRegistryBuildInputModeConflicts(t *testing.T) {
	r := NewRegistry()
	r.RegisterPoll("poll_driver", func(cfg config.SourceConfig) (PollSource, error) {
		return testPollSource{name: cfg.Name}, nil
	})
	r.RegisterStream("stream_driver", func(cfg config.SourceConfig) (StreamSource, error) {
		return testStreamSource{name: cfg.Name}, nil
	})

	_, err := r.BuildInput(config.SourceConfig{
		Name:   "s1",
		Driver: "stream_driver",
		Mode:   config.SourceModePoll,
	})
	if err == nil {
		t.Fatalf("expected mode conflict error, got nil")
	}
	if !strings.Contains(err.Error(), "mode=poll") {
		t.Fatalf("expected poll conflict error, got: %v", err)
	}

	_, err = r.BuildInput(config.SourceConfig{
		Name:   "s2",
		Driver: "poll_driver",
		Mode:   config.SourceModeStream,
	})
	if err == nil {
		t.Fatalf("expected mode conflict error, got nil")
	}
	if !strings.Contains(err.Error(), "mode=stream") {
		t.Fatalf("expected stream conflict error, got: %v", err)
	}
}

func TestRegistryBuildInputAutoByDriverType(t *testing.T) {
	r := NewRegistry()
	r.RegisterPoll("poll_driver", func(cfg config.SourceConfig) (PollSource, error) {
		return testPollSource{name: cfg.Name}, nil
	})
	r.RegisterStream("stream_driver", func(cfg config.SourceConfig) (StreamSource, error) {
		return testStreamSource{name: cfg.Name}, nil
	})

	src, err := r.BuildInput(config.SourceConfig{Name: "p", Driver: "poll_driver"})
	if err != nil {
		t.Fatalf("BuildInput poll auto failed: %v", err)
	}
	if _, ok := src.(PollSource); !ok {
		t.Fatalf("expected PollSource, got %T", src)
	}

	src, err = r.BuildInput(config.SourceConfig{Name: "s", Driver: "stream_driver"})
	if err != nil {
		t.Fatalf("BuildInput stream auto failed: %v", err)
	}
	if _, ok := src.(StreamSource); !ok {
		t.Fatalf("expected StreamSource, got %T", src)
	}
}
@@ -6,25 +6,50 @@ import (
 	"gitea.maximumdirect.net/ejr/feedkit/event"
 )
 
-// Source is a configured polling job that emits 0..N events per poll.
+// Input is the common surface shared by all source types.
 //
-// Source implementations live in domain modules (weatherfeeder/newsfeeder/...)
+// A source may be polling (PollSource) or event-driven (StreamSource).
+// Both source types emit domain-agnostic event.Event values.
+type Input interface {
+	Name() string
+}
+
+// PollSource is a configured polling source that emits 0..N events per poll.
+//
+// PollSource implementations live in domain modules (weatherfeeder/newsfeeder/...)
 // and are registered into a feedkit sources.Registry.
 //
-// feedkit infrastructure treats Source as opaque; it just calls Poll()
+// feedkit infrastructure treats PollSource as opaque; it just calls Poll()
 // on the configured cadence and publishes the resulting events.
-type Source interface {
+type PollSource interface {
 	// Name is the configured source name (used for logs and included in emitted events).
 	Name() string
 
-	// Kind is the "primary kind" emitted by this source.
-	//
-	// This is mainly useful as a *safety check* (e.g. config says kind=forecast but
-	// driver emits observation). Some future sources may emit multiple kinds; if/when
-	// that happens, we can evolve this interface (e.g., make Kind optional, or remove it).
-	Kind() event.Kind
-
-	// Poll fetches from upstream and returns 0..N events.
+	// Poll fetches/processes one input batch and returns 0..N events.
+	// A single poll can emit multiple event kinds.
 	// Implementations should honor ctx.Done() for network calls and other I/O.
 	Poll(ctx context.Context) ([]event.Event, error)
 }
+
+// Source is a compatibility alias for the legacy polling-source name.
+type Source = PollSource
+
+// StreamSource is an event-driven source (NATS/RabbitMQ/MQTT/etc).
+//
+// Run should block, producing events into `out` until ctx is cancelled or a fatal error occurs.
+// It MUST NOT close out (the scheduler/daemon owns the bus).
+type StreamSource interface {
+	Input
+	Run(ctx context.Context, out chan<- event.Event) error
+}
+
+// KindSource is an optional interface for sources that advertise one "primary" kind.
+// This is legacy-friendly but no longer required.
+type KindSource interface {
+	Kind() event.Kind
+}
+
+// KindsSource is an optional interface for sources that advertise multiple kinds.
+type KindsSource interface {
+	Kinds() []event.Kind
+}
70	transport/http.go	Normal file
@@ -0,0 +1,70 @@
// FILE: ./transport/http.go
package transport

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// maxResponseBodyBytes is a hard safety limit on HTTP response bodies.
// API responses should be small, so this protects us from accidental
// or malicious large responses.
const maxResponseBodyBytes = 2 << 21 // 4 MiB

// DefaultHTTPTimeout is the standard timeout used by HTTP sources.
// Individual drivers may override this if they have a specific need.
const DefaultHTTPTimeout = 10 * time.Second

// NewHTTPClient returns a simple http.Client configured with a timeout.
// If timeout <= 0, DefaultHTTPTimeout is used.
func NewHTTPClient(timeout time.Duration) *http.Client {
	if timeout <= 0 {
		timeout = DefaultHTTPTimeout
	}
	return &http.Client{Timeout: timeout}
}

// FetchBody performs a GET request and returns the response body,
// rejecting non-2xx statuses, empty bodies, and bodies larger than
// maxResponseBodyBytes.
func FetchBody(ctx context.Context, client *http.Client, url, userAgent, accept string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	if userAgent != "" {
		req.Header.Set("User-Agent", userAgent)
	}
	if accept != "" {
		req.Header.Set("Accept", accept)
	}

	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	if res.StatusCode < 200 || res.StatusCode >= 300 {
		return nil, fmt.Errorf("HTTP %s", res.Status)
	}

	// Read at most maxResponseBodyBytes + 1 so we can detect overflow.
	limited := io.LimitReader(res.Body, maxResponseBodyBytes+1)

	b, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}

	if len(b) == 0 {
		return nil, fmt.Errorf("empty response body")
	}

	if len(b) > maxResponseBodyBytes {
		return nil, fmt.Errorf("response body too large (>%d bytes)", maxResponseBodyBytes)
	}

	return b, nil
}