Compare commits

4 Commits

| Author | SHA1 | Date |
|--------|------------|------|
|        | 9b2c1e5ceb |      |
|        | 1d43adcfa0 |      |
|        | a6c133319a |      |
|        | 09bc65e947 |      |

254 README.md
@@ -1,3 +1,255 @@

# feedkit

**feedkit** provides the domain-agnostic core plumbing for *feed-processing daemons*.

A feed daemon is a long-running process that:

- polls one or more upstream providers (HTTP APIs, RSS feeds, etc.)
- normalizes upstream data into a consistent internal representation
- applies lightweight policy (dedupe, rate-limit, filtering)
- emits events to one or more sinks (stdout, files, databases, brokers)

feedkit is designed to be reused by many concrete daemons (e.g. `weatherfeeder`,
`newsfeeder`, `rssfeeder`) without embedding *any* domain-specific logic.

---

## Philosophy

feedkit is **not a framework**.

It does **not**:

- define domain schemas
- enforce allowed event kinds
- hide control flow behind inversion-of-control magic
- own your application lifecycle

Instead, it provides **small, composable primitives** that concrete daemons wire
together explicitly. The goal is clarity, predictability, and long-term
maintainability.

---

## Conceptual pipeline

Collect → Normalize → Filter / Policy → Route → Persist / Emit

In feedkit terms:

| Stage     | Package(s)                                          |
|-----------|-----------------------------------------------------|
| Collect   | `sources`, `scheduler`                              |
| Normalize | *(today: domain code; planned: pipeline processor)* |
| Policy    | `pipeline`                                          |
| Route     | `dispatch`                                          |
| Emit      | `sinks`                                             |
| Configure | `config`                                            |

---

## Public API overview

### `config` — Configuration loading & validation

**Status:** 🟢 Stable

- Loads YAML configuration
- Strict decoding (`KnownFields(true)`)
- Domain-agnostic validation only (shape, required fields, references)
- Flexible `Params map[string]any` with typed helpers

Key types:

- `config.Config`
- `config.SourceConfig`
- `config.SinkConfig`
- `config.Load(path)`
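Based on the types above, a minimal YAML file might look like this (the top-level key names and the driver strings are inferred from the type and field names in this repository, not confirmed by this README):

```yaml
sources:
  - name: owm
    driver: openweather_observation
    every: 15m          # durations accept strings like "15m" or integer minutes

sinks:
  - name: console
    driver: stdout

routes:
  - sink: console
    kinds: [observation]   # omit kinds entirely to match ALL kinds
```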
---

### `event` — Domain-agnostic event envelope

**Status:** 🟢 Stable

Defines the canonical event structure that moves through feedkit.

Includes:

- Stable ID
- Kind (stringly-typed, domain-defined)
- Source name
- Timestamps (`EmittedAt`, optional `EffectiveAt`)
- Optional `Schema` for payload versioning
- Opaque `Payload`

Key types:

- `event.Event`
- `event.Kind`
- `event.ParseKind`
- `event.Event.Validate`

feedkit infrastructure never inspects `Payload`.
---

### `sources` — Polling abstraction

**Status:** 🟢 Stable (interface); 🔵 evolving patterns

Defines the contract implemented by domain-specific polling jobs.

```go
type Source interface {
	Name() string
	Kind() event.Kind
	Poll(ctx context.Context) ([]event.Event, error)
}
```

Includes a registry (`sources.Registry`) so daemons can register drivers
(e.g. `openweather_observation`, `rss_feed`) without switch statements.

Note: today, most sources both fetch and normalize. A dedicated
normalization hook is planned (see below).
### `scheduler` — Time-based polling

**Status:** 🟢 Stable

Runs one goroutine per source on a configured interval with jitter.

Features:

- Per-source interval
- Deterministic jitter (avoids thundering herd)
- Immediate poll at startup
- Context-aware shutdown

Key types:

- `scheduler.Scheduler`
- `scheduler.Job`
### `pipeline` — Event processing chain

**Status:** 🟡 Partial (API stable, processors evolving)

Allows events to be transformed, dropped, or rejected between collection
and dispatch.

```go
type Processor interface {
	Process(ctx context.Context, in event.Event) (*event.Event, error)
}
```

Current state:

- `pipeline.Pipeline` is fully implemented

Placeholder files exist for:

- `dedupe` (planned)
- `ratelimit` (planned)

This is the intended home for:

- normalization
- deduplication
- rate limiting
- lightweight policy enforcement
### `dispatch` — Routing & fan-out

**Status:** 🟢 Stable

Routes events to sinks based on kind and isolates slow sinks.

Features:

- Compiled routing rules
- Per-sink buffered queues
- Bounded enqueue timeouts
- Per-consume timeouts
- Sink panic isolation
- Context-aware shutdown

Key types:

- `dispatch.Dispatcher`
- `dispatch.Route`
- `dispatch.Fanout`
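Kind-based routing with "nil kind set means all kinds" semantics (the convention `dispatch.Route` and `CompileRoutes` use in this changeset) can be illustrated with a tiny matcher. The `Matches` helper is hypothetical, not an actual dispatcher method:

```go
package main

import "fmt"

type Kind string

// Route mirrors dispatch.Route: a nil/empty Kinds set matches all kinds.
type Route struct {
	SinkName string
	Kinds    map[Kind]bool
}

// Matches reports whether an event of kind k should go to this route's sink.
func (r Route) Matches(k Kind) bool {
	return len(r.Kinds) == 0 || r.Kinds[k]
}

func main() {
	all := Route{SinkName: "audit"}
	only := Route{SinkName: "alerts", Kinds: map[Kind]bool{"alert": true}}
	fmt.Println(all.Matches("observation"), only.Matches("observation"), only.Matches("alert"))
}
```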
### `sinks` — Output adapters

**Status:** 🟡 Mixed

Defines where events go after processing.

```go
type Sink interface {
	Name() string
	Consume(ctx context.Context, e event.Event) error
}
```

Registry-based construction allows daemons to opt into any sink drivers.

| Sink     | Status         |
|----------|----------------|
| stdout   | 🟢 Implemented |
| file     | 🔴 Stub        |
| postgres | 🔴 Stub        |
| rabbitmq | 🔴 Stub        |

All sinks are required to respect context cancellation.
### Normalization (planned)

**Status:** 🔵 Planned (API design in progress)

Currently, most domain implementations normalize upstream data inside
`sources.Source.Poll`, which leads to:

- very large source files
- mixed responsibilities (HTTP + mapping)
- duplicated helper code

The intended evolution is:

- Sources emit raw events (e.g. `json.RawMessage`)
- A dedicated normalization processor runs in the pipeline
- Normalizers are selected by `Event.Schema`, `Kind`, or `Source`

This keeps:

- `feedkit` domain-agnostic
- `sources` small and focused
- normalization logic centralized and testable
### Runner helper (planned)

**Status:** 🔵 Planned (optional convenience)

Most daemons wire together the same steps:

- load config
- build sources
- build sinks
- compile routes
- start scheduler
- start dispatcher

A small, opt-in `Runner` helper may be added to reduce boilerplate while
keeping the system explicit and debuggable.

This is not intended to become a framework.

## Stability summary

| Area             | Status     |
|------------------|------------|
| Event model      | 🟢 Stable  |
| Config API       | 🟢 Stable  |
| Scheduler        | 🟢 Stable  |
| Dispatcher       | 🟢 Stable  |
| Source interface | 🟢 Stable  |
| Pipeline core    | 🟡 Partial |
| Normalization    | 🔵 Planned |
| Dedupe/Ratelimit | 🔵 Planned |
| Non-stdout sinks | 🔴 Stub    |

Legend:

- 🟢 Stable — API considered solid
- 🟡 Partial — usable, but incomplete
- 🔵 Planned — design direction agreed, not yet implemented
- 🔴 Stub — placeholder only

## Non-goals

`feedkit` intentionally does not:

- define domain payload schemas
- enforce domain-specific validation
- manage persistence semantics beyond sink adapters
- own observability, metrics, or tracing (left to daemons)

Those concerns belong in concrete implementations.

## See also

- `NAMING.md` — repository and daemon naming conventions
- `event/doc.go` — detailed event semantics
- **Concrete example:** `weatherfeeder` (reference implementation)

---
```diff
@@ -54,7 +54,10 @@ type RouteConfig struct {
 	Sink string `yaml:"sink"` // sink name
 
 	// Kinds is domain-defined. feedkit only enforces that each entry is non-empty.
-	// Whether a given daemon "recognizes" a kind is domain-specific validation.
+	//
+	// If Kinds is omitted or empty, the route matches ALL kinds.
+	// This is useful when you want explicit per-sink routing rules even when a
+	// particular sink should receive everything.
 	Kinds []string `yaml:"kinds"`
 }
@@ -128,12 +131,3 @@ func (d *Duration) UnmarshalYAML(value *yaml.Node) error {
 	// Anything else: reject.
 	return fmt.Errorf("duration must be a string like 15m or an integer minutes, got tag %s", value.Tag)
 }
-
-func isAllDigits(s string) bool {
-	for _, r := range s {
-		if r < '0' || r > '9' {
-			return false
-		}
-	}
-	return len(s) > 0
-}
@@ -133,11 +133,8 @@ func (c *Config) Validate() error {
 			m.Add(fieldErr(path+".sink", fmt.Sprintf("references unknown sink %q (define it under sinks:)", r.Sink)))
 		}
 
-		if len(r.Kinds) == 0 {
-			// You could relax this later (e.g. empty == "all kinds"), but for now
-			// keeping it strict prevents accidental "route does nothing".
-			m.Add(fieldErr(path+".kinds", "must contain at least one kind"))
-		} else {
-			for j, k := range r.Kinds {
+		// Kinds is optional. If omitted or empty, the route matches ALL kinds.
+		// If provided, each entry must be non-empty.
+		for j, k := range r.Kinds {
 			kpath := fmt.Sprintf("%s.kinds[%d]", path, j)
 			if strings.TrimSpace(k) == "" {
@@ -145,7 +142,6 @@ func (c *Config) Validate() error {
 			}
 		}
 	}
-	}
 
 	return m.Err()
 }
```
385 config/params.go
```diff
@@ -1,32 +1,21 @@
 // feedkit/config/params.go
 package config
 
-import "strings"
+import (
+	"math"
+	"strconv"
+	"strings"
+	"time"
+)
+
+// ---- SourceConfig param helpers ----
 
 // ParamString returns the first non-empty string found for any of the provided keys.
 // Values must actually be strings in the decoded config; other types are ignored.
 //
 // This keeps cfg.Params flexible (map[string]any) while letting callers stay type-safe.
 func (cfg SourceConfig) ParamString(keys ...string) (string, bool) {
-	if cfg.Params == nil {
-		return "", false
-	}
-	for _, k := range keys {
-		v, ok := cfg.Params[k]
-		if !ok || v == nil {
-			continue
-		}
-		s, ok := v.(string)
-		if !ok {
-			continue
-		}
-		s = strings.TrimSpace(s)
-		if s == "" {
-			continue
-		}
-		return s, true
-	}
-	return "", false
+	return paramString(cfg.Params, keys...)
 }
 
 // ParamStringDefault returns ParamString(keys...) if present; otherwise it returns def.
```
```diff
@@ -38,14 +27,150 @@ func (cfg SourceConfig) ParamStringDefault(def string, keys ...string) string {
 	return strings.TrimSpace(def)
 }
 
+// ParamBool returns the first boolean found for any of the provided keys.
+//
+// Accepted types in Params:
+//   - bool
+//   - string: parsed via strconv.ParseBool ("true"/"false"/"1"/"0", etc.)
+func (cfg SourceConfig) ParamBool(keys ...string) (bool, bool) {
+	return paramBool(cfg.Params, keys...)
+}
+
+func (cfg SourceConfig) ParamBoolDefault(def bool, keys ...string) bool {
+	if v, ok := cfg.ParamBool(keys...); ok {
+		return v
+	}
+	return def
+}
+
+// ParamInt returns the first integer-like value found for any of the provided keys.
+//
+// Accepted types in Params:
+//   - any integer type (int, int64, uint32, ...)
+//   - float32/float64 ONLY if it is an exact integer (e.g. 15.0)
+//   - string: parsed via strconv.Atoi (e.g. "42")
+func (cfg SourceConfig) ParamInt(keys ...string) (int, bool) {
+	return paramInt(cfg.Params, keys...)
+}
+
+func (cfg SourceConfig) ParamIntDefault(def int, keys ...string) int {
+	if v, ok := cfg.ParamInt(keys...); ok {
+		return v
+	}
+	return def
+}
+
+// ParamDuration returns the first duration-like value found for any of the provided keys.
+//
+// Accepted types in Params:
+//   - time.Duration
+//   - string: parsed via time.ParseDuration (e.g. "250ms", "30s", "5m")
+//     - if the string is all digits (e.g. "30"), it is interpreted as SECONDS
+//   - numeric: interpreted as SECONDS (e.g. 30 => 30s)
+//
+// Rationale: Param durations are usually timeouts/backoffs; seconds are a sane numeric default.
+// If you want minutes/hours, prefer a duration string like "5m" or "1h".
+func (cfg SourceConfig) ParamDuration(keys ...string) (time.Duration, bool) {
+	return paramDuration(cfg.Params, keys...)
+}
+
+func (cfg SourceConfig) ParamDurationDefault(def time.Duration, keys ...string) time.Duration {
+	if v, ok := cfg.ParamDuration(keys...); ok {
+		return v
+	}
+	return def
+}
+
+// ParamStringSlice returns the first string-slice-like value found for any of the provided keys.
+//
+// Accepted types in Params:
+//   - []string
+//   - []any where each element is a string
+//   - string:
+//     - if it contains commas, split on commas (",") and trim each item
+//     - otherwise treat as a single-item list
+//
+// Empty/blank items are removed.
+func (cfg SourceConfig) ParamStringSlice(keys ...string) ([]string, bool) {
+	return paramStringSlice(cfg.Params, keys...)
+}
+
+// ---- SinkConfig param helpers ----
+
 // ParamString returns the first non-empty string found for any of the provided keys
 // in SinkConfig.Params. (Same rationale as SourceConfig.ParamString.)
 func (cfg SinkConfig) ParamString(keys ...string) (string, bool) {
-	if cfg.Params == nil {
-		return "", false
-	}
-	for _, k := range keys {
-		v, ok := cfg.Params[k]
-		if !ok || v == nil {
-			continue
-		}
-		s, ok := v.(string)
-		if !ok {
-			continue
-		}
-		s = strings.TrimSpace(s)
-		if s == "" {
-			continue
-		}
-		return s, true
-	}
-	return "", false
+	return paramString(cfg.Params, keys...)
 }
@@ -62,11 +187,213 @@ func (cfg SinkConfig) ParamString(keys ...string) (string, bool) {
 
 // ParamStringDefault returns ParamString(keys...) if present; otherwise it returns def.
 // Symmetric helper for sink implementations.
 func (cfg SinkConfig) ParamStringDefault(def string, keys ...string) string {
 	if s, ok := cfg.ParamString(keys...); ok {
 		return s
 	}
 	return strings.TrimSpace(def)
 }
 
+func (cfg SinkConfig) ParamBool(keys ...string) (bool, bool) {
+	return paramBool(cfg.Params, keys...)
+}
+
+func (cfg SinkConfig) ParamBoolDefault(def bool, keys ...string) bool {
+	if v, ok := cfg.ParamBool(keys...); ok {
+		return v
+	}
+	return def
+}
+
+func (cfg SinkConfig) ParamInt(keys ...string) (int, bool) {
+	return paramInt(cfg.Params, keys...)
+}
+
+func (cfg SinkConfig) ParamIntDefault(def int, keys ...string) int {
+	if v, ok := cfg.ParamInt(keys...); ok {
+		return v
+	}
+	return def
+}
+
+func (cfg SinkConfig) ParamDuration(keys ...string) (time.Duration, bool) {
+	return paramDuration(cfg.Params, keys...)
+}
+
+func (cfg SinkConfig) ParamDurationDefault(def time.Duration, keys ...string) time.Duration {
+	if v, ok := cfg.ParamDuration(keys...); ok {
+		return v
+	}
+	return def
+}
+
+func (cfg SinkConfig) ParamStringSlice(keys ...string) ([]string, bool) {
+	return paramStringSlice(cfg.Params, keys...)
+}
+
+// ---- shared implementations (package-private) ----
+
+func paramAny(params map[string]any, keys ...string) (any, bool) {
+	if params == nil {
+		return nil, false
+	}
+	for _, k := range keys {
+		v, ok := params[k]
+		if !ok || v == nil {
+			continue
+		}
+		return v, true
+	}
+	return nil, false
+}
+
+func paramString(params map[string]any, keys ...string) (string, bool) {
+	for _, k := range keys {
+		if params == nil {
+			return "", false
+		}
+		v, ok := params[k]
+		if !ok || v == nil {
+			continue
+		}
+		s, ok := v.(string)
+		if !ok {
+			continue
+		}
+		s = strings.TrimSpace(s)
+		if s == "" {
+			continue
+		}
+		return s, true
+	}
+	return "", false
+}
+
+func paramBool(params map[string]any, keys ...string) (bool, bool) {
+	v, ok := paramAny(params, keys...)
+	if !ok {
+		return false, false
+	}
+
+	switch t := v.(type) {
+	case bool:
+		return t, true
+	case string:
+		s := strings.TrimSpace(t)
+		if s == "" {
+			return false, false
+		}
+		parsed, err := strconv.ParseBool(s)
+		if err != nil {
+			return false, false
+		}
+		return parsed, true
+	default:
+		return false, false
+	}
+}
+
+func paramInt(params map[string]any, keys ...string) (int, bool) {
+	v, ok := paramAny(params, keys...)
+	if !ok {
+		return 0, false
+	}
+
+	switch t := v.(type) {
+	case int:
+		return t, true
+	case int8:
+		return int(t), true
+	case int16:
+		return int(t), true
+	case int32:
+		return int(t), true
+	case int64:
+		return int(t), true
+	case uint:
+		return int(t), true
+	case uint8:
+		return int(t), true
+	case uint16:
+		return int(t), true
+	case uint32:
+		return int(t), true
+	case uint64:
+		return int(t), true
+	case float32:
+		f := float64(t)
+		if math.IsNaN(f) || math.IsInf(f, 0) {
+			return 0, false
+		}
+		if math.Trunc(f) != f {
+			return 0, false
+		}
+		return int(f), true
+	case float64:
+		if math.IsNaN(t) || math.IsInf(t, 0) {
+			return 0, false
+		}
+		if math.Trunc(t) != t {
+			return 0, false
+		}
+		return int(t), true
+	case string:
+		s := strings.TrimSpace(t)
+		if s == "" {
+			return 0, false
+		}
+		n, err := strconv.Atoi(s)
+		if err != nil {
+			return 0, false
+		}
+		return n, true
+	default:
+		return 0, false
+	}
+}
+
+func paramDuration(params map[string]any, keys ...string) (time.Duration, bool) {
+	v, ok := paramAny(params, keys...)
+	if !ok {
+		return 0, false
+	}
+
+	switch t := v.(type) {
+	case time.Duration:
+		if t <= 0 {
+			return 0, false
+		}
+		return t, true
+	case string:
+		s := strings.TrimSpace(t)
+		if s == "" {
+			return 0, false
+		}
+		// Numeric strings are interpreted as seconds (see doc comment).
+		if isAllDigits(s) {
+			n, err := strconv.Atoi(s)
+			if err != nil || n <= 0 {
+				return 0, false
+			}
+			return time.Duration(n) * time.Second, true
+		}
+		d, err := time.ParseDuration(s)
+		if err != nil || d <= 0 {
+			return 0, false
+		}
+		return d, true
+	case int:
+		if t <= 0 {
+			return 0, false
+		}
+		return time.Duration(t) * time.Second, true
+	case int64:
+		if t <= 0 {
+			return 0, false
+		}
+		return time.Duration(t) * time.Second, true
+	case float64:
+		if math.IsNaN(t) || math.IsInf(t, 0) || t <= 0 {
+			return 0, false
+		}
+		// Allow fractional seconds.
+		secs := t * float64(time.Second)
+		return time.Duration(secs), true
+	case float32:
+		f := float64(t)
+		if math.IsNaN(f) || math.IsInf(f, 0) || f <= 0 {
+			return 0, false
+		}
+		secs := f * float64(time.Second)
+		return time.Duration(secs), true
+	default:
+		return 0, false
+	}
+}
+
+func paramStringSlice(params map[string]any, keys ...string) ([]string, bool) {
+	v, ok := paramAny(params, keys...)
+	if !ok {
+		return nil, false
+	}
+
+	clean := func(items []string) ([]string, bool) {
+		out := make([]string, 0, len(items))
+		for _, it := range items {
+			it = strings.TrimSpace(it)
+			if it == "" {
+				continue
+			}
+			out = append(out, it)
+		}
+		if len(out) == 0 {
+			return nil, false
+		}
+		return out, true
+	}
+
+	switch t := v.(type) {
+	case []string:
+		return clean(t)
+	case []any:
+		tmp := make([]string, 0, len(t))
+		for _, it := range t {
+			s, ok := it.(string)
+			if !ok {
+				continue
+			}
+			tmp = append(tmp, s)
+		}
+		return clean(tmp)
+	case string:
+		s := strings.TrimSpace(t)
+		if s == "" {
+			return nil, false
+		}
+		if strings.Contains(s, ",") {
+			parts := strings.Split(s, ",")
+			return clean(parts)
+		}
+		return clean([]string{s})
+	default:
+		return nil, false
+	}
+}
+
+func isAllDigits(s string) bool {
+	for _, r := range s {
+		if r < '0' || r > '9' {
+			return false
+		}
+	}
+	return len(s) > 0
+}
```
48 config/validate_test.go (new file)

@@ -0,0 +1,48 @@

```go
package config

import (
	"strings"
	"testing"
	"time"
)

func TestValidate_RouteKindsEmptyIsAllowed(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []RouteConfig{
			{Sink: "sink1", Kinds: nil},        // omitted
			{Sink: "sink1", Kinds: []string{}}, // explicit empty
		},
	}

	if err := cfg.Validate(); err != nil {
		t.Fatalf("expected no error, got: %v", err)
	}
}

func TestValidate_RouteKindsRejectsBlankEntries(t *testing.T) {
	cfg := &Config{
		Sources: []SourceConfig{
			{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
		},
		Sinks: []SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []RouteConfig{
			{Sink: "sink1", Kinds: []string{"observation", " ", "alert"}},
		},
	}

	err := cfg.Validate()
	if err == nil {
		t.Fatalf("expected error, got nil")
	}
	if !strings.Contains(err.Error(), "routes[0].kinds[1]") {
		t.Fatalf("expected error to mention blank kind entry, got: %v", err)
	}
}
```
```diff
@@ -6,10 +6,16 @@ import (
 	"time"
 
 	"gitea.maximumdirect.net/ejr/feedkit/event"
+	"gitea.maximumdirect.net/ejr/feedkit/logging"
 	"gitea.maximumdirect.net/ejr/feedkit/pipeline"
 	"gitea.maximumdirect.net/ejr/feedkit/sinks"
 )
 
+// Logger is a printf-style logger used throughout dispatch.
+// It is an alias to the shared feedkit logging type so callers can pass
+// one function everywhere without type mismatch friction.
+type Logger = logging.Logf
+
 type Dispatcher struct {
 	In <-chan event.Event
 
@@ -35,8 +41,6 @@ type Route struct {
 	Kinds map[event.Kind]bool
 }
 
-type Logger func(format string, args ...any)
-
 func (d *Dispatcher) Run(ctx context.Context, logf Logger) error {
 	if d.In == nil {
 		return fmt.Errorf("dispatcher.Run: In channel is nil")
```
89 dispatch/routes.go (new file)

@@ -0,0 +1,89 @@

```go
package dispatch

import (
	"fmt"
	"strings"

	"gitea.maximumdirect.net/ejr/feedkit/config"
	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// CompileRoutes converts config.Config routes into dispatch.Route rules.
//
// Behavior:
//   - If cfg.Routes is empty, we default to "all sinks receive all kinds".
//     (Implemented as one Route per sink with Kinds == nil.)
//   - If a specific route's kinds: is omitted or empty, that route matches ALL kinds.
//     (Also compiled as Kinds == nil.)
//   - Kind strings are normalized via event.ParseKind (lowercase + trim).
//
// Note: config.Validate() ensures route.sink references a known sink and rejects
// blank kind entries. We re-check a few invariants here anyway so CompileRoutes
// is safe to call even if a daemon chooses not to call Validate().
func CompileRoutes(cfg *config.Config) ([]Route, error) {
	if cfg == nil {
		return nil, fmt.Errorf("dispatch.CompileRoutes: cfg is nil")
	}

	if len(cfg.Sinks) == 0 {
		return nil, fmt.Errorf("dispatch.CompileRoutes: cfg has no sinks")
	}

	// Build a quick lookup of sink names (exact match; no normalization).
	sinkNames := make(map[string]bool, len(cfg.Sinks))
	for i, s := range cfg.Sinks {
		if strings.TrimSpace(s.Name) == "" {
			return nil, fmt.Errorf("dispatch.CompileRoutes: sinks[%d].name is empty", i)
		}
		sinkNames[s.Name] = true
	}

	// Default routing: everything to every sink.
	if len(cfg.Routes) == 0 {
		out := make([]Route, 0, len(cfg.Sinks))
		for _, s := range cfg.Sinks {
			out = append(out, Route{
				SinkName: s.Name,
				Kinds:    nil, // nil/empty map means "all kinds"
			})
		}
		return out, nil
	}

	out := make([]Route, 0, len(cfg.Routes))

	for i, r := range cfg.Routes {
		sink := r.Sink
		if strings.TrimSpace(sink) == "" {
			return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink is required", i)
		}
		if !sinkNames[sink] {
			return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink references unknown sink %q", i, sink)
		}

		// If kinds is omitted/empty, this route matches all kinds.
		if len(r.Kinds) == 0 {
			out = append(out, Route{
				SinkName: sink,
				Kinds:    nil,
			})
			continue
		}

		kinds := make(map[event.Kind]bool, len(r.Kinds))
		for j, raw := range r.Kinds {
			k, err := event.ParseKind(raw)
			if err != nil {
				return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].kinds[%d]: %w", i, j, err)
			}
			kinds[k] = true
		}

		out = append(out, Route{
			SinkName: sink,
			Kinds:    kinds,
		})
	}

	return out, nil
}
```
### dispatch/routes_test.go (new file, 67 lines)
```go
package dispatch

import (
	"testing"

	"gitea.maximumdirect.net/ejr/feedkit/config"
)

func TestCompileRoutes_DefaultIsAllSinksAllKinds(t *testing.T) {
	cfg := &config.Config{
		Sinks: []config.SinkConfig{
			{Name: "a", Driver: "stdout"},
			{Name: "b", Driver: "stdout"},
		},
		// Routes omitted => default
	}

	routes, err := CompileRoutes(cfg)
	if err != nil {
		t.Fatalf("CompileRoutes error: %v", err)
	}
	if len(routes) != 2 {
		t.Fatalf("expected 2 routes, got %d", len(routes))
	}

	// Order should match cfg.Sinks order (deterministic).
	if routes[0].SinkName != "a" || routes[1].SinkName != "b" {
		t.Fatalf("unexpected route order: %+v", routes)
	}

	for _, r := range routes {
		if len(r.Kinds) != 0 {
			t.Fatalf("expected nil/empty kinds for default routes, got: %+v", r.Kinds)
		}
	}
}

func TestCompileRoutes_EmptyKindsMeansAllKinds(t *testing.T) {
	cfg := &config.Config{
		Sinks: []config.SinkConfig{
			{Name: "sink1", Driver: "stdout"},
		},
		Routes: []config.RouteConfig{
			{Sink: "sink1"},                    // omitted kinds
			{Sink: "sink1", Kinds: nil},        // explicit nil
			{Sink: "sink1", Kinds: []string{}}, // explicit empty
		},
	}

	routes, err := CompileRoutes(cfg)
	if err != nil {
		t.Fatalf("CompileRoutes error: %v", err)
	}

	if len(routes) != 3 {
		t.Fatalf("expected 3 routes, got %d", len(routes))
	}

	for i, r := range routes {
		if r.SinkName != "sink1" {
			t.Fatalf("route[%d] unexpected sink: %q", i, r.SinkName)
		}
		if len(r.Kinds) != 0 {
			t.Fatalf("route[%d] expected nil/empty kinds (match all), got: %+v", i, r.Kinds)
		}
	}
}
```
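In configuration terms, the cases these tests exercise would look roughly like the YAML below. This is a hedged sketch: it assumes the YAML keys mirror the `SinkConfig`/`RouteConfig` field names (`name`, `driver`, `sink`, `kinds`), which the exact tag names in `config` may spell differently.

```yaml
sinks:
  - name: stdout
    driver: stdout
  - name: pager
    driver: stdout

# Omitting routes: entirely sends every kind to every sink.
routes:
  - sink: stdout          # no kinds: this route matches all kinds
  - sink: pager
    kinds: [alert]        # only "alert" events reach the pager sink
```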
### doc.go (new file, 334 lines)
```go
// Package feedkit provides domain-agnostic plumbing for "feed processing daemons".
//
// A feed daemon polls one or more upstream providers (HTTP APIs, RSS, etc.),
// converts upstream items into a normalized internal representation, applies
// lightweight policy (dedupe/rate-limit/filters), and emits events to one or
// more sinks (stdout, files, Postgres, brokers, ...).
//
// feedkit is intentionally NOT a framework. It supplies small, composable
// primitives that concrete daemons wire together in main.go (or via a small
// optional Runner helper; see "Future additions").
//
// # Conceptual pipeline
//
//	Collect → Normalize → Filter/Policy → Persist/Emit → Signal
//
// In feedkit today, that maps to:
//
//	Collect:   sources.Source + scheduler.Scheduler
//	Normalize: (optional) normalize.Processor (or domain code inside Source.Poll)
//	Policy:    pipeline.Pipeline (Processor chain; dedupe/ratelimit are planned)
//	Emit:      dispatch.Dispatcher + dispatch.Fanout
//	Sinks:     sinks.Sink (+ sinks.Registry to build from config)
//	Config:    config.Load + config.Config validation
//
// # Public packages (API surface)
//
//   - config
//     YAML configuration types and loader/validator.
//       - config.Load(path) (*config.Config, error)
//       - config.Config: Sources, Sinks, Routes
//       - config.SourceConfig / SinkConfig include Params map[string]any with
//         convenience helpers: ParamString / ParamStringDefault,
//         ParamBool / ParamBoolDefault, ParamInt / ParamIntDefault,
//         ParamDuration / ParamDurationDefault, ParamStringSlice.
//
//   - event
//     Domain-agnostic event envelope moved through the system.
//       - event.Event includes ID, Kind, Source, timestamps, Schema, Payload.
//       - event.Kind is stringly typed; event.ParseKind normalizes/validates.
//
//   - sources
//     Extension point for domain-specific polling jobs.
//       - sources.Source interface: Name(), Kind(), Poll(ctx)
//       - sources.Registry lets daemons register driver factories and build
//         sources from config.SourceConfig.
//
//   - scheduler
//     Runs sources on a cadence and publishes emitted events onto a channel.
//       - scheduler.Scheduler{Jobs, Out, Logf}.Run(ctx)
//       - scheduler.Job: {Source, Every, Jitter}
//
//   - pipeline
//     Optional processing chain between scheduler and dispatch.
//       - pipeline.Pipeline{Processors}.Process(ctx, event)
//       - pipeline.Processor can mutate, drop (return nil), or error.
//       - dedupe/ratelimit processors are placeholders (planned).
//
//   - normalize
//     Optional normalization hook for splitting "fetch" from "transform".
//
//     Many domains (like weather) ingest multiple upstream providers whose payloads
//     differ. A common evolution is to keep sources small and focused on polling,
//     and move mapping/normalization into a dedicated stage.
//
//     feedkit provides this as an OPTIONAL pipeline processor:
//       - normalize.Normalizer: domain-implemented mapping logic
//       - normalize.Registry: holds normalizers and selects one by Match()
//       - normalize.Processor: adapts Registry into a pipeline.Processor
//
//     Normalization is NOT required:
//       - If you do all normalization inside Source.Poll, you can ignore this package.
//       - If normalize.Processor is not installed in your pipeline, nothing changes.
//       - If normalize.Processor is installed but no Normalizer matches an event,
//         the event passes through unchanged.
//
//     The key types:
//
//	type Normalizer interface {
//		// Match returns true if this normalizer should handle the event.
//		// Matching is intentionally flexible: match on Schema, Kind, Source,
//		// or any combination.
//		Match(e event.Event) bool
//
//		// Normalize converts the incoming event into a new (or modified) event.
//		//
//		// Return values:
//		//   - (out, nil) where out != nil: emit the normalized event
//		//   - (nil, nil): drop the event (policy drop)
//		//   - (nil, err): fail the pipeline
//		Normalize(ctx context.Context, in event.Event) (*event.Event, error)
//	}
//
//	type Registry struct { ... }
//
//	func (r *Registry) Register(n Normalizer)
//
//	// Normalize finds the first matching normalizer (in registration order) and applies it.
//	// If none match, it returns the input event unchanged.
//	func (r *Registry) Normalize(ctx context.Context, in event.Event) (*event.Event, error)
//
//	// Processor implements pipeline.Processor and calls into the Registry.
//	// Optional behavior:
//	//   - If Registry is nil, Processor is a no-op pass-through.
//	//   - If RequireMatch is false (default), non-matching events pass through.
//	//   - If RequireMatch is true, non-matching events are treated as errors.
//	type Processor struct {
//		Registry     *Registry
//		RequireMatch bool
//	}
//
//     "First match wins":
//     Registry applies the first Normalizer whose Match() returns true.
//     This is intentional: normalization is usually a single mapping step from a
//     raw schema into a canonical schema. If you want multiple sequential transforms,
//     model them as multiple pipeline processors.
//
//     Recommended convention: match by Event.Schema.
//     Schema gives you a versionable selector that doesn't depend on source names.
//     A common pattern is:
//       - sources emit "raw" events with Schema like
//         "raw.openweather.current.v1", "raw.openmeteo.current.v1",
//         "raw.nws.observation.v1"
//       - normalizers transform them into canonical domain schemas like
//         "weather.observation.v1", "weather.forecast.v1", "weather.alert.v1"
//
//     What is a "raw event"?
//     feedkit does not prescribe the raw payload representation.
//     A raw payload is typically one of:
//       - json.RawMessage (recommended for JSON APIs)
//       - []byte (raw bytes)
//       - map[string]any (already-decoded but untyped JSON)
//
//     The only hard requirement enforced by feedkit is Event.Validate():
//       - ID, Kind, Source, EmittedAt must be set
//       - Payload must be non-nil
//
//     If you use raw events, you still must provide Event.Kind. Typical approaches:
//       - set Kind to the intended canonical kind (e.g. "observation") even before normalization
//       - or set Kind to a domain-defined "raw_*" kind and normalize it later
//
//     The simplest approach is: set Kind to the final kind early, and use Schema
//     to describe the raw-vs-normalized payload shape.
//
//     Wiring example (daemon main.go): install normalize.Processor at the front
//     of your pipeline:
//
//	normReg := &normalize.Registry{}
//
//	normReg.Register(normalize.Func{
//		Name: "openweather current -> weather.observation.v1",
//		MatchFn: func(e event.Event) bool {
//			return e.Schema == "raw.openweather.current.v1"
//		},
//		NormalizeFn: func(ctx context.Context, in event.Event) (*event.Event, error) {
//			// 1) interpret in.Payload (json.RawMessage / []byte / map)
//			// 2) build canonical domain payload
//			// 3) return updated event
//			out := in
//			out.Schema = "weather.observation.v1"
//			// Optionally adjust Kind, EffectiveAt, etc.
//			out.Payload = /* canonical weather observation struct */
//			return &out, nil
//		},
//	})
//
//	p := &pipeline.Pipeline{
//		Processors: []pipeline.Processor{
//			normalize.Processor{Registry: normReg}, // optional stage
//			// dedupe.New(...), ratelimit.New(...), ...
//		},
//	}
//
//     If the event does not match any normalizer, it passes through unmodified.
//
//   - sinks
//     Extension point for output adapters.
//       - sinks.Sink interface: Name(), Consume(ctx, event)
//       - sinks.Registry to register driver factories and build sinks from config
//       - sinks.RegisterBuiltins registers feedkit-provided sink drivers
//         (stdout/file/postgres/rabbitmq; some are currently stubs).
//
//   - dispatch
//     Routes processed events to sinks, and isolates slow sinks via per-sink queues.
//       - dispatch.Dispatcher{In, Pipeline, Sinks, Routes, ...}.Run(ctx, logf)
//       - dispatch.Fanout: one buffered queue + worker goroutine per sink
//       - dispatch.CompileRoutes(*config.Config) compiles cfg.Routes into []dispatch.Route.
//         If routes: is omitted, it defaults to "all sinks receive all kinds". If a route
//         omits kinds: (or sets it empty), that route matches all kinds.
//
//   - logging
//     Shared logger type used across feedkit packages.
//       - logging.Logf is a printf-style logger signature.
//
// # Typical wiring (what a daemon does in main.go)
//
//  1. Load config (domain code may add domain-specific validation).
//  2. Register and build sources from config.Sources using sources.Registry.
//  3. Register and build sinks from config.Sinks using sinks.Registry.
//  4. Compile routes (typically via dispatch.CompileRoutes).
//  5. Create an event bus channel.
//  6. Start scheduler (sources → bus).
//  7. Start dispatcher (bus → pipeline → routes → sinks).
//
// A sketch:
//
//	cfg, _ := config.Load("config.yml")
//
//	// Build sources (domain registers its drivers).
//	srcReg := sources.NewRegistry()
//	// domain: srcReg.Register("openweather_observation", newOpenWeatherSource)
//	// ...
//
//	var jobs []scheduler.Job
//	for _, sc := range cfg.Sources {
//		src, _ := srcReg.Build(sc)
//		jobs = append(jobs, scheduler.Job{Source: src, Every: sc.Every.Duration})
//	}
//
//	// Build sinks (feedkit can register builtins).
//	sinkReg := sinks.NewRegistry()
//	sinks.RegisterBuiltins(sinkReg)
//	builtSinks := map[string]sinks.Sink{}
//	for _, sk := range cfg.Sinks {
//		s, _ := sinkReg.Build(sk)
//		builtSinks[sk.Name] = s
//	}
//
//	// Compile routes.
//	routes, _ := dispatch.CompileRoutes(cfg)
//
//	// Event bus.
//	bus := make(chan event.Event, 256)
//
//	// Optional normalization registry + pipeline.
//	normReg := &normalize.Registry{}
//	// domain registers normalizers into normReg...
//
//	p := &pipeline.Pipeline{
//		Processors: []pipeline.Processor{
//			normalize.Processor{Registry: normReg}, // optional
//			// dedupe/ratelimit/etc...
//		},
//	}
//
//	// Scheduler.
//	s := &scheduler.Scheduler{Jobs: jobs, Out: bus, Logf: logf}
//
//	// Dispatcher.
//	d := &dispatch.Dispatcher{
//		In:       bus,
//		Pipeline: p,
//		Sinks:    builtSinks,
//		Routes:   routes,
//	}
//
//	go s.Run(ctx)
//	return d.Run(ctx, logf)
//
// # Conventions (recommended, not required)
//
//   - Event.ID should be stable for dedupe/storage (often "<provider>:<upstream-id>").
//   - Event.Kind should be lowercase ("observation", "alert", "article", ...).
//   - Event.Schema should identify the payload shape/version
//     (e.g. "weather.observation.v1").
//
// # Context and cancellation
//
// All blocking or I/O work should honor ctx.Done():
//   - sources.Source.Poll should pass ctx to HTTP calls, etc.
//   - sinks.Sink.Consume should honor ctx (Fanout timeouts only help if sinks cooperate).
//   - normalizers should honor ctx if they do expensive work (rare; usually pure transforms).
//
// # Future additions (likely)
//
//   - A small Runner helper that performs the standard wiring (load config,
//     build sources/sinks/routes, run scheduler+dispatcher, handle shutdown).
//
// # Non-goals
//
// feedkit does not define domain payload schemas, does not enforce domain kinds,
// and does not embed domain-specific validation rules. Those live in each
// concrete daemon/module (weatherfeeder, newsfeeder, ...).
package feedkit
```
### logging/logging.go (new file, 8 lines)
```go
package logging

// Logf is the shared printf-style logger signature used across feedkit.
//
// Keeping this in one place avoids the "scheduler.Logger vs dispatch.Logger"
// friction and makes it trivial for downstream apps to pass a single log
// function throughout the system.
type Logf func(format string, args ...any)
```
### normalize/doc.go (new file, 17 lines)
```go
// Package normalize provides an OPTIONAL normalization hook for feedkit pipelines.
//
// Motivation:
// Many daemons have sources that:
//  1. fetch raw upstream data (often JSON), and
//  2. transform it into a domain's normalized payload format.
//
// Doing both steps inside Source.Poll works, but tends to make sources large and
// encourages duplication (unit conversions, common mapping helpers, etc.).
//
// This package lets a source emit a "raw" event (e.g., Schema="raw.openweather.current.v1",
// Payload=json.RawMessage), and then a normalization processor can convert it into a
// normalized event (e.g., Schema="weather.observation.v1", Payload=WeatherObservation{}).
//
// Key property: normalization is optional.
// If no registered Normalizer matches an event, it passes through unchanged.
package normalize
```
### normalize/normalize.go (new file, 76 lines)
```go
package normalize

import (
	"context"
	"fmt"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// Normalizer converts one event shape into another.
//
// A Normalizer is typically domain-owned code (weatherfeeder/newsfeeder/...)
// that knows how to interpret a specific upstream payload and produce a
// normalized payload.
//
// Normalizers are selected via Match(). The matching strategy is intentionally
// flexible: implementations may match on Schema, Kind, Source, or any other
// Event fields.
type Normalizer interface {
	// Match reports whether this normalizer applies to the given event.
	//
	// Common patterns:
	//   - match on e.Schema (recommended for versioning)
	//   - match on e.Source (useful if Schema is empty)
	//   - match on (e.Kind + e.Source), etc.
	Match(e event.Event) bool

	// Normalize transforms the incoming event into a new (or modified) event.
	//
	// Return values:
	//   - (out, nil) where out != nil: emit the normalized event
	//   - (nil, nil): drop the event (treat as policy drop)
	//   - (nil, err): fail the pipeline
	//
	// Note: If you simply want to pass the event through unchanged, return &in.
	Normalize(ctx context.Context, in event.Event) (*event.Event, error)
}

// Func is an ergonomic adapter that lets you define a Normalizer with functions.
//
// Example:
//
//	n := normalize.Func{
//		MatchFn: func(e event.Event) bool { return e.Schema == "raw.openweather.current.v1" },
//		NormalizeFn: func(ctx context.Context, in event.Event) (*event.Event, error) {
//			// ... map in.Payload -> normalized payload ...
//		},
//	}
type Func struct {
	MatchFn     func(e event.Event) bool
	NormalizeFn func(ctx context.Context, in event.Event) (*event.Event, error)

	// Optional: helps produce nicer panic/error messages if something goes wrong.
	Name string
}

func (f Func) Match(e event.Event) bool {
	if f.MatchFn == nil {
		return false
	}
	return f.MatchFn(e)
}

func (f Func) Normalize(ctx context.Context, in event.Event) (*event.Event, error) {
	if f.NormalizeFn == nil {
		return nil, fmt.Errorf("normalize.Func(%s): NormalizeFn is nil", f.safeName())
	}
	return f.NormalizeFn(ctx, in)
}

func (f Func) safeName() string {
	if f.Name == "" {
		return "<unnamed>"
	}
	return f.Name
}
```
### normalize/registry.go (new file, 140 lines)
```go
package normalize

import (
	"context"
	"fmt"
	"sync"

	"gitea.maximumdirect.net/ejr/feedkit/event"
)

// Registry holds a set of Normalizers and selects one for a given event.
//
// Selection rule (simple + predictable):
//   - iterate in registration order
//   - the FIRST Normalizer whose Match(e) returns true is used
//
// If none match, the event passes through unchanged.
//
// Why "first match wins"?
// Normalization is usually a single mapping step from a raw schema/version into
// a normalized schema/version. If you want multiple transformation steps,
// model them as multiple pipeline processors (which feedkit already supports).
type Registry struct {
	mu sync.RWMutex
	ns []Normalizer
}

// Register adds a normalizer to the registry.
//
// Register panics if n is nil; this is a programmer error and should fail fast.
func (r *Registry) Register(n Normalizer) {
	if n == nil {
		panic("normalize.Registry.Register: normalizer cannot be nil")
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	r.ns = append(r.ns, n)
}

// Normalize finds the first matching Normalizer and applies it.
//
// If no normalizer matches, it returns the input event unchanged.
//
// If a normalizer returns (nil, nil), the event is dropped.
func (r *Registry) Normalize(ctx context.Context, in event.Event) (*event.Event, error) {
	if r == nil {
		// Nil registry is a valid "feature off" state.
		out := in
		return &out, nil
	}

	r.mu.RLock()
	ns := append([]Normalizer(nil), r.ns...) // copy for safe iteration outside lock
	r.mu.RUnlock()

	for _, n := range ns {
		if n == nil {
			// Shouldn't happen (Register panics), but guard anyway.
			continue
		}
		if !n.Match(in) {
			continue
		}

		out, err := n.Normalize(ctx, in)
		if err != nil {
			return nil, fmt.Errorf("normalize: normalizer failed: %w", err)
		}
		// out may be nil to signal "drop".
		return out, nil
	}

	// No match: pass through unchanged.
	out := in
	return &out, nil
}

// Processor adapts a Registry into a pipeline Processor.
//
// It implements:
//
//	Process(ctx context.Context, in event.Event) (*event.Event, error)
//
// which matches feedkit/pipeline.Processor.
//
// Optionality:
//   - If Registry is nil, Processor becomes a no-op pass-through.
//   - If Registry has no matching normalizer for an event, that event passes through unchanged.
type Processor struct {
	Registry *Registry

	// If true, events that do not match any normalizer cause an error.
	// Default is false (pass-through).
	RequireMatch bool
}

// Process implements the pipeline.Processor interface.
func (p Processor) Process(ctx context.Context, in event.Event) (*event.Event, error) {
	// "Feature off": no registry means no normalization.
	if p.Registry == nil {
		out := in
		return &out, nil
	}

	out, err := p.Registry.Normalize(ctx, in)
	if err != nil {
		return nil, err
	}

	if out == nil {
		// Dropped by normalization policy.
		return nil, nil
	}

	if p.RequireMatch {
		// Detect "no-op pass-through due to no match" by checking whether a match existed.
		// We do this with a cheap second pass to avoid changing Normalize()'s signature.
		// (This is rare to enable; correctness/clarity > micro-optimization.)
		if !p.Registry.hasMatch(in) {
			return nil, fmt.Errorf("normalize: no normalizer matched event (id=%s kind=%s source=%s schema=%q)",
				in.ID, in.Kind, in.Source, in.Schema)
		}
	}

	return out, nil
}

func (r *Registry) hasMatch(in event.Event) bool {
	if r == nil {
		return false
	}
	r.mu.RLock()
	defer r.mu.RUnlock()
	for _, n := range r.ns {
		if n != nil && n.Match(in) {
			return true
		}
	}
	return false
}
```
### scheduler (modified)

```diff
@@ -8,9 +8,15 @@ import (
 	"time"
 
 	"gitea.maximumdirect.net/ejr/feedkit/event"
+	"gitea.maximumdirect.net/ejr/feedkit/logging"
 	"gitea.maximumdirect.net/ejr/feedkit/sources"
 )
+
+// Logger is a printf-style logger used throughout scheduler.
+// It is an alias to the shared feedkit logging type so callers can pass
+// one function everywhere without type mismatch friction.
+type Logger = logging.Logf
 
 type Job struct {
 	Source sources.Source
 	Every  time.Duration
@@ -23,8 +29,6 @@ type Job struct {
 	Jitter time.Duration
 }
 
-type Logger func(format string, args ...any)
-
 type Scheduler struct {
 	Jobs []Job
 	Out  chan<- event.Event
```
### postgres sink (modified)

```diff
@@ -29,7 +29,7 @@ func (p *PostgresSink) Consume(ctx context.Context, e event.Event) error {
 	// Boundary validation: if something upstream violated invariants,
 	// surface it loudly rather than printing partial nonsense.
 	if err := e.Validate(); err != nil {
-		return fmt.Errorf("rabbitmq sink: invalid event: %w", err)
+		return fmt.Errorf("postgres sink: invalid event: %w", err)
 	}
 
 	// TODO implement Postgres transaction
```
### transport/http.go (new file, 70 lines)
```go
package transport

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// maxResponseBodyBytes is a hard safety limit on HTTP response bodies.
// API responses should be small, so this protects us from accidental
// or malicious large responses.
const maxResponseBodyBytes = 4 << 20 // 4 MiB

// DefaultHTTPTimeout is the standard timeout used by feedkit HTTP sources.
// Individual drivers may override this if they have a specific need.
const DefaultHTTPTimeout = 10 * time.Second

// NewHTTPClient returns a simple http.Client configured with a timeout.
// If timeout <= 0, DefaultHTTPTimeout is used.
func NewHTTPClient(timeout time.Duration) *http.Client {
	if timeout <= 0 {
		timeout = DefaultHTTPTimeout
	}
	return &http.Client{Timeout: timeout}
}

// FetchBody performs a GET request and returns the response body, rejecting
// non-2xx statuses, empty bodies, and bodies over maxResponseBodyBytes.
func FetchBody(ctx context.Context, client *http.Client, url, userAgent, accept string) ([]byte, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	if userAgent != "" {
		req.Header.Set("User-Agent", userAgent)
	}
	if accept != "" {
		req.Header.Set("Accept", accept)
	}

	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	if res.StatusCode < 200 || res.StatusCode >= 300 {
		return nil, fmt.Errorf("HTTP %s", res.Status)
	}

	// Read at most maxResponseBodyBytes + 1 so we can detect overflow.
	limited := io.LimitReader(res.Body, maxResponseBodyBytes+1)

	b, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}

	if len(b) == 0 {
		return nil, fmt.Errorf("empty response body")
	}

	if len(b) > maxResponseBodyBytes {
		return nil, fmt.Errorf("response body too large (>%d bytes)", maxResponseBodyBytes)
	}

	return b, nil
}
```