11 Commits

SHA1 Message Date
215afe1acf Added a dedupe processor, and moved processor packages under processors/* 2026-03-16 18:17:53 -05:00
4572c53580 Updated sinks to add a functional postgres sink API. 2026-03-16 14:54:57 -05:00
96039f6530 refactor!: introduce generic processors registry and remove normalize registry adapter
- add new `processors` package with canonical `Processor` interface
- add `processors.Registry` with Register/Build/BuildChain factory model
- switch `pipeline.Pipeline` to `[]processors.Processor`
- replace `normalize.Registry` + registry adapter with direct `normalize.Processor`
- remove `normalize/registry.go`
- update root docs to position normalize as one optional processing stage
- add tests for processors registry, normalize processor behavior, and pipeline flow

BREAKING CHANGE:
- `pipeline.Processor` removed; use `processors.Processor`
- `normalize.Registry` and old normalize processor adapter APIs removed
- downstream daemons must update processor wiring to new `processors.Registry`
  and `normalize.NewProcessor(...)`
2026-03-16 13:14:24 -05:00
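For downstream daemons, the migration is mechanical. A minimal sketch, assuming a domain-owned `myNormalizers` slice (the registry and factory calls are the ones introduced by this commit):

```go
reg := processors.NewRegistry()
reg.Register("normalize", func() (processors.Processor, error) {
    // normalize.NewProcessor replaces the removed registry adapter;
    // false means unmatched events pass through unchanged.
    return normalize.NewProcessor(myNormalizers, false), nil
})
chain, err := reg.BuildChain([]string{"normalize"})
if err != nil {
    // handle wiring error
}
p := &pipeline.Pipeline{Processors: chain}
```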
6c5f95ad26 feat(sources)!: split source contracts into PollSource/StreamSource and add mode-aware source config
- Introduce explicit source interfaces: sources.PollSource and sources.StreamSource, with shared sources.Input (Name() only).
- Remove mandatory Kind() from the base source contract to support sources that emit multiple kinds.
- Add config.SourceMode (poll, stream, or omitted/auto) and SourceConfig.Kinds (plural expected kinds), while keeping legacy SourceConfig.Kind for compatibility.
- Enforce mode semantics in config validation (poll requires every, stream forbids every) and detect mode/driver mismatches in sources.Registry.
- Update docs and tests for the new source model and config behavior.
2026-03-15 19:19:19 -05:00
fafba0f01b Refactored the scheduler and source interfaces to accommodate both polling (e.g., HTTP) sources and streaming (e.g., message queue) sources. 2026-02-08 15:03:46 -06:00
3c95fa97cd Updated the README to reflect recent updates to default sinks. 2026-02-07 19:45:26 -06:00
dbca0548b1 Implemented a builtin NATS sink. 2026-02-07 11:35:55 -06:00
9b2c1e5ceb transport: Moved transport/http.go upstream to feedkit (was previously in weatherfeeder). 2026-01-15 19:08:28 -06:00
1d43adcfa0 dispatch: allow empty route kinds (match all) + add routing tests
- config: permit routes[].kinds to be omitted/empty; treat as "all kinds"
- dispatch: compile empty kinds to Route{Kinds:nil} (match all kinds)
- tests: add coverage for route compilation + config validation edge cases

Files:
- config/load.go
- config/config.go
- dispatch/routes.go
- config/validate_test.go
- dispatch/routes_test.go
2026-01-15 18:26:45 -06:00
a6c133319a feat(feedkit): add optional normalization hook and document external API
Introduce an optional normalization stage for feedkit pipelines via the new
normalize package. This adds:

- normalize.Normalizer interface with flexible Match() semantics
- normalize.Registry for ordered normalizer selection (first match wins)
- normalize.Processor adapter implementing pipeline.Processor
- Pass-through behavior when no normalizer matches (normalization is optional)
- Func helper for ergonomic normalizer definitions

Update root doc.go to fully document the normalization model, its role in the
pipeline, recommended conventions (Schema-based matching, raw vs normalized
events), and concrete wiring examples. The documentation now serves as a
complete external-facing API specification for downstream daemons such as
weatherfeeder.

This change preserves feedkit’s non-framework philosophy while enabling a
clean separation between data collection and domain normalization.
2026-01-13 18:23:43 -06:00
09bc65e947 feedkit: ergonomics pass (shared logger, route compiler, param helpers)
- Add logging.Logf as the canonical printf-style logger type used across feedkit.
  - Update scheduler and dispatch to alias their Logger types to logging.Logf.
  - Eliminates type-mismatch friction when wiring one log function through the system.

- Add dispatch.CompileRoutes(*config.Config) ([]dispatch.Route, error)
  - Compiles config routes into dispatch routes with event.ParseKind normalization.
  - If routes: is omitted, defaults to “all sinks receive all kinds”.

- Expand config param helpers for both SourceConfig and SinkConfig
  - Add ParamBool/ParamInt/ParamDuration/ParamStringSlice (+ Default variants).
  - Supports common YAML-decoded types (bool/int/float/string, []any, etc.)
  - Keeps driver code cleaner and reduces repeated type assertions.

- Fix Postgres sink validation error prefix ("postgres sink", not "rabbitmq sink").
2026-01-13 14:40:29 -06:00
40 changed files with 3802 additions and 170 deletions

README.md (+126)

@@ -1,3 +1,127 @@
# feedkit
Feedkit provides core interfaces and plumbing for applications that ingest and process feeds.
`feedkit` provides domain-agnostic plumbing for feed-processing daemons.
A daemon built on feedkit typically:
- ingests upstream input (polling APIs or consuming streams)
- emits domain-agnostic `event.Event` values
- applies optional processing (normalization, dedupe, policy)
- routes events to sinks (stdout, NATS, files, databases, etc.)
## Philosophy
feedkit is not a framework. It provides small composable packages and leaves
lifecycle, domain schemas, and domain-specific validation in your daemon.
## Conceptual pipeline
Collect -> Process (optional stages, including dedupe + normalize) -> Route -> Emit
| Stage | Package(s) |
|---|---|
| Collect | `sources`, `scheduler` |
| Process | `pipeline`, `processors`, `processors/dedupe`, `processors/normalize` (optional stages) |
| Route | `dispatch` |
| Emit | `sinks` |
| Configure | `config` |
## Core packages
### `config`
Loads YAML config with strict decoding and domain-agnostic validation.
`SourceConfig` supports both source modes:
- `mode: poll` requires `every`
- `mode: stream` forbids `every`
- omitted `mode` means auto (inferred from the registered driver type)
It also supports optional expected source kinds:
- `kinds: ["observation", "alert"]` (preferred)
- `kind: "observation"` (legacy fallback; see the sketch below)
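A minimal sketch of these rules in code, mirroring the validation tests added in this change (names are illustrative):

```go
cfg := &config.Config{
    Sources: []config.SourceConfig{{
        Name:   "wx",
        Driver: "openmeteo_observation",
        Mode:   config.SourceModePoll,                       // poll mode...
        Every:  config.Duration{Duration: 15 * time.Minute}, // ...requires every
        Kinds:  []string{"observation"},                     // preferred over legacy kind
    }},
    Sinks: []config.SinkConfig{{Name: "out", Driver: "stdout"}},
}
if err := cfg.Validate(); err != nil {
    log.Fatal(err)
}
```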
### `event`
Defines the domain-agnostic event envelope (`event.Event`) used across the system.
### `sources`
Defines source interfaces and driver registry:
```go
type Input interface {
    Name() string
}

type PollSource interface {
    Input
    Poll(ctx context.Context) ([]event.Event, error)
}

type StreamSource interface {
    Input
    Run(ctx context.Context, out chan<- event.Event) error
}
```
Notes:
- a poll can emit `0..N` events
- stream sources emit events continuously
- a single source may emit multiple event kinds
- driver implementations live in downstream daemons and are registered via `sources.Registry` (a minimal driver sketch follows)
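A minimal poll-driver sketch (the `heartbeat` source is hypothetical; the `event.Event` fields are the envelope fields documented in `doc.go`):

```go
type heartbeat struct{ name string }

func (h *heartbeat) Name() string { return h.name }

// Poll emits one synthetic observation per tick; a real driver would fetch
// upstream data here and honor ctx for cancellation.
func (h *heartbeat) Poll(ctx context.Context) ([]event.Event, error) {
    if err := ctx.Err(); err != nil {
        return nil, err
    }
    return []event.Event{{
        ID:        fmt.Sprintf("%s-%d", h.name, time.Now().UnixNano()),
        Kind:      event.Kind("observation"),
        Source:    h.name,
        EmittedAt: time.Now().UTC(),
        Payload:   map[string]any{"ok": true},
    }}, nil
}
```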
### `scheduler`
Runs one goroutine per source job:
- poll sources: cadence driven (`every` + jitter)
- stream sources: continuous run loop
### `pipeline`
Optional processing chain between collection and dispatch.
Processors can transform, drop, or reject events.
### `processors`
Defines the generic processor interface and a named-driver registry used by
daemons to build ordered processor chains.
### `processors/dedupe`
Built-in in-memory LRU dedupe processor that drops repeated events by `Event.ID`.
### `processors/normalize`
Concrete normalization processor implementation. Typical use: sources emit raw
payload events, then a normalize stage maps them to canonical schemas.
### `dispatch`
Compiles routes and fans out events to sinks with per-sink queue/worker isolation.
### `sinks`
Defines sink interface and sink registry. Built-ins include:
- `stdout`
- `nats`
- `postgres`
Detailed Postgres configuration and wiring examples live in package docs:
`sinks/doc.go`.
## Typical wiring
1. Load config.
2. Register/build sources from `cfg.Sources`.
3. Register/build sinks from `cfg.Sinks`.
4. Compile routes.
5. Start scheduler (`sources -> bus`).
6. Start dispatcher (`bus -> pipeline -> sinks`). A condensed sketch follows.
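A condensed sketch with error handling elided (it mirrors the fuller example in the root `doc.go`):

```go
cfg, _ := config.Load("config.yml")

srcReg := sources.NewRegistry()
// ... register domain poll/stream drivers on srcReg ...

var jobs []scheduler.Job
for _, sc := range cfg.Sources {
    src, _ := srcReg.BuildInput(sc)
    jobs = append(jobs, scheduler.Job{Source: src, Every: sc.Every.Duration})
}

routes, _ := dispatch.CompileRoutes(cfg)

bus := make(chan event.Event, 256)
s := &scheduler.Scheduler{Jobs: jobs, Out: bus, Logf: logf} // logf is a logging.Logf
// Start s, then run a dispatch.Dispatcher with In: bus, the compiled
// routes, and the built sinks (see the dispatch package).
```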
## Non-goals
feedkit intentionally does not:
- define domain payload schemas
- enforce domain-specific event kinds
- own application lifecycle
- prescribe observability stack choices

config/config.go

@@ -21,20 +21,56 @@ type Config struct {
Routes []RouteConfig `yaml:"routes"`
}
// SourceConfig describes one polling job.
// SourceMode selects how a source receives upstream input.
//
// Empty mode means "auto": feedkit infers mode from the registered driver type.
type SourceMode string
const (
SourceModeAuto SourceMode = ""
SourceModePoll SourceMode = "poll"
SourceModeStream SourceMode = "stream"
)
// Normalize lowercases and trims the mode.
func (m SourceMode) Normalize() SourceMode {
switch strings.ToLower(strings.TrimSpace(string(m))) {
case "":
return SourceModeAuto
case string(SourceModePoll):
return SourceModePoll
case string(SourceModeStream):
return SourceModeStream
default:
return SourceMode(strings.ToLower(strings.TrimSpace(string(m))))
}
}
// SourceConfig describes one input source.
//
// This is intentionally generic:
// - driver-specific knobs belong in Params.
// - "kind" is allowed (useful for safety checks / routing), but feedkit does not
// restrict the allowed values.
// - mode controls polling vs streaming behavior.
// - expected emitted kinds are optional and domain-defined.
type SourceConfig struct {
Name string `yaml:"name"`
Driver string `yaml:"driver"` // e.g. "openmeteo_observation", "rss_feed", etc.
Every Duration `yaml:"every"` // "15m", "1m", etc.
// Mode is optional:
// - "poll": Every must be set (>0)
// - "stream": Every must be omitted/zero
// - empty: infer from driver registration type (poll vs stream)
Mode SourceMode `yaml:"mode"`
// Kind is optional and domain-defined. If set, it should be a non-empty string.
// Domains commonly use it to enforce "this source should only emit kind X".
// Every is the poll cadence for poll-mode sources ("15m", "1m", etc.).
Every Duration `yaml:"every"`
// Kinds is optional and domain-defined.
// If set, it describes the expected emitted event kinds for this source.
Kinds []string `yaml:"kinds"`
// Kind is the legacy singular form. Prefer "kinds".
// If both kind and kinds are set, validation fails.
Kind string `yaml:"kind"`
// Params are driver-specific settings (URL, headers, station IDs, API keys, etc.).
@@ -42,6 +78,26 @@ type SourceConfig struct {
Params map[string]any `yaml:"params"`
}
// ExpectedKinds returns normalized expected kinds from config.
// "kinds" takes precedence; "kind" is used as a legacy fallback.
func (cfg SourceConfig) ExpectedKinds() []string {
if len(cfg.Kinds) > 0 {
out := make([]string, 0, len(cfg.Kinds))
for _, k := range cfg.Kinds {
k = strings.TrimSpace(k)
if k == "" {
continue
}
out = append(out, k)
}
return out
}
if k := strings.TrimSpace(cfg.Kind); k != "" {
return []string{k}
}
return nil
}
// SinkConfig describes one output sink adapter.
type SinkConfig struct {
Name string `yaml:"name"`
@@ -54,7 +110,10 @@ type RouteConfig struct {
Sink string `yaml:"sink"` // sink name
// Kinds is domain-defined. feedkit only enforces that each entry is non-empty.
// Whether a given daemon "recognizes" a kind is domain-specific validation.
//
// If Kinds is omitted or empty, the route matches ALL kinds.
// This is useful when you want explicit per-sink routing rules even when a
// particular sink should receive everything.
Kinds []string `yaml:"kinds"`
}
@@ -128,12 +187,3 @@ func (d *Duration) UnmarshalYAML(value *yaml.Node) error {
// Anything else: reject.
return fmt.Errorf("duration must be a string like 15m or an integer minutes, got tag %s", value.Tag)
}
func isAllDigits(s string) bool {
for _, r := range s {
if r < '0' || r > '9' {
return false
}
}
return len(s) > 0
}
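A small usage sketch of the accessors above (the expected values match the new config tests):

```go
sc := config.SourceConfig{
    Mode:  config.SourceMode(" Poll "),
    Kinds: []string{" observation ", "forecast"},
}
fmt.Println(sc.Mode.Normalize())  // "poll"
fmt.Println(sc.ExpectedKinds())   // [observation forecast]
```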

config/config_test.go (new file, 56 lines)

@@ -0,0 +1,56 @@
package config
import (
"reflect"
"testing"
)
func TestSourceConfigExpectedKinds(t *testing.T) {
tests := []struct {
name string
cfg SourceConfig
want []string
}{
{
name: "plural kinds preferred",
cfg: SourceConfig{
Kinds: []string{" observation ", "forecast"},
Kind: "alert",
},
want: []string{"observation", "forecast"},
},
{
name: "legacy singular fallback",
cfg: SourceConfig{
Kind: " alert ",
},
want: []string{"alert"},
},
{
name: "empty kinds",
cfg: SourceConfig{},
want: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := tt.cfg.ExpectedKinds()
if !reflect.DeepEqual(got, tt.want) {
t.Fatalf("ExpectedKinds() = %#v, want %#v", got, tt.want)
}
})
}
}
func TestSourceModeNormalize(t *testing.T) {
if got := SourceMode(" Poll ").Normalize(); got != SourceModePoll {
t.Fatalf("Normalize poll = %q, want %q", got, SourceModePoll)
}
if got := SourceMode("STREAM").Normalize(); got != SourceModeStream {
t.Fatalf("Normalize stream = %q, want %q", got, SourceModeStream)
}
if got := SourceMode("").Normalize(); got != SourceModeAuto {
t.Fatalf("Normalize auto = %q, want %q", got, SourceModeAuto)
}
}


@@ -83,15 +83,41 @@ func (c *Config) Validate() error {
m.Add(fieldErr(path+".driver", "is required (e.g. openmeteo_observation, rss_feed, ...)"))
}
// Every
if s.Every.Duration <= 0 {
m.Add(fieldErr(path+".every", "must be a positive duration (e.g. 15m, 1m, 30s)"))
// Mode
mode := s.Mode.Normalize()
if s.Mode != SourceModeAuto && mode != SourceModePoll && mode != SourceModeStream {
m.Add(fieldErr(path+".mode", `must be one of: "poll", "stream" (or omit for auto)`))
}
// Kind (optional but if present must be non-empty after trimming)
// Every
if s.Every.Duration < 0 {
m.Add(fieldErr(path+".every", "is optional, but must be a positive duration (e.g. 15m, 1m, 30s) if provided"))
} else {
switch mode {
case SourceModePoll:
if s.Every.Duration <= 0 {
m.Add(fieldErr(path+".every", `is required when mode="poll" (e.g. 15m, 1m, 30s)`))
}
case SourceModeStream:
if s.Every.Duration > 0 {
m.Add(fieldErr(path+".every", `must be omitted when mode="stream"`))
}
}
}
// Kind/Kinds (optional)
if s.Kind != "" && len(s.Kinds) > 0 {
m.Add(fieldErr(path+".kind", `cannot be set when "kinds" is provided (use only "kinds")`))
}
if s.Kind != "" && strings.TrimSpace(s.Kind) == "" {
m.Add(fieldErr(path+".kind", "cannot be blank (omit it entirely, or provide a non-empty string)"))
}
for j, k := range s.Kinds {
kpath := fmt.Sprintf("%s.kinds[%d]", path, j)
if strings.TrimSpace(k) == "" {
m.Add(fieldErr(kpath, "kind cannot be empty"))
}
}
// Params can be nil; that's fine.
}
@@ -133,11 +159,8 @@ func (c *Config) Validate() error {
m.Add(fieldErr(path+".sink", fmt.Sprintf("references unknown sink %q (define it under sinks:)", r.Sink)))
}
if len(r.Kinds) == 0 {
// You could relax this later (e.g. empty == "all kinds"), but for now
// keeping it strict prevents accidental "route does nothing".
m.Add(fieldErr(path+".kinds", "must contain at least one kind"))
} else {
// Kinds is optional. If omitted or empty, the route matches ALL kinds.
// If provided, each entry must be non-empty.
for j, k := range r.Kinds {
kpath := fmt.Sprintf("%s.kinds[%d]", path, j)
if strings.TrimSpace(k) == "" {
@@ -145,7 +168,6 @@ func (c *Config) Validate() error {
}
}
}
}
return m.Err()
}

config/params.go

@@ -1,32 +1,21 @@
// feedkit/config/params.go
package config
import "strings"
import (
"math"
"strconv"
"strings"
"time"
)
// ---- SourceConfig param helpers ----
// ParamString returns the first non-empty string found for any of the provided keys.
// Values must actually be strings in the decoded config; other types are ignored.
//
// This keeps cfg.Params flexible (map[string]any) while letting callers stay type-safe.
func (cfg SourceConfig) ParamString(keys ...string) (string, bool) {
if cfg.Params == nil {
return "", false
}
for _, k := range keys {
v, ok := cfg.Params[k]
if !ok || v == nil {
continue
}
s, ok := v.(string)
if !ok {
continue
}
s = strings.TrimSpace(s)
if s == "" {
continue
}
return s, true
}
return "", false
return paramString(cfg.Params, keys...)
}
// ParamStringDefault returns ParamString(keys...) if present; otherwise it returns def.
@@ -38,14 +27,150 @@ func (cfg SourceConfig) ParamStringDefault(def string, keys ...string) string {
return strings.TrimSpace(def)
}
// ParamBool returns the first boolean found for any of the provided keys.
//
// Accepted types in Params:
// - bool
// - string: parsed via strconv.ParseBool ("true"/"false"/"1"/"0", etc.)
func (cfg SourceConfig) ParamBool(keys ...string) (bool, bool) {
return paramBool(cfg.Params, keys...)
}
func (cfg SourceConfig) ParamBoolDefault(def bool, keys ...string) bool {
if v, ok := cfg.ParamBool(keys...); ok {
return v
}
return def
}
// ParamInt returns the first integer-like value found for any of the provided keys.
//
// Accepted types in Params:
// - any integer type (int, int64, uint32, ...)
// - float32/float64 ONLY if it is an exact integer (e.g. 15.0)
// - string: parsed via strconv.Atoi (e.g. "42")
func (cfg SourceConfig) ParamInt(keys ...string) (int, bool) {
return paramInt(cfg.Params, keys...)
}
func (cfg SourceConfig) ParamIntDefault(def int, keys ...string) int {
if v, ok := cfg.ParamInt(keys...); ok {
return v
}
return def
}
// ParamDuration returns the first duration-like value found for any of the provided keys.
//
// Accepted types in Params:
// - time.Duration
// - string: parsed via time.ParseDuration (e.g. "250ms", "30s", "5m")
// - if the string is all digits (e.g. "30"), it is interpreted as SECONDS
// - numeric: interpreted as SECONDS (e.g. 30 => 30s)
//
// Rationale: Param durations are usually timeouts/backoffs; seconds are a sane numeric default.
// If you want minutes/hours, prefer a duration string like "5m" or "1h".
func (cfg SourceConfig) ParamDuration(keys ...string) (time.Duration, bool) {
return paramDuration(cfg.Params, keys...)
}
func (cfg SourceConfig) ParamDurationDefault(def time.Duration, keys ...string) time.Duration {
if v, ok := cfg.ParamDuration(keys...); ok {
return v
}
return def
}
// ParamStringSlice returns the first string-slice-like value found for any of the provided keys.
//
// Accepted types in Params:
// - []string
// - []any where each element is a string
// - string:
// - if it contains commas, split on commas (",") and trim each item
// - otherwise treat as a single-item list
//
// Empty/blank items are removed.
func (cfg SourceConfig) ParamStringSlice(keys ...string) ([]string, bool) {
return paramStringSlice(cfg.Params, keys...)
}
// ---- SinkConfig param helpers ----
// ParamString returns the first non-empty string found for any of the provided keys
// in SinkConfig.Params. (Same rationale as SourceConfig.ParamString.)
func (cfg SinkConfig) ParamString(keys ...string) (string, bool) {
if cfg.Params == nil {
return "", false
return paramString(cfg.Params, keys...)
}
// ParamStringDefault returns ParamString(keys...) if present; otherwise it returns def.
// Symmetric helper for sink implementations.
func (cfg SinkConfig) ParamStringDefault(def string, keys ...string) string {
if s, ok := cfg.ParamString(keys...); ok {
return s
}
return strings.TrimSpace(def)
}
func (cfg SinkConfig) ParamBool(keys ...string) (bool, bool) {
return paramBool(cfg.Params, keys...)
}
func (cfg SinkConfig) ParamBoolDefault(def bool, keys ...string) bool {
if v, ok := cfg.ParamBool(keys...); ok {
return v
}
return def
}
func (cfg SinkConfig) ParamInt(keys ...string) (int, bool) {
return paramInt(cfg.Params, keys...)
}
func (cfg SinkConfig) ParamIntDefault(def int, keys ...string) int {
if v, ok := cfg.ParamInt(keys...); ok {
return v
}
return def
}
func (cfg SinkConfig) ParamDuration(keys ...string) (time.Duration, bool) {
return paramDuration(cfg.Params, keys...)
}
func (cfg SinkConfig) ParamDurationDefault(def time.Duration, keys ...string) time.Duration {
if v, ok := cfg.ParamDuration(keys...); ok {
return v
}
return def
}
func (cfg SinkConfig) ParamStringSlice(keys ...string) ([]string, bool) {
return paramStringSlice(cfg.Params, keys...)
}
// ---- shared implementations (package-private) ----
func paramAny(params map[string]any, keys ...string) (any, bool) {
if params == nil {
return nil, false
}
for _, k := range keys {
v, ok := cfg.Params[k]
v, ok := params[k]
if !ok || v == nil {
continue
}
return v, true
}
return nil, false
}
func paramString(params map[string]any, keys ...string) (string, bool) {
for _, k := range keys {
if params == nil {
return "", false
}
v, ok := params[k]
if !ok || v == nil {
continue
}
@@ -62,11 +187,213 @@ func (cfg SinkConfig) ParamString(keys ...string) (string, bool) {
return "", false
}
// ParamStringDefault returns ParamString(keys...) if present; otherwise it returns def.
// Symmetric helper for sink implementations.
func (cfg SinkConfig) ParamStringDefault(def string, keys ...string) string {
if s, ok := cfg.ParamString(keys...); ok {
return s
func paramBool(params map[string]any, keys ...string) (bool, bool) {
v, ok := paramAny(params, keys...)
if !ok {
return false, false
}
return strings.TrimSpace(def)
switch t := v.(type) {
case bool:
return t, true
case string:
s := strings.TrimSpace(t)
if s == "" {
return false, false
}
parsed, err := strconv.ParseBool(s)
if err != nil {
return false, false
}
return parsed, true
default:
return false, false
}
}
func paramInt(params map[string]any, keys ...string) (int, bool) {
v, ok := paramAny(params, keys...)
if !ok {
return 0, false
}
switch t := v.(type) {
case int:
return t, true
case int8:
return int(t), true
case int16:
return int(t), true
case int32:
return int(t), true
case int64:
return int(t), true
case uint:
return int(t), true
case uint8:
return int(t), true
case uint16:
return int(t), true
case uint32:
return int(t), true
case uint64:
return int(t), true
case float32:
f := float64(t)
if math.IsNaN(f) || math.IsInf(f, 0) {
return 0, false
}
if math.Trunc(f) != f {
return 0, false
}
return int(f), true
case float64:
if math.IsNaN(t) || math.IsInf(t, 0) {
return 0, false
}
if math.Trunc(t) != t {
return 0, false
}
return int(t), true
case string:
s := strings.TrimSpace(t)
if s == "" {
return 0, false
}
n, err := strconv.Atoi(s)
if err != nil {
return 0, false
}
return n, true
default:
return 0, false
}
}
func paramDuration(params map[string]any, keys ...string) (time.Duration, bool) {
v, ok := paramAny(params, keys...)
if !ok {
return 0, false
}
switch t := v.(type) {
case time.Duration:
if t <= 0 {
return 0, false
}
return t, true
case string:
s := strings.TrimSpace(t)
if s == "" {
return 0, false
}
// Numeric strings are interpreted as seconds (see doc comment).
if isAllDigits(s) {
n, err := strconv.Atoi(s)
if err != nil || n <= 0 {
return 0, false
}
return time.Duration(n) * time.Second, true
}
d, err := time.ParseDuration(s)
if err != nil || d <= 0 {
return 0, false
}
return d, true
case int:
if t <= 0 {
return 0, false
}
return time.Duration(t) * time.Second, true
case int64:
if t <= 0 {
return 0, false
}
return time.Duration(t) * time.Second, true
case float64:
if math.IsNaN(t) || math.IsInf(t, 0) || t <= 0 {
return 0, false
}
// Allow fractional seconds.
secs := t * float64(time.Second)
return time.Duration(secs), true
case float32:
f := float64(t)
if math.IsNaN(f) || math.IsInf(f, 0) || f <= 0 {
return 0, false
}
secs := f * float64(time.Second)
return time.Duration(secs), true
default:
return 0, false
}
}
func paramStringSlice(params map[string]any, keys ...string) ([]string, bool) {
v, ok := paramAny(params, keys...)
if !ok {
return nil, false
}
clean := func(items []string) ([]string, bool) {
out := make([]string, 0, len(items))
for _, it := range items {
it = strings.TrimSpace(it)
if it == "" {
continue
}
out = append(out, it)
}
if len(out) == 0 {
return nil, false
}
return out, true
}
switch t := v.(type) {
case []string:
return clean(t)
case []any:
tmp := make([]string, 0, len(t))
for _, it := range t {
s, ok := it.(string)
if !ok {
continue
}
tmp = append(tmp, s)
}
return clean(tmp)
case string:
s := strings.TrimSpace(t)
if s == "" {
return nil, false
}
if strings.Contains(s, ",") {
parts := strings.Split(s, ",")
return clean(parts)
}
return clean([]string{s})
default:
return nil, false
}
}
func isAllDigits(s string) bool {
for _, r := range s {
if r < '0' || r > '9' {
return false
}
}
return len(s) > 0
}
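A driver-side usage sketch; the function, parameter names, and default are hypothetical, while the accessor calls are the ones defined above:

```go
// buildHTTPClient shows typical param access in a hypothetical source driver.
func buildHTTPClient(sc config.SourceConfig) (*http.Client, string, error) {
    url, ok := sc.ParamString("url", "endpoint")
    if !ok {
        return nil, "", fmt.Errorf("source %q: param url/endpoint is required", sc.Name)
    }
    // Numeric values (30 or "30") are read as seconds; "5m"-style strings also work.
    timeout := sc.ParamDurationDefault(30*time.Second, "timeout")
    return &http.Client{Timeout: timeout}, url, nil
}
```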

config/validate_test.go (new file, 164 lines)

@@ -0,0 +1,164 @@
package config
import (
"strings"
"testing"
"time"
)
func TestValidate_RouteKindsEmptyIsAllowed(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
Routes: []RouteConfig{
{Sink: "sink1", Kinds: nil}, // omitted
{Sink: "sink1", Kinds: []string{}}, // explicit empty
},
}
if err := cfg.Validate(); err != nil {
t.Fatalf("expected no error, got: %v", err)
}
}
func TestValidate_RouteKindsRejectsBlankEntries(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{Name: "src1", Driver: "driver1", Every: Duration{Duration: time.Minute}},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
Routes: []RouteConfig{
{Sink: "sink1", Kinds: []string{"observation", " ", "alert"}},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), "routes[0].kinds[1]") {
t.Fatalf("expected error to mention blank kind entry, got: %v", err)
}
}
func TestValidate_SourceModePollRequiresEvery(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{Name: "src1", Driver: "driver1", Mode: SourceModePoll},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), `sources[0].every`) {
t.Fatalf("expected error to mention sources[0].every, got: %v", err)
}
}
func TestValidate_SourceModeStreamRejectsEvery(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{
Name: "src1",
Driver: "driver1",
Mode: SourceModeStream,
Every: Duration{Duration: time.Minute},
},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), `sources[0].every`) {
t.Fatalf("expected error to mention sources[0].every, got: %v", err)
}
}
func TestValidate_SourceModeRejectsUnknownValue(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{
Name: "src1",
Driver: "driver1",
Mode: SourceMode("batch"),
Every: Duration{Duration: time.Minute},
},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), `sources[0].mode`) {
t.Fatalf("expected error to mention sources[0].mode, got: %v", err)
}
}
func TestValidate_SourceKindAndKindsConflict(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{
Name: "src1",
Driver: "driver1",
Every: Duration{Duration: time.Minute},
Kind: "observation",
Kinds: []string{"forecast"},
},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), `sources[0].kind`) {
t.Fatalf("expected error to mention sources[0].kind, got: %v", err)
}
}
func TestValidate_SourceKindsRejectBlankEntries(t *testing.T) {
cfg := &Config{
Sources: []SourceConfig{
{
Name: "src1",
Driver: "driver1",
Every: Duration{Duration: time.Minute},
Kinds: []string{"observation", " "},
},
},
Sinks: []SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
}
err := cfg.Validate()
if err == nil {
t.Fatalf("expected error, got nil")
}
if !strings.Contains(err.Error(), `sources[0].kinds[1]`) {
t.Fatalf("expected error to mention sources[0].kinds[1], got: %v", err)
}
}


@@ -6,10 +6,16 @@ import (
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/logging"
"gitea.maximumdirect.net/ejr/feedkit/pipeline"
"gitea.maximumdirect.net/ejr/feedkit/sinks"
)
// Logger is a printf-style logger used throughout dispatch.
// It is an alias to the shared feedkit logging type so callers can pass
// one function everywhere without type mismatch friction.
type Logger = logging.Logf
type Dispatcher struct {
In <-chan event.Event
@@ -35,8 +41,6 @@ type Route struct {
Kinds map[event.Kind]bool
}
type Logger func(format string, args ...any)
func (d *Dispatcher) Run(ctx context.Context, logf Logger) error {
if d.In == nil {
return fmt.Errorf("dispatcher.Run: In channel is nil")

dispatch/routes.go (new file, 89 lines)

@@ -0,0 +1,89 @@
package dispatch
import (
"fmt"
"strings"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
// CompileRoutes converts config.Config routes into dispatch.Route rules.
//
// Behavior:
// - If cfg.Routes is empty, we default to "all sinks receive all kinds".
// (Implemented as one Route per sink with Kinds == nil.)
// - If a specific route's kinds: is omitted or empty, that route matches ALL kinds.
// (Also compiled as Kinds == nil.)
// - Kind strings are normalized via event.ParseKind (lowercase + trim).
//
// Note: config.Validate() ensures route.sink references a known sink and rejects
// blank kind entries. We re-check a few invariants here anyway so CompileRoutes
// is safe to call even if a daemon chooses not to call Validate().
func CompileRoutes(cfg *config.Config) ([]Route, error) {
if cfg == nil {
return nil, fmt.Errorf("dispatch.CompileRoutes: cfg is nil")
}
if len(cfg.Sinks) == 0 {
return nil, fmt.Errorf("dispatch.CompileRoutes: cfg has no sinks")
}
// Build a quick lookup of sink names (exact match; no normalization).
sinkNames := make(map[string]bool, len(cfg.Sinks))
for i, s := range cfg.Sinks {
if strings.TrimSpace(s.Name) == "" {
return nil, fmt.Errorf("dispatch.CompileRoutes: sinks[%d].name is empty", i)
}
sinkNames[s.Name] = true
}
// Default routing: everything to every sink.
if len(cfg.Routes) == 0 {
out := make([]Route, 0, len(cfg.Sinks))
for _, s := range cfg.Sinks {
out = append(out, Route{
SinkName: s.Name,
Kinds: nil, // nil/empty map means "all kinds"
})
}
return out, nil
}
out := make([]Route, 0, len(cfg.Routes))
for i, r := range cfg.Routes {
sink := r.Sink
if strings.TrimSpace(sink) == "" {
return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink is required", i)
}
if !sinkNames[sink] {
return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].sink references unknown sink %q", i, sink)
}
// If kinds is omitted/empty, this route matches all kinds.
if len(r.Kinds) == 0 {
out = append(out, Route{
SinkName: sink,
Kinds: nil,
})
continue
}
kinds := make(map[event.Kind]bool, len(r.Kinds))
for j, raw := range r.Kinds {
k, err := event.ParseKind(raw)
if err != nil {
return nil, fmt.Errorf("dispatch.CompileRoutes: routes[%d].kinds[%d]: %w", i, j, err)
}
kinds[k] = true
}
out = append(out, Route{
SinkName: sink,
Kinds: kinds,
})
}
return out, nil
}
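A short usage sketch (a compiled route with nil `Kinds` matches all kinds, per the comments above):

```go
// cfg is a loaded *config.Config.
routes, err := dispatch.CompileRoutes(cfg)
if err != nil {
    log.Fatalf("compile routes: %v", err)
}
for _, r := range routes {
    if len(r.Kinds) == 0 {
        log.Printf("route -> sink %q (all kinds)", r.SinkName)
        continue
    }
    log.Printf("route -> sink %q (%d kinds)", r.SinkName, len(r.Kinds))
}
```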

dispatch/routes_test.go (new file, 67 lines)

@@ -0,0 +1,67 @@
package dispatch
import (
"testing"
"gitea.maximumdirect.net/ejr/feedkit/config"
)
func TestCompileRoutes_DefaultIsAllSinksAllKinds(t *testing.T) {
cfg := &config.Config{
Sinks: []config.SinkConfig{
{Name: "a", Driver: "stdout"},
{Name: "b", Driver: "stdout"},
},
// Routes omitted => default
}
routes, err := CompileRoutes(cfg)
if err != nil {
t.Fatalf("CompileRoutes error: %v", err)
}
if len(routes) != 2 {
t.Fatalf("expected 2 routes, got %d", len(routes))
}
// Order should match cfg.Sinks order (deterministic).
if routes[0].SinkName != "a" || routes[1].SinkName != "b" {
t.Fatalf("unexpected route order: %+v", routes)
}
for _, r := range routes {
if len(r.Kinds) != 0 {
t.Fatalf("expected nil/empty kinds for default routes, got: %+v", r.Kinds)
}
}
}
func TestCompileRoutes_EmptyKindsMeansAllKinds(t *testing.T) {
cfg := &config.Config{
Sinks: []config.SinkConfig{
{Name: "sink1", Driver: "stdout"},
},
Routes: []config.RouteConfig{
{Sink: "sink1"}, // omitted kinds
{Sink: "sink1", Kinds: nil}, // explicit nil
{Sink: "sink1", Kinds: []string{}}, // explicit empty
},
}
routes, err := CompileRoutes(cfg)
if err != nil {
t.Fatalf("CompileRoutes error: %v", err)
}
if len(routes) != 3 {
t.Fatalf("expected 3 routes, got %d", len(routes))
}
for i, r := range routes {
if r.SinkName != "sink1" {
t.Fatalf("route[%d] unexpected sink: %q", i, r.SinkName)
}
if len(r.Kinds) != 0 {
t.Fatalf("route[%d] expected nil/empty kinds (match all), got: %+v", i, r.Kinds)
}
}
}

doc.go (new file, 116 lines)

@@ -0,0 +1,116 @@
// Package feedkit provides domain-agnostic plumbing for feed-processing daemons.
//
// A feed daemon ingests upstream input, turns it into event.Event values, applies
// optional processing, and emits to sinks.
//
// Conceptual flow:
//
// Collect -> Process (optional stages, including dedupe + normalize) -> Route -> Emit
//
// In feedkit this maps to:
//
// Collect: sources + scheduler
// Process: pipeline + processors + processors/dedupe + processors/normalize (optional stages)
// Route: dispatch
// Emit: sinks
// Config: config
//
// feedkit intentionally does not define domain payload schemas or domain-specific
// validation rules. Those belong in each concrete daemon.
//
// Public packages
//
// - config
// YAML config loading/validation (strict decode + domain-agnostic checks).
//
// SourceConfig supports both polling and streaming sources:
//
// - mode: "poll" | "stream" | omitted (auto by driver type)
//
// - every: poll interval (required for mode="poll")
//
// - kinds: optional expected emitted kinds
//
// - kind: legacy singular fallback
//
// - params: driver-specific settings
//
// - event
// Domain-agnostic event envelope (ID, Kind, Source, EmittedAt, Schema, Payload).
//
// - sources
// Source abstractions and source-driver registry.
//
// There are two source interfaces:
//
// - PollSource: Poll(ctx) ([]event.Event, error)
//
// - StreamSource: Run(ctx, out) error
//
// Both share Input{Name()}. A source may emit 0..N events per poll/run step,
// and may emit multiple event kinds.
//
// - scheduler
// Runs one goroutine per job:
//
// - PollSource jobs run on Every (+ jitter)
//
// - StreamSource jobs run continuously
//
// - pipeline
// Processor chain between scheduler and dispatch.
// Processors can transform, drop, or reject events.
//
// - processors
// Generic processor interface and named factory registry for wiring chains.
//
// - processors/dedupe
// Built-in in-memory LRU dedupe processor keyed by Event.ID.
//
// - processors/normalize
// Concrete pipeline processor for raw->canonical mapping.
// If no normalizer matches, the event passes through unchanged by default.
//
// - dispatch
// Routes events to sinks and isolates slow sinks via per-sink queues/workers.
//
// - sinks
// Sink abstractions + sink registry.
// Built-ins include stdout, NATS, and Postgres. For Postgres, downstream
// code registers table schemas/mappers while feedkit manages DDL, writes,
// and optional prune helpers.
//
// Typical wiring (daemon main.go)
//
// 1. Load config.
// 2. Register source drivers and build sources from config.Sources.
// 3. Register sink drivers and build sinks from config.Sinks.
// 4. Compile routes.
// 5. Start scheduler (sources -> bus) and dispatcher (bus -> pipeline -> sinks).
//
// Sketch:
//
// cfg, _ := config.Load("config.yml")
// srcReg := sources.NewRegistry()
// // domain registers poll/stream drivers...
//
// var jobs []scheduler.Job
// for _, sc := range cfg.Sources {
// src, _ := srcReg.BuildInput(sc)
// jobs = append(jobs, scheduler.Job{
// Source: src,
// Every: sc.Every.Duration,
// })
// }
//
// bus := make(chan event.Event, 256)
// s := &scheduler.Scheduler{Jobs: jobs, Out: bus, Logf: logf}
// // start dispatcher similarly...
//
// # Context and cancellation
//
// All blocking work should honor context cancellation:
// - source polling/streaming I/O
// - sink consumption
// - any expensive processor work
package feedkit

go.mod (+14)

@@ -2,4 +2,16 @@ module gitea.maximumdirect.net/ejr/feedkit
go 1.22
require gopkg.in/yaml.v3 v3.0.1
require (
github.com/lib/pq v1.10.9
github.com/nats-io/nats.go v1.34.0
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/klauspost/compress v1.17.2 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
golang.org/x/crypto v0.18.0 // indirect
golang.org/x/sys v0.16.0 // indirect
)

go.sum (+15)

@@ -1,3 +1,18 @@
github.com/klauspost/compress v1.17.2 h1:RlWWUY/Dr4fL8qk9YG7DTZ7PDgME2V4csBXA8L/ixi4=
github.com/klauspost/compress v1.17.2/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/nats-io/nats.go v1.34.0 h1:fnxnPCNiwIG5w08rlMcEKTUw4AV/nKyGCOJE8TdhSPk=
github.com/nats-io/nats.go v1.34.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
golang.org/x/crypto v0.18.0 h1:PGVlW0xEltQnzFZ55hkuX5+KLyrMYhHld1YHO4AKcdc=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

logging/logging.go (new file, 8 lines)

@@ -0,0 +1,8 @@
package logging
// Logf is the shared printf-style logger signature used across feedkit.
//
// Keeping this in one place avoids the "scheduler.Logger vs dispatch.Logger"
// friction and makes it trivial for downstream apps to pass a single log
// function throughout the system.
type Logf func(format string, args ...any)


@@ -1,5 +0,0 @@
package pipeline
// Placeholder for dedupe processor:
// - key by Event.ID or computed key
// - in-memory store first; later optional Postgres-backed


@@ -5,15 +5,11 @@ import (
"fmt"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/processors"
)
// Processor can mutate/drop events (dedupe, rate-limit, normalization tweaks).
type Processor interface {
Process(ctx context.Context, in event.Event) (out *event.Event, err error)
}
type Pipeline struct {
Processors []Processor
Processors []processors.Processor
}
func (p *Pipeline) Process(ctx context.Context, e event.Event) (*event.Event, error) {

pipeline/pipeline_test.go (new file, 115 lines)

@@ -0,0 +1,115 @@
package pipeline
import (
"context"
"fmt"
"strings"
"testing"
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/processors"
)
type procFunc func(context.Context, event.Event) (*event.Event, error)
func (f procFunc) Process(ctx context.Context, in event.Event) (*event.Event, error) {
return f(ctx, in)
}
func TestPipelineProcessSequentialOrder(t *testing.T) {
var gotOrder []string
p := &Pipeline{
Processors: []processors.Processor{
procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
gotOrder = append(gotOrder, "first")
out := in
out.Schema = "stage.one.v1"
return &out, nil
}),
procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
gotOrder = append(gotOrder, "second")
if in.Schema != "stage.one.v1" {
return nil, fmt.Errorf("expected schema from first stage, got %q", in.Schema)
}
out := in
out.Schema = "stage.two.v1"
return &out, nil
}),
},
}
out, err := p.Process(context.Background(), validEvent())
if err != nil {
t.Fatalf("Process error: %v", err)
}
if out == nil {
t.Fatalf("expected output event, got nil")
}
if out.Schema != "stage.two.v1" {
t.Fatalf("unexpected output schema: %q", out.Schema)
}
if strings.Join(gotOrder, ",") != "first,second" {
t.Fatalf("unexpected processor order: %v", gotOrder)
}
}
func TestPipelineProcessInvalidInput(t *testing.T) {
p := &Pipeline{}
_, err := p.Process(context.Background(), event.Event{})
if err == nil {
t.Fatalf("expected input validation error")
}
if !strings.Contains(err.Error(), "invalid input event") {
t.Fatalf("unexpected error: %v", err)
}
}
func TestPipelineProcessDrop(t *testing.T) {
p := &Pipeline{
Processors: []processors.Processor{
procFunc(func(context.Context, event.Event) (*event.Event, error) {
return nil, nil
}),
},
}
out, err := p.Process(context.Background(), validEvent())
if err != nil {
t.Fatalf("Process error: %v", err)
}
if out != nil {
t.Fatalf("expected nil output for dropped event, got %#v", out)
}
}
func TestPipelineProcessInvalidOutput(t *testing.T) {
p := &Pipeline{
Processors: []processors.Processor{
procFunc(func(_ context.Context, in event.Event) (*event.Event, error) {
out := in
out.Payload = nil
return &out, nil
}),
},
}
_, err := p.Process(context.Background(), validEvent())
if err == nil {
t.Fatalf("expected output validation error")
}
if !strings.Contains(err.Error(), "invalid output event") {
t.Fatalf("unexpected error: %v", err)
}
}
func validEvent() event.Event {
return event.Event{
ID: "evt-1",
Kind: event.Kind("observation"),
Source: "source-1",
EmittedAt: time.Now().UTC(),
Payload: map[string]any{"ok": true},
}
}

processors/dedupe/doc.go (new file, 28 lines)

@@ -0,0 +1,28 @@
// Package dedupe provides a default in-memory LRU deduplication processor.
//
// The processor keys strictly by event.Event.ID:
// - first-seen IDs pass through
// - repeated IDs are dropped
//
// The in-memory seen-ID set is bounded by a required maxEntries capacity.
// When capacity is exceeded, the least recently used ID is evicted.
//
// Typical registry wiring:
//
// ```go
// reg := processors.NewRegistry()
// reg.Register("dedupe", dedupe.Factory(10_000))
//
// reg.Register("normalize", func() (processors.Processor, error) {
// return normalize.NewProcessor(myNormalizers, false), nil
// })
//
// chain, err := reg.BuildChain([]string{"dedupe", "normalize"})
//
// if err != nil {
// // handle wiring error
// }
//
// p := &pipeline.Pipeline{Processors: chain}
// ```
package dedupe


@@ -0,0 +1,89 @@
package dedupe
import (
"container/list"
"context"
"fmt"
"strings"
"sync"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/processors"
)
// Processor drops duplicate events by Event.ID using an in-memory LRU.
type Processor struct {
maxEntries int
mu sync.Mutex
order *list.List // most-recent at front, least-recent at back
byID map[string]*list.Element // id -> list element (element.Value is string id)
}
var _ processors.Processor = (*Processor)(nil)
// NewProcessor constructs a dedupe processor with a required max entry count.
func NewProcessor(maxEntries int) (*Processor, error) {
if maxEntries <= 0 {
return nil, fmt.Errorf("dedupe: maxEntries must be > 0, got %d", maxEntries)
}
return &Processor{
maxEntries: maxEntries,
order: list.New(),
byID: make(map[string]*list.Element, maxEntries),
}, nil
}
// Factory returns a processors.Factory that constructs Processor instances.
func Factory(maxEntries int) processors.Factory {
return func() (processors.Processor, error) {
return NewProcessor(maxEntries)
}
}
// Process implements processors.Processor.
func (p *Processor) Process(_ context.Context, in event.Event) (*event.Event, error) {
if p == nil {
return nil, fmt.Errorf("dedupe: processor is nil")
}
if p.maxEntries <= 0 {
return nil, fmt.Errorf("dedupe: processor maxEntries must be > 0")
}
id := strings.TrimSpace(in.ID)
if id == "" {
return nil, fmt.Errorf("dedupe: event ID is required")
}
p.mu.Lock()
if p.order == nil || p.byID == nil {
p.mu.Unlock()
return nil, fmt.Errorf("dedupe: processor is not initialized")
}
if elem, exists := p.byID[id]; exists {
p.order.MoveToFront(elem)
p.mu.Unlock()
return nil, nil
}
elem := p.order.PushFront(id)
p.byID[id] = elem
if p.order.Len() > p.maxEntries {
oldest := p.order.Back()
if oldest != nil {
p.order.Remove(oldest)
if oldestID, ok := oldest.Value.(string); ok {
delete(p.byID, oldestID)
}
}
}
p.mu.Unlock()
out := in
return &out, nil
}


@@ -0,0 +1,163 @@
package dedupe
import (
"context"
"strings"
"testing"
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/processors"
)
func TestNewProcessorValidation(t *testing.T) {
t.Run("rejects non-positive maxEntries", func(t *testing.T) {
for _, maxEntries := range []int{0, -1} {
p, err := NewProcessor(maxEntries)
if err == nil {
t.Fatalf("expected error for maxEntries=%d, got nil", maxEntries)
}
if p != nil {
t.Fatalf("expected nil processor for maxEntries=%d", maxEntries)
}
if !strings.Contains(err.Error(), "maxEntries") {
t.Fatalf("unexpected error: %v", err)
}
}
})
t.Run("accepts positive maxEntries", func(t *testing.T) {
p, err := NewProcessor(1)
if err != nil {
t.Fatalf("NewProcessor error: %v", err)
}
if p == nil {
t.Fatalf("expected processor, got nil")
}
})
}
func TestProcessorFirstSeenAndDuplicate(t *testing.T) {
p, err := NewProcessor(8)
if err != nil {
t.Fatalf("NewProcessor error: %v", err)
}
ctx := context.Background()
first := testEvent("evt-1")
out, err := p.Process(ctx, first)
if err != nil {
t.Fatalf("Process first error: %v", err)
}
if out == nil {
t.Fatalf("expected first event to pass through")
}
if out.ID != first.ID {
t.Fatalf("expected unchanged ID %q, got %q", first.ID, out.ID)
}
out, err = p.Process(ctx, first)
if err != nil {
t.Fatalf("Process duplicate error: %v", err)
}
if out != nil {
t.Fatalf("expected duplicate to be dropped, got %#v", out)
}
out, err = p.Process(ctx, testEvent("evt-2"))
if err != nil {
t.Fatalf("Process second unique error: %v", err)
}
if out == nil {
t.Fatalf("expected second unique event to pass through")
}
}
func TestProcessorLRUEvictionAndPromotion(t *testing.T) {
p, err := NewProcessor(2)
if err != nil {
t.Fatalf("NewProcessor error: %v", err)
}
ctx := context.Background()
mustPass(t, p, ctx, "a")
mustPass(t, p, ctx, "b")
mustDrop(t, p, ctx, "a") // promote "a" so "b" becomes least-recently-used
mustPass(t, p, ctx, "c") // evicts "b"
mustDrop(t, p, ctx, "a") // "a" should still be tracked after promotion
mustPass(t, p, ctx, "b") // "b" was evicted, so now it passes again
}
func TestProcessorRejectsBlankID(t *testing.T) {
p, err := NewProcessor(4)
if err != nil {
t.Fatalf("NewProcessor error: %v", err)
}
in := testEvent(" ")
out, err := p.Process(context.Background(), in)
if err == nil {
t.Fatalf("expected error for blank ID")
}
if out != nil {
t.Fatalf("expected nil output on error, got %#v", out)
}
if !strings.Contains(err.Error(), "event ID is required") {
t.Fatalf("unexpected error: %v", err)
}
}
func TestFactoryWithRegistry(t *testing.T) {
r := processors.NewRegistry()
r.Register("dedupe", Factory(3))
p, err := r.Build("dedupe")
if err != nil {
t.Fatalf("Build error: %v", err)
}
if p == nil {
t.Fatalf("expected processor, got nil")
}
out, err := p.Process(context.Background(), testEvent("evt-factory-1"))
if err != nil {
t.Fatalf("Process error: %v", err)
}
if out == nil {
t.Fatalf("expected first event to pass through")
}
}
func mustPass(t *testing.T, p *Processor, ctx context.Context, id string) {
t.Helper()
out, err := p.Process(ctx, testEvent(id))
if err != nil {
t.Fatalf("expected pass for id=%q, got error: %v", id, err)
}
if out == nil {
t.Fatalf("expected pass for id=%q, got drop", id)
}
}
func mustDrop(t *testing.T, p *Processor, ctx context.Context, id string) {
t.Helper()
out, err := p.Process(ctx, testEvent(id))
if err != nil {
t.Fatalf("expected drop for id=%q, got error: %v", id, err)
}
if out != nil {
t.Fatalf("expected drop for id=%q, got output", id)
}
}
func testEvent(id string) event.Event {
return event.Event{
ID: id,
Kind: event.Kind("observation"),
Source: "source-1",
EmittedAt: time.Now().UTC(),
Payload: map[string]any{"ok": true},
}
}

processors/doc.go (new file, 24 lines)

@@ -0,0 +1,24 @@
// Package processors defines feedkit's generic processor abstraction and registry.
//
// Processors are optional pipeline stages that can transform, drop, or reject
// events before dispatch to sinks.
//
// Registry provides name-based construction so daemons can assemble processor
// chains without embedding switch statements in wiring code.
//
// Example:
//
// reg := processors.NewRegistry()
// reg.Register("dedupe", dedupe.Factory(10_000))
// reg.Register("normalize", func() (processors.Processor, error) {
// // import "gitea.maximumdirect.net/ejr/feedkit/processors/normalize"
// return normalize.NewProcessor(myNormalizers, false), nil
// })
//
// chain, err := reg.BuildChain([]string{"dedupe", "normalize"})
// if err != nil {
// // handle wiring error
// }
//
// p := &pipeline.Pipeline{Processors: chain}
package processors


@@ -0,0 +1,17 @@
// Package normalize provides a concrete normalization processor for feedkit pipelines.
//
// Motivation:
// Many daemons have sources that:
// 1. fetch raw upstream data (often JSON), and
// 2. transform it into a domain's normalized payload format.
//
// Doing both steps inside Source.Poll works, but tends to make sources large and
// encourages duplication (unit conversions, common mapping helpers, etc.).
//
// This package lets a source emit a "raw" event (e.g., Schema="raw.openweather.current.v1",
// Payload=json.RawMessage), and then a normalize.Processor can convert it into a
// normalized event (e.g., Schema="weather.observation.v1", Payload=WeatherObservation{}).
//
// Key property: normalization is optional.
// If no Normalizer matches an event, Processor passes it through unchanged by default.
package normalize


@@ -0,0 +1,76 @@
package normalize
import (
"context"
"fmt"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
// Normalizer converts one event shape into another.
//
// A Normalizer is typically domain-owned code (weatherfeeder/newsfeeder/...)
// that knows how to interpret a specific upstream payload and produce a
// normalized payload.
//
// Normalizers are selected via Match(). The matching strategy is intentionally
// flexible: implementations may match on Schema, Kind, Source, or any other
// Event fields.
type Normalizer interface {
// Match reports whether this normalizer applies to the given event.
//
// Common patterns:
// - match on e.Schema (recommended for versioning)
// - match on e.Source (useful if Schema is empty)
// - match on (e.Kind + e.Source), etc.
Match(e event.Event) bool
// Normalize transforms the incoming event into a new (or modified) event.
//
// Return values:
// - (out, nil) where out != nil: emit the normalized event
// - (nil, nil): drop the event (treat as policy drop)
// - (nil, err): fail the pipeline
//
// Note: If you simply want to pass the event through unchanged, return &in.
Normalize(ctx context.Context, in event.Event) (*event.Event, error)
}
// Func is an ergonomic adapter that lets you define a Normalizer with functions.
//
// Example:
//
// n := normalize.Func{
// MatchFn: func(e event.Event) bool { return e.Schema == "raw.openweather.current.v1" },
// NormalizeFn: func(ctx context.Context, in event.Event) (*event.Event, error) {
// // ... map in.Payload -> normalized payload ...
// },
// }
type Func struct {
MatchFn func(e event.Event) bool
NormalizeFn func(ctx context.Context, in event.Event) (*event.Event, error)
// Optional: helps produce nicer panic/error messages if something goes wrong.
Name string
}
func (f Func) Match(e event.Event) bool {
if f.MatchFn == nil {
return false
}
return f.MatchFn(e)
}
func (f Func) Normalize(ctx context.Context, in event.Event) (*event.Event, error) {
if f.NormalizeFn == nil {
return nil, fmt.Errorf("normalize.Func(%s): NormalizeFn is nil", f.safeName())
}
return f.NormalizeFn(ctx, in)
}
func (f Func) safeName() string {
if f.Name == "" {
return "<unnamed>"
}
return f.Name
}
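To make the doc-comment example concrete, a complete normalizer might look like the sketch below. The schema names come from the package doc; the payload shape and the Kelvin-to-Celsius mapping are illustrative, and it assumes the raw event carries a `json.RawMessage` payload:

```go
var openWeatherCurrent = normalize.Func{
    Name:    "openweather-current",
    MatchFn: func(e event.Event) bool { return e.Schema == "raw.openweather.current.v1" },
    NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
        raw, ok := in.Payload.(json.RawMessage)
        if !ok {
            return nil, fmt.Errorf("unexpected payload type %T", in.Payload)
        }
        var body struct {
            Main struct {
                Temp float64 `json:"temp"` // Kelvin in this hypothetical feed
            } `json:"main"`
        }
        if err := json.Unmarshal(raw, &body); err != nil {
            return nil, err
        }
        out := in
        out.Schema = "weather.observation.v1"
        out.Payload = map[string]any{"temp_c": body.Main.Temp - 273.15}
        return &out, nil
    },
}
```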


@@ -0,0 +1,57 @@
package normalize
import (
"context"
"fmt"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
// Processor applies ordered normalization rules to pipeline events.
//
// Selection rule:
// - iterate in Normalizers order
// - the first Normalizer whose Match returns true is applied
//
// If no normalizer matches, the default behavior is pass-through.
type Processor struct {
Normalizers []Normalizer
// If true, events that do not match any normalizer cause an error.
// Default is false (pass-through).
RequireMatch bool
}
// NewProcessor constructs a normalization processor from an ordered normalizer list.
func NewProcessor(normalizers []Normalizer, requireMatch bool) Processor {
return Processor{
Normalizers: append([]Normalizer(nil), normalizers...),
RequireMatch: requireMatch,
}
}
// Process implements processors.Processor.
func (p Processor) Process(ctx context.Context, in event.Event) (*event.Event, error) {
for _, n := range p.Normalizers {
if n == nil {
continue
}
if !n.Match(in) {
continue
}
out, err := n.Normalize(ctx, in)
if err != nil {
return nil, fmt.Errorf("normalize: normalizer failed: %w", err)
}
return out, nil
}
if p.RequireMatch {
return nil, fmt.Errorf("normalize: no normalizer matched event (id=%s kind=%s source=%s schema=%q)",
in.ID, in.Kind, in.Source, in.Schema)
}
out := in
return &out, nil
}


@@ -0,0 +1,139 @@
package normalize
import (
"context"
"errors"
"strings"
"testing"
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
func TestProcessorFirstMatchWins(t *testing.T) {
var firstCalls, secondCalls int
p := NewProcessor([]Normalizer{
Func{
MatchFn: func(event.Event) bool { return true },
NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
firstCalls++
out := in
out.Schema = "normalized.first.v1"
return &out, nil
},
},
Func{
MatchFn: func(event.Event) bool { return true },
NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
secondCalls++
out := in
out.Schema = "normalized.second.v1"
return &out, nil
},
},
}, false)
out, err := p.Process(context.Background(), testEvent())
if err != nil {
t.Fatalf("Process error: %v", err)
}
if out == nil {
t.Fatalf("expected output event, got nil")
}
if out.Schema != "normalized.first.v1" {
t.Fatalf("unexpected schema: %q", out.Schema)
}
if firstCalls != 1 {
t.Fatalf("expected first normalizer called once, got %d", firstCalls)
}
if secondCalls != 0 {
t.Fatalf("expected second normalizer skipped, got %d calls", secondCalls)
}
}
func TestProcessorNoMatchPassThroughAndRequireMatch(t *testing.T) {
in := testEvent()
in.Schema = "raw.schema.v1"
passThrough := NewProcessor([]Normalizer{
Func{
MatchFn: func(event.Event) bool { return false },
NormalizeFn: func(_ context.Context, in event.Event) (*event.Event, error) {
out := in
out.Schema = "should.not.run"
return &out, nil
},
},
}, false)
out, err := passThrough.Process(context.Background(), in)
if err != nil {
t.Fatalf("pass-through Process error: %v", err)
}
if out == nil {
t.Fatalf("expected pass-through output event, got nil")
}
if out.Schema != "raw.schema.v1" {
t.Fatalf("expected unchanged schema, got %q", out.Schema)
}
required := NewProcessor(nil, true)
_, err = required.Process(context.Background(), in)
if err == nil {
t.Fatalf("expected require-match error")
}
if !strings.Contains(err.Error(), "no normalizer matched") {
t.Fatalf("unexpected error: %v", err)
}
}
func TestProcessorDropAndErrorPropagation(t *testing.T) {
t.Run("drop", func(t *testing.T) {
p := NewProcessor([]Normalizer{
Func{
MatchFn: func(event.Event) bool { return true },
NormalizeFn: func(context.Context, event.Event) (*event.Event, error) {
return nil, nil
},
},
}, false)
out, err := p.Process(context.Background(), testEvent())
if err != nil {
t.Fatalf("Process error: %v", err)
}
if out != nil {
t.Fatalf("expected nil output for dropped event, got %#v", out)
}
})
t.Run("error", func(t *testing.T) {
p := NewProcessor([]Normalizer{
Func{
MatchFn: func(event.Event) bool { return true },
NormalizeFn: func(context.Context, event.Event) (*event.Event, error) {
return nil, errors.New("map failed")
},
},
}, false)
_, err := p.Process(context.Background(), testEvent())
if err == nil {
t.Fatalf("expected error")
}
if !strings.Contains(err.Error(), "normalizer failed") {
t.Fatalf("unexpected error: %v", err)
}
})
}
func testEvent() event.Event {
return event.Event{
ID: "evt-normalize-1",
Kind: event.Kind("observation"),
Source: "source-1",
EmittedAt: time.Now().UTC(),
Payload: map[string]any{"x": 1},
}
}

processors/processor.go (new file, 15 lines)

@@ -0,0 +1,15 @@
package processors
import (
"context"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
// Processor can mutate/drop events (dedupe, rate-limit, normalization tweaks).
type Processor interface {
Process(ctx context.Context, in event.Event) (out *event.Event, err error)
}
// Factory constructs a configured Processor instance.
type Factory func() (Processor, error)
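
As an illustration of this contract, here is a minimal Processor sketch (the kindFilter type and its allow set are hypothetical, not part of the package):

// kindFilter drops events whose Kind is absent from its allow set and passes
// everything else through unchanged. Returning (nil, nil) means "drop".
type kindFilter struct {
	allow map[event.Kind]bool
}

func (f kindFilter) Process(_ context.Context, in event.Event) (*event.Event, error) {
	if !f.allow[in.Kind] {
		return nil, nil // dropped: later stages and sinks never see this event
	}
	out := in // copy so the caller's value is never aliased
	return &out, nil
}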

processors/registry.go

@@ -0,0 +1,71 @@
package processors
import (
"fmt"
"strings"
)
type Registry struct {
byDriver map[string]Factory
}
func NewRegistry() *Registry {
return &Registry{byDriver: map[string]Factory{}}
}
// Register associates a processor driver name with a factory.
//
// Register panics for empty driver names, nil factories, and duplicates.
func (r *Registry) Register(driver string, f Factory) {
if r == nil {
panic("processors.Registry.Register: registry cannot be nil")
}
driver = strings.TrimSpace(driver)
if driver == "" {
panic("processors.Registry.Register: driver cannot be empty")
}
if f == nil {
panic(fmt.Sprintf("processors.Registry.Register: factory cannot be nil (driver=%q)", driver))
}
if r.byDriver == nil {
r.byDriver = map[string]Factory{}
}
if _, exists := r.byDriver[driver]; exists {
panic(fmt.Sprintf("processors.Registry.Register: driver %q already registered", driver))
}
r.byDriver[driver] = f
}
// Build constructs a Processor by driver name.
func (r *Registry) Build(driver string) (Processor, error) {
if r == nil {
return nil, fmt.Errorf("processors registry is nil")
}
driver = strings.TrimSpace(driver)
f, ok := r.byDriver[driver]
if !ok {
return nil, fmt.Errorf("unknown processor driver: %q", driver)
}
p, err := f()
if err != nil {
return nil, fmt.Errorf("build processor %q: %w", driver, err)
}
if p == nil {
return nil, fmt.Errorf("build processor %q: factory returned nil processor", driver)
}
return p, nil
}
// BuildChain constructs an ordered processor chain from a driver list.
func (r *Registry) BuildChain(drivers []string) ([]Processor, error) {
out := make([]Processor, 0, len(drivers))
for i, driver := range drivers {
p, err := r.Build(driver)
if err != nil {
return nil, fmt.Errorf("build processor chain[%d] (%q): %w", i, strings.TrimSpace(driver), err)
}
out = append(out, p)
}
return out, nil
}
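
A wiring sketch, reusing the hypothetical kindFilter from above (the "kind_filter" driver name is likewise illustrative): register drivers once at startup, then build the configured chain in config order.

r := processors.NewRegistry()
r.Register("kind_filter", func() (processors.Processor, error) {
	return kindFilter{allow: map[event.Kind]bool{"observation": true}}, nil
})
chain, err := r.BuildChain([]string{"kind_filter"})
if err != nil {
	panic(err) // wiring error: fail fast at startup
}
_ = chain // the ordered chain is ready to hand to the pipeline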

processors/registry_test.go

@@ -0,0 +1,100 @@
package processors
import (
"context"
"errors"
"strings"
"testing"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
type testProcessor struct {
name string
}
func (p testProcessor) Process(context.Context, event.Event) (*event.Event, error) {
return nil, nil
}
func TestRegistryRegisterValidation(t *testing.T) {
t.Run("empty driver panics", func(t *testing.T) {
r := NewRegistry()
assertPanics(t, func() {
r.Register(" ", func() (Processor, error) { return testProcessor{name: "x"}, nil })
})
})
t.Run("nil factory panics", func(t *testing.T) {
r := NewRegistry()
assertPanics(t, func() {
r.Register("normalize", nil)
})
})
t.Run("duplicate driver panics", func(t *testing.T) {
r := NewRegistry()
r.Register("normalize", func() (Processor, error) { return testProcessor{name: "a"}, nil })
assertPanics(t, func() {
r.Register("normalize", func() (Processor, error) { return testProcessor{name: "b"}, nil })
})
})
}
func TestRegistryBuildUnknownDriver(t *testing.T) {
r := NewRegistry()
_, err := r.Build("does_not_exist")
if err == nil {
t.Fatalf("expected error for unknown driver")
}
if !strings.Contains(err.Error(), "unknown processor driver") {
t.Fatalf("unexpected error: %v", err)
}
}
func TestRegistryBuildChainPreservesOrder(t *testing.T) {
r := NewRegistry()
r.Register("first", func() (Processor, error) { return testProcessor{name: "first"}, nil })
r.Register("second", func() (Processor, error) { return testProcessor{name: "second"}, nil })
chain, err := r.BuildChain([]string{"first", "second"})
if err != nil {
t.Fatalf("BuildChain error: %v", err)
}
if len(chain) != 2 {
t.Fatalf("expected 2 processors, got %d", len(chain))
}
p0, ok := chain[0].(testProcessor)
if !ok || p0.name != "first" {
t.Fatalf("unexpected chain[0]: %#v", chain[0])
}
p1, ok := chain[1].(testProcessor)
if !ok || p1.name != "second" {
t.Fatalf("unexpected chain[1]: %#v", chain[1])
}
}
func TestRegistryBuildChainIndexedFailure(t *testing.T) {
r := NewRegistry()
r.Register("ok", func() (Processor, error) { return testProcessor{name: "ok"}, nil })
r.Register("broken", func() (Processor, error) { return nil, errors.New("boom") })
_, err := r.BuildChain([]string{"ok", "broken"})
if err == nil {
t.Fatalf("expected error")
}
if !strings.Contains(err.Error(), "chain[1]") {
t.Fatalf("expected indexed error, got: %v", err)
}
}
func assertPanics(t *testing.T, fn func()) {
t.Helper()
defer func() {
if recover() == nil {
t.Fatalf("expected panic")
}
}()
fn()
}


@@ -8,31 +8,48 @@ import (
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
"gitea.maximumdirect.net/ejr/feedkit/logging"
"gitea.maximumdirect.net/ejr/feedkit/sources"
)
// Logger is a printf-style logger used throughout the scheduler package.
// It is an alias for the shared feedkit logging type so callers can pass
// one function everywhere without type-mismatch friction.
type Logger = logging.Logf
// Job describes one scheduler task.
//
// A Job may be backed by either:
// - a polling source (sources.PollSource): uses Every + jitter and calls Poll()
// - a stream source (sources.StreamSource): ignores Every and calls Run()
//
// Jitter behavior:
// - For polling sources: Jitter is applied at startup and before each poll tick.
// - For stream sources: Jitter is applied once at startup only (optional; useful to avoid
// reconnect storms when many instances start together).
type Job struct {
-Source sources.Source
+Source sources.Input
Every time.Duration
// Jitter is the maximum additional delay added before each poll.
// Example: if Every=15m and Jitter=30s, each poll will occur at:
// tick time + random(0..30s)
//
-// If Jitter == 0, we compute a default jitter based on Every.
+// If Jitter == 0 for polling sources, we compute a default jitter based on Every.
//
// For stream sources, Jitter is treated as *startup jitter only*.
Jitter time.Duration
}
-type Logger func(format string, args ...any)
type Scheduler struct {
Jobs []Job
Out chan<- event.Event
Logf Logger
}
-// Run starts one polling goroutine per job.
-// Each job runs on its own interval and emits 0..N events per poll.
+// Run starts one goroutine per job.
+// Poll jobs run on their own interval and emit 0..N events per poll.
+// Stream jobs run continuously and emit events as they arrive.
func (s *Scheduler) Run(ctx context.Context) error {
if s.Out == nil {
return fmt.Errorf("scheduler.Run: Out channel is nil")
@@ -55,17 +72,48 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
s.logf("scheduler: job has nil source")
return
}
-if job.Every <= 0 {
-s.logf("scheduler: job %s has invalid interval", job.Source.Name())
// Stream sources: event-driven.
if ss, ok := job.Source.(sources.StreamSource); ok {
s.runStream(ctx, job, ss)
return
}
// Poll sources: time-based.
ps, ok := job.Source.(sources.PollSource)
if !ok {
s.logf("scheduler: source %T (%s) implements neither Poll() nor Run()", job.Source, job.Source.Name())
return
}
if job.Every <= 0 {
s.logf("scheduler: polling job %q missing/invalid interval (sources[].every)", ps.Name())
return
}
s.runPoller(ctx, job, ps)
}
func (s *Scheduler) runStream(ctx context.Context, job Job, src sources.StreamSource) {
// Optional startup jitter: helps avoid reconnect storms if many daemons start at once.
if job.Jitter > 0 {
rng := seededRNG(src.Name())
if !sleepJitter(ctx, rng, job.Jitter) {
return
}
}
// Stream sources should block until ctx is cancelled or a fatal error occurs.
if err := src.Run(ctx, s.Out); err != nil && ctx.Err() == nil {
s.logf("scheduler: stream source %q exited with error: %v", src.Name(), err)
}
}
func (s *Scheduler) runPoller(ctx context.Context, job Job, src sources.PollSource) {
// Compute jitter: either configured per job, or a sensible default.
jitter := effectiveJitter(job.Every, job.Jitter)
// Each worker gets its own RNG (safe + no lock contention).
-seed := time.Now().UnixNano() ^ int64(hashStringFNV32a(job.Source.Name()))
-rng := rand.New(rand.NewSource(seed))
+rng := seededRNG(src.Name())
// Optional startup jitter: avoids all jobs firing at the exact moment the daemon starts.
if !sleepJitter(ctx, rng, jitter) {
@@ -73,7 +121,7 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
}
// Immediate poll at startup (after startup jitter).
-s.pollOnce(ctx, job)
+s.pollOnce(ctx, src)
t := time.NewTicker(job.Every)
defer t.Stop()
@@ -85,7 +133,7 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
if !sleepJitter(ctx, rng, jitter) {
return
}
-s.pollOnce(ctx, job)
+s.pollOnce(ctx, src)
case <-ctx.Done():
return
@@ -93,10 +141,10 @@ func (s *Scheduler) runJob(ctx context.Context, job Job) {
}
}
-func (s *Scheduler) pollOnce(ctx context.Context, job Job) {
-events, err := job.Source.Poll(ctx)
+func (s *Scheduler) pollOnce(ctx context.Context, src sources.PollSource) {
+events, err := src.Poll(ctx)
if err != nil {
s.logf("scheduler: poll failed (%s): %v", job.Source.Name(), err)
s.logf("scheduler: poll failed (%s): %v", src.Name(), err)
return
}
@@ -116,6 +164,13 @@ func (s *Scheduler) logf(format string, args ...any) {
s.Logf(format, args...)
}
// ---- helpers ----
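// seededRNG derives a per-worker RNG by XORing the wall clock with a hash of
// the source name, so concurrent jobs neither contend on a locked global
// source nor start from identical seeds.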
func seededRNG(name string) *rand.Rand {
seed := time.Now().UnixNano() ^ int64(hashStringFNV32a(name))
return rand.New(rand.NewSource(seed))
}
// effectiveJitter chooses a jitter value.
// - If configuredMax > 0, use it (but clamp).
// - Else default to min(every/10, 30s).
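
The hunk ends before the helper body. A minimal sketch of effectiveJitter consistent with its doc comment (the clamp-to-Every bound is an assumption; the diff does not show it):

func effectiveJitter(every, configuredMax time.Duration) time.Duration {
	const defaultCap = 30 * time.Second
	if configuredMax > 0 {
		if configuredMax > every { // assumed clamp: never jitter longer than the interval
			return every
		}
		return configuredMax
	}
	j := every / 10
	if j > defaultCap {
		j = defaultCap
	}
	return j
}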


@@ -27,9 +27,9 @@ func RegisterBuiltins(r *Registry) {
return NewPostgresSinkFromConfig(cfg)
})
-// RabbitMQ sink: publishes events to a broker for downstream consumers.
-r.Register("rabbitmq", func(cfg config.SinkConfig) (Sink, error) {
-return NewRabbitMQSinkFromConfig(cfg)
+// NATS sink: publishes events to a broker for downstream consumers.
+r.Register("nats", func(cfg config.SinkConfig) (Sink, error) {
+return NewNATSSinkFromConfig(cfg)
})
}

sinks/doc.go

@@ -0,0 +1,83 @@
// Package sinks provides sink abstractions, a sink driver registry, and several
// built-in sink drivers.
//
// Built-in drivers:
// - stdout
// - nats
// - postgres
//
// # NATS built-in overview
//
// The NATS sink publishes each event as JSON to a configured subject.
//
// Required params:
// - url: NATS server URL (for example, nats://localhost:4222)
// - exchange: NATS subject to publish to
//
// Example config:
//
// sinks:
// - name: nats_main
// driver: nats
// params:
// url: nats://localhost:4222
// exchange: feedkit.events
//
// # Postgres built-in overview
//
// The postgres sink is intentionally split between downstream daemon ownership
// and feedkit ownership:
// - downstream code registers table schema + event mapping functions
// - feedkit manages DB connection, create-if-missing DDL, transactional
// inserts, and prune helpers
//
// Example config:
//
// sinks:
// - name: pg_main
// driver: postgres
// params:
// uri: postgres://localhost:5432/feedkit?sslmode=disable
// username: feedkit_user
// password: feedkit_pass
//
// Example downstream wiring:
//
// sinks.MustRegisterPostgresSchema("pg_main", sinks.PostgresSchema{
// Tables: []sinks.PostgresTable{
// {
// Name: "events",
// Columns: []sinks.PostgresColumn{
// {Name: "event_id", Type: "TEXT", Nullable: false},
// {Name: "emitted_at", Type: "TIMESTAMPTZ", Nullable: false},
// {Name: "payload_json", Type: "JSONB", Nullable: false},
// },
// PrimaryKey: []string{"event_id"},
// PruneColumn: "emitted_at",
// },
// },
// MapEvent: func(ctx context.Context, e event.Event) ([]sinks.PostgresWrite, error) {
// b, err := json.Marshal(e.Payload)
// if err != nil {
// return nil, err
// }
// return []sinks.PostgresWrite{
// {
// Table: "events",
// Values: map[string]any{
// "event_id": e.ID,
// "emitted_at": e.EmittedAt,
// "payload_json": string(b),
// },
// },
// }, nil
// },
// })
//
// Pruning via type assertion:
//
// if p, ok := sink.(sinks.PostgresPruner); ok {
// _, _ = p.PruneKeepLatest(ctx, "events", 10000)
// _, _ = p.PruneOlderThan(ctx, "events", time.Now().UTC().AddDate(0, -1, 0))
// }
package sinks

sinks/nats.go

@@ -0,0 +1,97 @@
package sinks
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
"github.com/nats-io/nats.go"
)
type NATSSink struct {
name string
url string
exchange string
mu sync.Mutex
conn *nats.Conn
}
func NewNATSSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
url, err := requireStringParam(cfg, "url")
if err != nil {
return nil, err
}
ex, err := requireStringParam(cfg, "exchange")
if err != nil {
return nil, err
}
return &NATSSink{name: cfg.Name, url: url, exchange: ex}, nil
}
func (r *NATSSink) Name() string { return r.name }
func (r *NATSSink) Consume(ctx context.Context, e event.Event) error {
// Boundary validation: if something upstream violated invariants,
// surface it loudly rather than publishing partial nonsense.
if err := e.Validate(); err != nil {
return fmt.Errorf("NATS sink: invalid event: %w", err)
}
if err := ctx.Err(); err != nil {
return err
}
conn, err := r.connect(ctx)
if err != nil {
return fmt.Errorf("NATS sink: connect: %w", err)
}
b, err := json.Marshal(e)
if err != nil {
return fmt.Errorf("NATS sink: marshal event: %w", err)
}
if err := ctx.Err(); err != nil {
return err
}
if err := conn.Publish(r.exchange, b); err != nil {
return fmt.Errorf("NATS sink: publish: %w", err)
}
return nil
}
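// connect lazily dials NATS and caches the connection, redialing only when
// the cached connection has been closed. If the caller's context carries a
// deadline, the remaining time bounds the dial via nats.Timeout.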
func (r *NATSSink) connect(ctx context.Context) (*nats.Conn, error) {
if err := ctx.Err(); err != nil {
return nil, err
}
r.mu.Lock()
defer r.mu.Unlock()
if r.conn != nil && r.conn.Status() != nats.CLOSED {
return r.conn, nil
}
opts := []nats.Option{
nats.Name(fmt.Sprintf("feedkit sink %s", r.name)),
}
if deadline, ok := ctx.Deadline(); ok {
timeout := time.Until(deadline)
if timeout <= 0 {
return nil, ctx.Err()
}
opts = append(opts, nats.Timeout(timeout))
}
conn, err := nats.Connect(r.url, opts...)
if err != nil {
return nil, err
}
r.conn = conn
return conn, nil
}


@@ -2,36 +2,381 @@ package sinks
import (
"context"
"database/sql"
"fmt"
"net/url"
"strconv"
"strings"
"time"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
_ "github.com/lib/pq"
)
-type PostgresSink struct {
-name string
-dsn string
const postgresInitTimeout = 5 * time.Second
type postgresTx interface {
ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
Commit() error
Rollback() error
}
-func NewPostgresSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
-dsn, err := requireStringParam(cfg, "dsn")
type postgresDB interface {
PingContext(ctx context.Context) error
BeginTx(ctx context.Context, opts *sql.TxOptions) (postgresTx, error)
ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
Close() error
}
type sqlDBWrapper struct {
db *sql.DB
}
func (w *sqlDBWrapper) PingContext(ctx context.Context) error {
return w.db.PingContext(ctx)
}
func (w *sqlDBWrapper) BeginTx(ctx context.Context, opts *sql.TxOptions) (postgresTx, error) {
tx, err := w.db.BeginTx(ctx, opts)
if err != nil {
return nil, err
}
-return &PostgresSink{name: cfg.Name, dsn: dsn}, nil
return &sqlTxWrapper{tx: tx}, nil
}
func (w *sqlDBWrapper) ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error) {
return w.db.ExecContext(ctx, query, args...)
}
func (w *sqlDBWrapper) Close() error {
return w.db.Close()
}
type sqlTxWrapper struct {
tx *sql.Tx
}
func (w *sqlTxWrapper) ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error) {
return w.tx.ExecContext(ctx, query, args...)
}
func (w *sqlTxWrapper) Commit() error {
return w.tx.Commit()
}
func (w *sqlTxWrapper) Rollback() error {
return w.tx.Rollback()
}
var openPostgresDB = func(dsn string) (postgresDB, error) {
db, err := sql.Open("postgres", dsn)
if err != nil {
return nil, err
}
return &sqlDBWrapper{db: db}, nil
}
type PostgresSink struct {
name string
db postgresDB
schema postgresSchemaCompiled
}
func NewPostgresSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
uri, err := requireStringParam(cfg, "uri")
if err != nil {
return nil, err
}
username, err := requireStringParam(cfg, "username")
if err != nil {
return nil, err
}
password, err := requireStringParam(cfg, "password")
if err != nil {
return nil, err
}
schema, ok := lookupPostgresSchema(cfg.Name)
if !ok {
return nil, fmt.Errorf("postgres sink %q: no schema registered (call sinks.RegisterPostgresSchema before building sinks)", cfg.Name)
}
dsn, err := buildPostgresDSN(uri, username, password)
if err != nil {
return nil, fmt.Errorf("postgres sink %q: build dsn: %w", cfg.Name, err)
}
db, err := openPostgresDB(dsn)
if err != nil {
return nil, fmt.Errorf("postgres sink %q: open db: %w", cfg.Name, err)
}
s := &PostgresSink{name: cfg.Name, db: db, schema: schema}
if err := s.initialize(); err != nil {
_ = db.Close()
return nil, err
}
return s, nil
}
func (p *PostgresSink) Name() string { return p.name }
func (p *PostgresSink) Consume(ctx context.Context, e event.Event) error {
-_ = ctx
// Boundary validation: if something upstream violated invariants,
-// surface it loudly rather than printing partial nonsense.
+// surface it loudly rather than writing corrupt rows.
if err := e.Validate(); err != nil {
return fmt.Errorf("rabbitmq sink: invalid event: %w", err)
return fmt.Errorf("postgres sink: invalid event: %w", err)
}
-// TODO implement Postgres transaction
if err := ctx.Err(); err != nil {
return err
}
writes, err := p.schema.mapEvent(ctx, e)
if err != nil {
return fmt.Errorf("postgres sink: map event: %w", err)
}
if len(writes) == 0 {
return nil
}
tx, err := p.db.BeginTx(ctx, nil)
if err != nil {
return fmt.Errorf("postgres sink: begin tx: %w", err)
}
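// Rollback-unless-committed: any early return below triggers the deferred
// Rollback; flipping committed after a successful Commit makes it a no-op.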
committed := false
defer func() {
if !committed {
_ = tx.Rollback()
}
}()
for _, w := range writes {
tbl, err := p.schema.validateWrite(w)
if err != nil {
return fmt.Errorf("postgres sink: %w", err)
}
query, args, err := buildInsertSQL(tbl, w)
if err != nil {
return fmt.Errorf("postgres sink: build insert for table %q: %w", tbl.name, err)
}
if _, err := tx.ExecContext(ctx, query, args...); err != nil {
return fmt.Errorf("postgres sink: insert into %q: %w", tbl.name, err)
}
}
if err := tx.Commit(); err != nil {
return fmt.Errorf("postgres sink: commit tx: %w", err)
}
committed = true
return nil
}
func (p *PostgresSink) PruneKeepLatest(ctx context.Context, table string, keep int) (int64, error) {
if keep < 0 {
return 0, fmt.Errorf("postgres sink: keep must be >= 0")
}
tbl, err := p.lookupTable(table)
if err != nil {
return 0, err
}
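// Keep the newest `keep` rows ordered by the prune column; OFFSET $1 skips
// those survivors, and everything older is deleted by physical row id (ctid).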
query := fmt.Sprintf(
`DELETE FROM %s WHERE ctid IN (
SELECT ctid FROM %s
ORDER BY %s DESC
OFFSET $1
)`,
quotePostgresIdent(tbl.name),
quotePostgresIdent(tbl.name),
quotePostgresIdent(tbl.pruneColumn),
)
res, err := p.db.ExecContext(ctx, query, keep)
if err != nil {
return 0, fmt.Errorf("postgres sink: prune keep latest table %q: %w", tbl.name, err)
}
rows, err := res.RowsAffected()
if err != nil {
return 0, fmt.Errorf("postgres sink: prune keep latest table %q rows affected: %w", tbl.name, err)
}
return rows, nil
}
func (p *PostgresSink) PruneOlderThan(ctx context.Context, table string, cutoff time.Time) (int64, error) {
tbl, err := p.lookupTable(table)
if err != nil {
return 0, err
}
query := fmt.Sprintf(
`DELETE FROM %s WHERE %s < $1`,
quotePostgresIdent(tbl.name),
quotePostgresIdent(tbl.pruneColumn),
)
res, err := p.db.ExecContext(ctx, query, cutoff)
if err != nil {
return 0, fmt.Errorf("postgres sink: prune older than table %q: %w", tbl.name, err)
}
rows, err := res.RowsAffected()
if err != nil {
return 0, fmt.Errorf("postgres sink: prune older than table %q rows affected: %w", tbl.name, err)
}
return rows, nil
}
func (p *PostgresSink) PruneAllKeepLatest(ctx context.Context, keep int) (map[string]int64, error) {
counts := make(map[string]int64, len(p.schema.tableOrder))
for _, table := range p.schema.tableOrder {
n, err := p.PruneKeepLatest(ctx, table, keep)
if err != nil {
return counts, err
}
counts[table] = n
}
return counts, nil
}
func (p *PostgresSink) PruneAllOlderThan(ctx context.Context, cutoff time.Time) (map[string]int64, error) {
counts := make(map[string]int64, len(p.schema.tableOrder))
for _, table := range p.schema.tableOrder {
n, err := p.PruneOlderThan(ctx, table, cutoff)
if err != nil {
return counts, err
}
counts[table] = n
}
return counts, nil
}
func (p *PostgresSink) initialize() error {
ctx, cancel := context.WithTimeout(context.Background(), postgresInitTimeout)
defer cancel()
if err := p.db.PingContext(ctx); err != nil {
return fmt.Errorf("postgres sink %q: ping db: %w", p.name, err)
}
for _, tableName := range p.schema.tableOrder {
tbl := p.schema.tables[tableName]
createTableSQL := buildCreateTableSQL(tbl)
if _, err := p.db.ExecContext(ctx, createTableSQL); err != nil {
return fmt.Errorf("postgres sink %q: ensure table %q: %w", p.name, tbl.name, err)
}
for _, idx := range tbl.indexes {
createIndexSQL := buildCreateIndexSQL(tbl.name, idx)
if _, err := p.db.ExecContext(ctx, createIndexSQL); err != nil {
return fmt.Errorf("postgres sink %q: ensure index %q on %q: %w", p.name, idx.Name, tbl.name, err)
}
}
}
return nil
}
func (p *PostgresSink) lookupTable(table string) (postgresTableCompiled, error) {
table = strings.TrimSpace(table)
if table == "" {
return postgresTableCompiled{}, fmt.Errorf("postgres sink: table cannot be empty")
}
tbl, ok := p.schema.tables[table]
if !ok {
return postgresTableCompiled{}, fmt.Errorf("postgres sink: unknown table %q", table)
}
return tbl, nil
}
func buildPostgresDSN(uri, username, password string) (string, error) {
u, err := url.Parse(strings.TrimSpace(uri))
if err != nil {
return "", fmt.Errorf("invalid uri: %w", err)
}
if u.Scheme == "" {
return "", fmt.Errorf("invalid uri: missing scheme")
}
if u.Host == "" {
return "", fmt.Errorf("invalid uri: missing host")
}
u.User = url.UserPassword(username, password)
return u.String(), nil
}
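// For example (values illustrative):
//
//	buildPostgresDSN("postgres://db.example.local:5432/feedkit?sslmode=disable", "app_user", "app_pass")
//	// => "postgres://app_user:app_pass@db.example.local:5432/feedkit?sslmode=disable"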
func buildCreateTableSQL(tbl postgresTableCompiled) string {
defs := make([]string, 0, len(tbl.columnOrder)+1)
for _, colName := range tbl.columnOrder {
col := tbl.columns[colName]
def := fmt.Sprintf("%s %s", quotePostgresIdent(col.Name), col.Type)
if !col.Nullable {
def += " NOT NULL"
}
defs = append(defs, def)
}
if len(tbl.primaryKey) > 0 {
defs = append(defs, fmt.Sprintf("PRIMARY KEY (%s)", joinQuotedIdents(tbl.primaryKey)))
}
return fmt.Sprintf(
"CREATE TABLE IF NOT EXISTS %s (%s)",
quotePostgresIdent(tbl.name),
strings.Join(defs, ", "),
)
}
func buildCreateIndexSQL(tableName string, idx PostgresIndex) string {
unique := ""
if idx.Unique {
unique = "UNIQUE "
}
return fmt.Sprintf(
"CREATE %sINDEX IF NOT EXISTS %s ON %s (%s)",
unique,
quotePostgresIdent(idx.Name),
quotePostgresIdent(tableName),
joinQuotedIdents(idx.Columns),
)
}
func buildInsertSQL(tbl postgresTableCompiled, w PostgresWrite) (string, []any, error) {
cols := make([]string, 0, len(tbl.columnOrder))
args := make([]any, 0, len(tbl.columnOrder))
placeholders := make([]string, 0, len(tbl.columnOrder))
for i, colName := range tbl.columnOrder {
v, ok := w.Values[colName]
if !ok {
return "", nil, fmt.Errorf("missing value for column %q", colName)
}
cols = append(cols, quotePostgresIdent(colName))
args = append(args, v)
placeholders = append(placeholders, "$"+strconv.Itoa(i+1))
}
q := fmt.Sprintf(
"INSERT INTO %s (%s) VALUES (%s)",
quotePostgresIdent(tbl.name),
strings.Join(cols, ", "),
strings.Join(placeholders, ", "),
)
return q, args, nil
}
func joinQuotedIdents(idents []string) string {
quoted := make([]string, 0, len(idents))
for _, s := range idents {
quoted = append(quoted, quotePostgresIdent(s))
}
return strings.Join(quoted, ", ")
}
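// quotePostgresIdent double-quotes an identifier, doubling any embedded
// quotes per SQL quoting rules, so untrusted names cannot break out of the
// identifier position.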
func quotePostgresIdent(s string) string {
return `"` + strings.ReplaceAll(s, `"`, `""`) + `"`
}

sinks/postgres_schema.go

@@ -0,0 +1,285 @@
package sinks
import (
"context"
"fmt"
"strings"
"sync"
"time"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
// PostgresMapFunc maps one event into zero or more table writes.
//
// Returning zero writes means "this event is not mapped for this sink" and is
// treated as a non-error no-op.
type PostgresMapFunc func(ctx context.Context, e event.Event) ([]PostgresWrite, error)
// PostgresSchema describes the downstream-provided relational model and mapper
// for one configured postgres sink.
type PostgresSchema struct {
Tables []PostgresTable
MapEvent PostgresMapFunc
}
type PostgresWrite struct {
Table string
Values map[string]any
}
type PostgresTable struct {
Name string
Columns []PostgresColumn
PrimaryKey []string
PruneColumn string
Indexes []PostgresIndex
}
type PostgresColumn struct {
Name string
Type string
Nullable bool
}
type PostgresIndex struct {
Name string
Columns []string
Unique bool
}
// PostgresPruner is an optional interface exposed by PostgresSink so downstream
// applications can call retention helpers via type assertion.
type PostgresPruner interface {
PruneKeepLatest(ctx context.Context, table string, keep int) (int64, error)
PruneOlderThan(ctx context.Context, table string, cutoff time.Time) (int64, error)
PruneAllKeepLatest(ctx context.Context, keep int) (map[string]int64, error)
PruneAllOlderThan(ctx context.Context, cutoff time.Time) (map[string]int64, error)
}
type postgresSchemaCompiled struct {
tableOrder []string
tables map[string]postgresTableCompiled
mapEvent PostgresMapFunc
}
type postgresTableCompiled struct {
name string
columns map[string]PostgresColumn
columnOrder []string
primaryKey []string
pruneColumn string
indexes []PostgresIndex
}
var (
postgresSchemaRegistryMu sync.RWMutex
postgresSchemaRegistry = map[string]postgresSchemaCompiled{}
)
// RegisterPostgresSchema registers one downstream schema by sink name.
//
// This should be called by downstream daemon wiring code before sink
// construction. Duplicate sink-name registrations are rejected.
func RegisterPostgresSchema(sinkName string, schema PostgresSchema) error {
sinkName = strings.TrimSpace(sinkName)
if sinkName == "" {
return fmt.Errorf("postgres schema: sink name cannot be empty")
}
compiled, err := compilePostgresSchema(schema)
if err != nil {
return err
}
postgresSchemaRegistryMu.Lock()
defer postgresSchemaRegistryMu.Unlock()
if _, exists := postgresSchemaRegistry[sinkName]; exists {
return fmt.Errorf("postgres schema: sink %q already registered", sinkName)
}
postgresSchemaRegistry[sinkName] = compiled
return nil
}
func MustRegisterPostgresSchema(sinkName string, schema PostgresSchema) {
if err := RegisterPostgresSchema(sinkName, schema); err != nil {
panic(err)
}
}
func lookupPostgresSchema(sinkName string) (postgresSchemaCompiled, bool) {
postgresSchemaRegistryMu.RLock()
defer postgresSchemaRegistryMu.RUnlock()
s, ok := postgresSchemaRegistry[sinkName]
return s, ok
}
func compilePostgresSchema(schema PostgresSchema) (postgresSchemaCompiled, error) {
if schema.MapEvent == nil {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: map function is required")
}
if len(schema.Tables) == 0 {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: at least one table is required")
}
compiled := postgresSchemaCompiled{
tableOrder: make([]string, 0, len(schema.Tables)),
tables: make(map[string]postgresTableCompiled, len(schema.Tables)),
mapEvent: schema.MapEvent,
}
seenTables := map[string]bool{}
seenIndexes := map[string]bool{}
for i, tbl := range schema.Tables {
tableName := strings.TrimSpace(tbl.Name)
if tableName == "" {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: tables[%d].name is required", i)
}
if seenTables[tableName] {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: duplicate table name %q", tableName)
}
seenTables[tableName] = true
if len(tbl.Columns) == 0 {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q must define at least one column", tableName)
}
colOrder := make([]string, 0, len(tbl.Columns))
colMap := make(map[string]PostgresColumn, len(tbl.Columns))
for j, col := range tbl.Columns {
colName := strings.TrimSpace(col.Name)
if colName == "" {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q columns[%d].name is required", tableName, j)
}
if _, exists := colMap[colName]; exists {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q duplicate column %q", tableName, colName)
}
if strings.TrimSpace(col.Type) == "" {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q column %q type is required", tableName, colName)
}
colOrder = append(colOrder, colName)
colMap[colName] = PostgresColumn{
Name: colName,
Type: strings.TrimSpace(col.Type),
Nullable: col.Nullable,
}
}
pk, err := validatePostgresColumnSet(tableName, "primary key", tbl.PrimaryKey, colMap)
if err != nil {
return postgresSchemaCompiled{}, err
}
pruneCol := strings.TrimSpace(tbl.PruneColumn)
if pruneCol == "" {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q prune column is required", tableName)
}
if _, ok := colMap[pruneCol]; !ok {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q prune column %q not found in columns", tableName, pruneCol)
}
indexes := make([]PostgresIndex, 0, len(tbl.Indexes))
for j, idx := range tbl.Indexes {
idxName := strings.TrimSpace(idx.Name)
if idxName == "" {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q indexes[%d].name is required", tableName, j)
}
if len(idx.Columns) == 0 {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: table %q index %q must include at least one column", tableName, idxName)
}
if seenIndexes[idxName] {
return postgresSchemaCompiled{}, fmt.Errorf("postgres schema: duplicate index name %q", idxName)
}
seenIndexes[idxName] = true
idxCols, err := validatePostgresColumnSet(tableName, fmt.Sprintf("index %q columns", idxName), idx.Columns, colMap)
if err != nil {
return postgresSchemaCompiled{}, err
}
indexes = append(indexes, PostgresIndex{
Name: idxName,
Columns: idxCols,
Unique: idx.Unique,
})
}
compiled.tableOrder = append(compiled.tableOrder, tableName)
compiled.tables[tableName] = postgresTableCompiled{
name: tableName,
columns: colMap,
columnOrder: colOrder,
primaryKey: pk,
pruneColumn: pruneCol,
indexes: indexes,
}
}
return compiled, nil
}
func validatePostgresColumnSet(tableName, label string, cols []string, colMap map[string]PostgresColumn) ([]string, error) {
if len(cols) == 0 {
return nil, nil
}
out := make([]string, 0, len(cols))
seen := map[string]bool{}
for _, c := range cols {
name := strings.TrimSpace(c)
if name == "" {
return nil, fmt.Errorf("postgres schema: table %q %s contains empty column name", tableName, label)
}
if seen[name] {
return nil, fmt.Errorf("postgres schema: table %q %s contains duplicate column %q", tableName, label, name)
}
if _, ok := colMap[name]; !ok {
return nil, fmt.Errorf("postgres schema: table %q %s references unknown column %q", tableName, label, name)
}
seen[name] = true
out = append(out, name)
}
return out, nil
}
func (s postgresSchemaCompiled) validateWrite(w PostgresWrite) (postgresTableCompiled, error) {
tableName := strings.TrimSpace(w.Table)
if tableName == "" {
return postgresTableCompiled{}, fmt.Errorf("write table is required")
}
t, ok := s.tables[tableName]
if !ok {
return postgresTableCompiled{}, fmt.Errorf("table %q is not defined in postgres schema", tableName)
}
if len(w.Values) == 0 {
return postgresTableCompiled{}, fmt.Errorf("write for table %q must include values", tableName)
}
for k := range w.Values {
if _, ok := t.columns[k]; !ok {
return postgresTableCompiled{}, fmt.Errorf("write for table %q includes unknown column %q", tableName, k)
}
}
if len(w.Values) != len(t.columnOrder) {
return postgresTableCompiled{}, fmt.Errorf("write for table %q must include all declared columns", tableName)
}
for _, col := range t.columnOrder {
v, ok := w.Values[col]
if !ok {
return postgresTableCompiled{}, fmt.Errorf("write for table %q is missing column %q", tableName, col)
}
if v == nil {
if c := t.columns[col]; !c.Nullable {
return postgresTableCompiled{}, fmt.Errorf("write for table %q has nil value for non-null column %q", tableName, col)
}
}
}
return t, nil
}

sinks/postgres_test.go

@@ -0,0 +1,590 @@
package sinks
import (
"context"
"database/sql"
"errors"
"net/url"
"strings"
"testing"
"time"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
type fakeResult struct {
rows int64
}
func (r fakeResult) LastInsertId() (int64, error) { return 0, errors.New("unsupported") }
func (r fakeResult) RowsAffected() (int64, error) { return r.rows, nil }
type execCall struct {
query string
args []any
}
type fakeTx struct {
execCalls []execCall
execErr error
execErrOnCall int
execRows int64
commitErr error
rollbackErr error
commitCalls int
rollbackCalls int
}
func (t *fakeTx) ExecContext(_ context.Context, query string, args ...any) (sql.Result, error) {
t.execCalls = append(t.execCalls, execCall{query: query, args: append([]any(nil), args...)})
if t.execErr != nil && t.execErrOnCall == len(t.execCalls) {
return nil, t.execErr
}
return fakeResult{rows: t.execRows}, nil
}
func (t *fakeTx) Commit() error {
t.commitCalls++
return t.commitErr
}
func (t *fakeTx) Rollback() error {
t.rollbackCalls++
return t.rollbackErr
}
type fakeDB struct {
pingErr error
beginErr error
execErr error
execErrOnCall int
execRows int64
pingCalls int
beginCalls int
execCalls []execCall
closeCalls int
tx *fakeTx
}
func (d *fakeDB) PingContext(_ context.Context) error {
d.pingCalls++
return d.pingErr
}
func (d *fakeDB) BeginTx(_ context.Context, _ *sql.TxOptions) (postgresTx, error) {
d.beginCalls++
if d.beginErr != nil {
return nil, d.beginErr
}
if d.tx == nil {
d.tx = &fakeTx{}
}
return d.tx, nil
}
func (d *fakeDB) ExecContext(_ context.Context, query string, args ...any) (sql.Result, error) {
d.execCalls = append(d.execCalls, execCall{query: query, args: append([]any(nil), args...)})
if d.execErr != nil && d.execErrOnCall == len(d.execCalls) {
return nil, d.execErr
}
return fakeResult{rows: d.execRows}, nil
}
func (d *fakeDB) Close() error {
d.closeCalls++
return nil
}
func resetPostgresSchemaRegistryForTest() {
postgresSchemaRegistryMu.Lock()
defer postgresSchemaRegistryMu.Unlock()
postgresSchemaRegistry = map[string]postgresSchemaCompiled{}
}
func withPostgresTestState(t *testing.T) {
t.Helper()
resetPostgresSchemaRegistryForTest()
oldOpen := openPostgresDB
t.Cleanup(func() {
openPostgresDB = oldOpen
resetPostgresSchemaRegistryForTest()
})
}
func validTestEvent() event.Event {
now := time.Now().UTC()
return event.Event{
ID: "evt-1",
Kind: event.Kind("observation"),
Source: "source-1",
EmittedAt: now,
Payload: map[string]any{
"x": 1,
},
}
}
func schemaOneTable(mapFn PostgresMapFunc) PostgresSchema {
return PostgresSchema{
Tables: []PostgresTable{
{
Name: "events",
Columns: []PostgresColumn{
{Name: "event_id", Type: "TEXT", Nullable: false},
{Name: "emitted_at", Type: "TIMESTAMPTZ", Nullable: false},
{Name: "payload_json", Type: "JSONB", Nullable: false},
},
PrimaryKey: []string{"event_id"},
PruneColumn: "emitted_at",
Indexes: []PostgresIndex{
{Name: "idx_events_emitted_at", Columns: []string{"emitted_at"}},
},
},
},
MapEvent: mapFn,
}
}
func schemaTwoTables(mapFn PostgresMapFunc) PostgresSchema {
return PostgresSchema{
Tables: []PostgresTable{
{
Name: "events",
Columns: []PostgresColumn{
{Name: "event_id", Type: "TEXT", Nullable: false},
{Name: "emitted_at", Type: "TIMESTAMPTZ", Nullable: false},
},
PrimaryKey: []string{"event_id"},
PruneColumn: "emitted_at",
},
{
Name: "event_payloads",
Columns: []PostgresColumn{
{Name: "event_id", Type: "TEXT", Nullable: false},
{Name: "payload_json", Type: "JSONB", Nullable: false},
{Name: "emitted_at", Type: "TIMESTAMPTZ", Nullable: false},
},
PrimaryKey: []string{"event_id"},
PruneColumn: "emitted_at",
},
},
MapEvent: mapFn,
}
}
func mustCompileSchema(t *testing.T, s PostgresSchema) postgresSchemaCompiled {
t.Helper()
compiled, err := compilePostgresSchema(s)
if err != nil {
t.Fatalf("compile schema: %v", err)
}
return compiled
}
func TestRegisterPostgresSchema(t *testing.T) {
withPostgresTestState(t)
err := RegisterPostgresSchema("pg", schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
}))
if err != nil {
t.Fatalf("register schema: %v", err)
}
if _, ok := lookupPostgresSchema("pg"); !ok {
t.Fatalf("expected schema registration")
}
err = RegisterPostgresSchema("pg", schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
}))
if err == nil {
t.Fatalf("expected duplicate registration error")
}
if !strings.Contains(err.Error(), "already registered") {
t.Fatalf("unexpected duplicate error: %v", err)
}
}
func TestRegisterPostgresSchema_RejectsInvalidSchema(t *testing.T) {
withPostgresTestState(t)
err := RegisterPostgresSchema("pg", PostgresSchema{
Tables: []PostgresTable{
{
Name: "events",
Columns: []PostgresColumn{
{Name: "id", Type: "TEXT", Nullable: false},
},
PruneColumn: "missing_col",
},
},
MapEvent: func(_ context.Context, _ event.Event) ([]PostgresWrite, error) { return nil, nil },
})
if err == nil {
t.Fatalf("expected invalid schema error")
}
if !strings.Contains(err.Error(), "prune column") {
t.Fatalf("unexpected schema validation error: %v", err)
}
err = RegisterPostgresSchema("pg2", PostgresSchema{
Tables: []PostgresTable{
{
Name: "events",
Columns: []PostgresColumn{
{Name: "id", Type: "TEXT", Nullable: false},
{Name: "emitted_at", Type: "TIMESTAMPTZ", Nullable: false},
},
PruneColumn: "emitted_at",
Indexes: []PostgresIndex{
{Name: "idx_events_empty", Columns: nil},
},
},
},
MapEvent: func(_ context.Context, _ event.Event) ([]PostgresWrite, error) { return nil, nil },
})
if err == nil {
t.Fatalf("expected invalid index schema error")
}
if !strings.Contains(err.Error(), "at least one column") {
t.Fatalf("unexpected index validation error: %v", err)
}
}
func TestNewPostgresSinkFromConfig_MissingParams(t *testing.T) {
withPostgresTestState(t)
tests := []struct {
name string
params map[string]any
want string
}{
{name: "missing uri", params: map[string]any{"username": "u", "password": "p"}, want: "params.uri"},
{name: "missing username", params: map[string]any{"uri": "postgres://localhost/db", "password": "p"}, want: "params.username"},
{name: "missing password", params: map[string]any{"uri": "postgres://localhost/db", "username": "u"}, want: "params.password"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
_, err := NewPostgresSinkFromConfig(config.SinkConfig{
Name: "pg",
Driver: "postgres",
Params: tc.params,
})
if err == nil {
t.Fatalf("expected error")
}
if !strings.Contains(err.Error(), tc.want) {
t.Fatalf("expected %q in error, got: %v", tc.want, err)
}
})
}
}
func TestNewPostgresSinkFromConfig_MissingSchemaRegistration(t *testing.T) {
withPostgresTestState(t)
_, err := NewPostgresSinkFromConfig(config.SinkConfig{
Name: "pg",
Driver: "postgres",
Params: map[string]any{
"uri": "postgres://localhost/db",
"username": "user",
"password": "pass",
},
})
if err == nil {
t.Fatalf("expected error")
}
if !strings.Contains(err.Error(), "no schema registered") {
t.Fatalf("unexpected error: %v", err)
}
}
func TestNewPostgresSinkFromConfig_EagerInit(t *testing.T) {
withPostgresTestState(t)
err := RegisterPostgresSchema("pg", schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
}))
if err != nil {
t.Fatalf("register schema: %v", err)
}
db := &fakeDB{}
var gotDSN string
openPostgresDB = func(dsn string) (postgresDB, error) {
gotDSN = dsn
return db, nil
}
s, err := NewPostgresSinkFromConfig(config.SinkConfig{
Name: "pg",
Driver: "postgres",
Params: map[string]any{
"uri": "postgres://db.example.local:5432/feedkit?sslmode=disable",
"username": "app_user",
"password": "app_pass",
},
})
if err != nil {
t.Fatalf("new postgres sink: %v", err)
}
if s == nil {
t.Fatalf("expected sink")
}
if db.pingCalls != 1 {
t.Fatalf("expected one ping, got %d", db.pingCalls)
}
if len(db.execCalls) != 2 {
t.Fatalf("expected 2 init exec calls (table + index), got %d", len(db.execCalls))
}
if !strings.Contains(db.execCalls[0].query, `CREATE TABLE IF NOT EXISTS "events"`) {
t.Fatalf("unexpected create table query: %s", db.execCalls[0].query)
}
if !strings.Contains(db.execCalls[1].query, `CREATE INDEX IF NOT EXISTS "idx_events_emitted_at"`) {
t.Fatalf("unexpected create index query: %s", db.execCalls[1].query)
}
u, err := url.Parse(gotDSN)
if err != nil {
t.Fatalf("parse dsn: %v", err)
}
if u.User == nil || u.User.Username() != "app_user" {
t.Fatalf("dsn missing username: %q", gotDSN)
}
pass, ok := u.User.Password()
if !ok || pass != "app_pass" {
t.Fatalf("dsn missing password: %q", gotDSN)
}
}
func TestNewPostgresSinkFromConfig_InitFailureClosesDB(t *testing.T) {
withPostgresTestState(t)
err := RegisterPostgresSchema("pg", schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
}))
if err != nil {
t.Fatalf("register schema: %v", err)
}
db := &fakeDB{execErrOnCall: 1, execErr: errors.New("ddl failed")}
openPostgresDB = func(_ string) (postgresDB, error) {
return db, nil
}
_, err = NewPostgresSinkFromConfig(config.SinkConfig{
Name: "pg",
Driver: "postgres",
Params: map[string]any{
"uri": "postgres://localhost/db",
"username": "user",
"password": "pass",
},
})
if err == nil {
t.Fatalf("expected init error")
}
if db.closeCalls != 1 {
t.Fatalf("expected db close on init failure")
}
}
func TestPostgresSinkConsume_InvalidEvent(t *testing.T) {
db := &fakeDB{}
called := 0
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
called++
return nil, nil
})),
}
err := sink.Consume(context.Background(), event.Event{})
if err == nil {
t.Fatalf("expected invalid event error")
}
if !strings.Contains(err.Error(), "invalid event") {
t.Fatalf("unexpected error: %v", err)
}
if called != 0 {
t.Fatalf("expected mapper not called for invalid events")
}
}
func TestPostgresSinkConsume_UnmappedEventIsNoOp(t *testing.T) {
db := &fakeDB{}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
})),
}
if err := sink.Consume(context.Background(), validTestEvent()); err != nil {
t.Fatalf("consume: %v", err)
}
if db.beginCalls != 0 {
t.Fatalf("expected no transaction for unmapped events")
}
}
func TestPostgresSinkConsume_OneEventWritesMultipleTablesAtomically(t *testing.T) {
tx := &fakeTx{}
db := &fakeDB{tx: tx}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaTwoTables(func(_ context.Context, e event.Event) ([]PostgresWrite, error) {
return []PostgresWrite{
{Table: "events", Values: map[string]any{"event_id": e.ID, "emitted_at": e.EmittedAt}},
{Table: "event_payloads", Values: map[string]any{"event_id": e.ID, "payload_json": `{}`, "emitted_at": e.EmittedAt}},
}, nil
})),
}
if err := sink.Consume(context.Background(), validTestEvent()); err != nil {
t.Fatalf("consume: %v", err)
}
if db.beginCalls != 1 {
t.Fatalf("expected one transaction begin, got %d", db.beginCalls)
}
if len(tx.execCalls) != 2 {
t.Fatalf("expected 2 insert statements, got %d", len(tx.execCalls))
}
if tx.commitCalls != 1 {
t.Fatalf("expected one commit, got %d", tx.commitCalls)
}
if tx.rollbackCalls != 0 {
t.Fatalf("expected zero rollbacks, got %d", tx.rollbackCalls)
}
}
func TestPostgresSinkConsume_InsertFailureRollsBack(t *testing.T) {
tx := &fakeTx{execErrOnCall: 2, execErr: errors.New("duplicate key")}
db := &fakeDB{tx: tx}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaTwoTables(func(_ context.Context, e event.Event) ([]PostgresWrite, error) {
return []PostgresWrite{
{Table: "events", Values: map[string]any{"event_id": e.ID, "emitted_at": e.EmittedAt}},
{Table: "event_payloads", Values: map[string]any{"event_id": e.ID, "payload_json": `{}`, "emitted_at": e.EmittedAt}},
}, nil
})),
}
err := sink.Consume(context.Background(), validTestEvent())
if err == nil {
t.Fatalf("expected insert error")
}
if !strings.Contains(err.Error(), "insert into") {
t.Fatalf("unexpected error: %v", err)
}
if tx.commitCalls != 0 {
t.Fatalf("expected no commit")
}
if tx.rollbackCalls != 1 {
t.Fatalf("expected rollback, got %d", tx.rollbackCalls)
}
}
func TestPostgresSinkPrune_PerTable(t *testing.T) {
db := &fakeDB{execRows: 7}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
})),
}
rows, err := sink.PruneKeepLatest(context.Background(), "events", 10)
if err != nil {
t.Fatalf("prune keep latest: %v", err)
}
if rows != 7 {
t.Fatalf("unexpected rows affected: %d", rows)
}
if len(db.execCalls) != 1 {
t.Fatalf("expected one prune query")
}
if !strings.Contains(db.execCalls[0].query, `ORDER BY "emitted_at" DESC`) {
t.Fatalf("unexpected keep-latest query: %s", db.execCalls[0].query)
}
if len(db.execCalls[0].args) != 1 || db.execCalls[0].args[0] != 10 {
t.Fatalf("unexpected keep-latest args: %#v", db.execCalls[0].args)
}
cutoff := time.Now().UTC().Add(-24 * time.Hour)
rows, err = sink.PruneOlderThan(context.Background(), "events", cutoff)
if err != nil {
t.Fatalf("prune older than: %v", err)
}
if rows != 7 {
t.Fatalf("unexpected rows affected: %d", rows)
}
if len(db.execCalls) != 2 {
t.Fatalf("expected two prune queries")
}
if !strings.Contains(db.execCalls[1].query, `WHERE "emitted_at" < $1`) {
t.Fatalf("unexpected older-than query: %s", db.execCalls[1].query)
}
}
func TestPostgresSinkPrune_AllTables(t *testing.T) {
db := &fakeDB{execRows: 3}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaTwoTables(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
})),
}
keepCounts, err := sink.PruneAllKeepLatest(context.Background(), 5)
if err != nil {
t.Fatalf("prune all keep latest: %v", err)
}
if len(keepCounts) != 2 || keepCounts["events"] != 3 || keepCounts["event_payloads"] != 3 {
t.Fatalf("unexpected keep counts: %#v", keepCounts)
}
db.execCalls = nil
olderCounts, err := sink.PruneAllOlderThan(context.Background(), time.Now().UTC())
if err != nil {
t.Fatalf("prune all older than: %v", err)
}
if len(olderCounts) != 2 || olderCounts["events"] != 3 || olderCounts["event_payloads"] != 3 {
t.Fatalf("unexpected older-than counts: %#v", olderCounts)
}
if len(db.execCalls) != 2 {
t.Fatalf("expected one prune call per table")
}
}
func TestPostgresSinkPrune_Errors(t *testing.T) {
db := &fakeDB{}
sink := &PostgresSink{
name: "pg",
db: db,
schema: mustCompileSchema(t, schemaOneTable(func(_ context.Context, _ event.Event) ([]PostgresWrite, error) {
return nil, nil
})),
}
if _, err := sink.PruneKeepLatest(context.Background(), "events", -1); err == nil {
t.Fatalf("expected negative keep error")
}
if _, err := sink.PruneOlderThan(context.Background(), "missing", time.Now().UTC()); err == nil {
t.Fatalf("expected unknown table error")
}
}


@@ -1,42 +0,0 @@
package sinks
import (
"context"
"fmt"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
type RabbitMQSink struct {
name string
url string
exchange string
}
func NewRabbitMQSinkFromConfig(cfg config.SinkConfig) (Sink, error) {
url, err := requireStringParam(cfg, "url")
if err != nil {
return nil, err
}
ex, err := requireStringParam(cfg, "exchange")
if err != nil {
return nil, err
}
return &RabbitMQSink{name: cfg.Name, url: url, exchange: ex}, nil
}
func (r *RabbitMQSink) Name() string { return r.name }
func (r *RabbitMQSink) Consume(ctx context.Context, e event.Event) error {
_ = ctx
// Boundary validation: if something upstream violated invariants,
// surface it loudly rather than printing partial nonsense.
if err := e.Validate(); err != nil {
return fmt.Errorf("rabbitmq sink: invalid event: %w", err)
}
// TODO implement RabbitMQ publishing
return nil
}

sources/doc.go

@@ -0,0 +1,14 @@
// Package sources defines feedkit's input-source abstraction.
//
// A source ingests upstream input and emits one or more event.Event values.
//
// feedkit supports two source modes:
// - PollSource: scheduler invokes Poll on a cadence.
// - StreamSource: source runs continuously and pushes events as input arrives.
//
// Source drivers are domain-specific and registered into Registry by driver name.
// Registry can then build configured sources from config.SourceConfig.
//
// A single source may emit 0..N events per poll or stream iteration, and those
// events may span multiple event kinds.
package sources


@@ -7,43 +7,130 @@ import (
"gitea.maximumdirect.net/ejr/feedkit/config"
)
-// Factory constructs a configured Source instance from config.
+// PollFactory constructs a configured PollSource instance from config.
//
// This is how concrete daemons (weatherfeeder/newsfeeder/...) register their
// domain-specific source drivers (Open-Meteo, NWS, RSS, etc.) while feedkit
// remains domain-agnostic.
-type Factory func(cfg config.SourceConfig) (Source, error)
+type PollFactory func(cfg config.SourceConfig) (PollSource, error)
type StreamFactory func(cfg config.SourceConfig) (StreamSource, error)
// Factory is the legacy alias for poll source factories.
type Factory = PollFactory
type Registry struct {
-byDriver map[string]Factory
+byPollDriver map[string]PollFactory
+byStreamDriver map[string]StreamFactory
}
func NewRegistry() *Registry {
-return &Registry{byDriver: map[string]Factory{}}
+return &Registry{
+byPollDriver: map[string]PollFactory{},
+byStreamDriver: map[string]StreamFactory{},
+}
}
// Register associates a driver name (e.g. "openmeteo_observation") with a factory.
//
// The driver string is the "lookup key" used by config.sources[].driver.
-func (r *Registry) Register(driver string, f Factory) {
+func (r *Registry) Register(driver string, f PollFactory) {
r.RegisterPoll(driver, f)
}
// RegisterPoll associates a driver name with a polling-source factory.
func (r *Registry) RegisterPoll(driver string, f PollFactory) {
driver = strings.TrimSpace(driver)
if driver == "" {
// Panic is appropriate here: registering an empty driver is always a programmer error,
// and it will lead to extremely confusing runtime behavior if allowed.
panic("sources.Registry.Register: driver cannot be empty")
panic("sources.Registry.RegisterPoll: driver cannot be empty")
}
if f == nil {
panic(fmt.Sprintf("sources.Registry.Register: factory cannot be nil (driver=%q)", driver))
panic(fmt.Sprintf("sources.Registry.RegisterPoll: factory cannot be nil (driver=%q)", driver))
}
if _, exists := r.byStreamDriver[driver]; exists {
panic(fmt.Sprintf("sources.Registry.RegisterPoll: driver %q already registered as a stream source", driver))
}
if _, exists := r.byPollDriver[driver]; exists {
panic(fmt.Sprintf("sources.Registry.RegisterPoll: driver %q already registered as a polling source", driver))
}
r.byPollDriver[driver] = f
}
-r.byDriver[driver] = f
// RegisterStream is the StreamSource equivalent of Register.
func (r *Registry) RegisterStream(driver string, f StreamFactory) {
driver = strings.TrimSpace(driver)
if driver == "" {
panic("sources.Registry.RegisterStream: driver cannot be empty")
}
if f == nil {
panic(fmt.Sprintf("sources.Registry.RegisterStream: factory cannot be nil (driver=%q)", driver))
}
if _, exists := r.byPollDriver[driver]; exists {
panic(fmt.Sprintf("sources.Registry.RegisterStream: driver %q already registered as a polling source", driver))
}
if _, exists := r.byStreamDriver[driver]; exists {
panic(fmt.Sprintf("sources.Registry.RegisterStream: driver %q already registered as a stream source", driver))
}
r.byStreamDriver[driver] = f
}
-// Build constructs a Source from a SourceConfig by looking up cfg.Driver.
-func (r *Registry) Build(cfg config.SourceConfig) (Source, error) {
-f, ok := r.byDriver[cfg.Driver]
+// Build constructs a polling source from a SourceConfig by looking up cfg.Driver.
+func (r *Registry) Build(cfg config.SourceConfig) (PollSource, error) {
return r.BuildPoll(cfg)
}
// BuildPoll constructs a polling source from a SourceConfig by looking up cfg.Driver.
func (r *Registry) BuildPoll(cfg config.SourceConfig) (PollSource, error) {
driver := strings.TrimSpace(cfg.Driver)
if cfg.Mode.Normalize() == config.SourceModeStream {
return nil, fmt.Errorf("source %q mode=stream cannot be built as polling source", cfg.Name)
}
f, ok := r.byPollDriver[driver]
if !ok {
return nil, fmt.Errorf("unknown source driver: %q", cfg.Driver)
if _, streamExists := r.byStreamDriver[driver]; streamExists {
return nil, fmt.Errorf("source driver %q is stream-only; cannot build as polling source", driver)
}
return nil, fmt.Errorf("unknown source driver: %q", driver)
}
return f(cfg)
}
// BuildInput can return either a polling Source or a StreamSource.
func (r *Registry) BuildInput(cfg config.SourceConfig) (Input, error) {
driver := strings.TrimSpace(cfg.Driver)
mode := cfg.Mode.Normalize()
if mode != config.SourceModeAuto && mode != config.SourceModePoll && mode != config.SourceModeStream {
return nil, fmt.Errorf("source %q has invalid mode %q (expected \"poll\" or \"stream\")", cfg.Name, cfg.Mode)
}
switch mode {
case config.SourceModePoll:
f, ok := r.byPollDriver[driver]
if !ok {
if _, streamExists := r.byStreamDriver[driver]; streamExists {
return nil, fmt.Errorf("source %q mode=poll conflicts with stream-only driver %q", cfg.Name, driver)
}
return nil, fmt.Errorf("unknown source driver: %q", driver)
}
return f(cfg)
case config.SourceModeStream:
f, ok := r.byStreamDriver[driver]
if !ok {
if _, pollExists := r.byPollDriver[driver]; pollExists {
return nil, fmt.Errorf("source %q mode=stream conflicts with polling driver %q", cfg.Name, driver)
}
return nil, fmt.Errorf("unknown source driver: %q", driver)
}
return f(cfg)
}
if f, ok := r.byStreamDriver[driver]; ok {
return f(cfg)
}
if f, ok := r.byPollDriver[driver]; ok {
return f(cfg)
}
return nil, fmt.Errorf("unknown source driver: %q", driver)
}
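
A registration sketch under assumed driver names and factories (the factories here are hypothetical): each driver is either poll or stream, never both, and BuildInput resolves the mode from config or, when mode is omitted, from the driver type.

reg := sources.NewRegistry()
reg.RegisterPoll("openmeteo_observation", newOpenMeteoSource) // hypothetical PollFactory
reg.RegisterStream("nats_ingest", newNATSIngestSource)        // hypothetical StreamFactory
src, err := reg.BuildInput(cfg) // cfg.Mode: "poll", "stream", or omitted for auto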

sources/registry_test.go

@@ -0,0 +1,84 @@
package sources
import (
"context"
"strings"
"testing"
"gitea.maximumdirect.net/ejr/feedkit/config"
"gitea.maximumdirect.net/ejr/feedkit/event"
)
type testPollSource struct{ name string }
func (s testPollSource) Name() string { return s.name }
func (s testPollSource) Poll(context.Context) ([]event.Event, error) {
return nil, nil
}
type testStreamSource struct{ name string }
func (s testStreamSource) Name() string { return s.name }
func (s testStreamSource) Run(context.Context, chan<- event.Event) error {
return nil
}
func TestRegistryBuildInputModeConflicts(t *testing.T) {
r := NewRegistry()
r.RegisterPoll("poll_driver", func(cfg config.SourceConfig) (PollSource, error) {
return testPollSource{name: cfg.Name}, nil
})
r.RegisterStream("stream_driver", func(cfg config.SourceConfig) (StreamSource, error) {
return testStreamSource{name: cfg.Name}, nil
})
_, err := r.BuildInput(config.SourceConfig{
Name: "s1",
Driver: "stream_driver",
Mode: config.SourceModePoll,
})
if err == nil {
t.Fatalf("expected mode conflict error, got nil")
}
if !strings.Contains(err.Error(), "mode=poll") {
t.Fatalf("expected poll conflict error, got: %v", err)
}
_, err = r.BuildInput(config.SourceConfig{
Name: "s2",
Driver: "poll_driver",
Mode: config.SourceModeStream,
})
if err == nil {
t.Fatalf("expected mode conflict error, got nil")
}
if !strings.Contains(err.Error(), "mode=stream") {
t.Fatalf("expected stream conflict error, got: %v", err)
}
}
func TestRegistryBuildInputAutoByDriverType(t *testing.T) {
r := NewRegistry()
r.RegisterPoll("poll_driver", func(cfg config.SourceConfig) (PollSource, error) {
return testPollSource{name: cfg.Name}, nil
})
r.RegisterStream("stream_driver", func(cfg config.SourceConfig) (StreamSource, error) {
return testStreamSource{name: cfg.Name}, nil
})
src, err := r.BuildInput(config.SourceConfig{Name: "p", Driver: "poll_driver"})
if err != nil {
t.Fatalf("BuildInput poll auto failed: %v", err)
}
if _, ok := src.(PollSource); !ok {
t.Fatalf("expected PollSource, got %T", src)
}
src, err = r.BuildInput(config.SourceConfig{Name: "s", Driver: "stream_driver"})
if err != nil {
t.Fatalf("BuildInput stream auto failed: %v", err)
}
if _, ok := src.(StreamSource); !ok {
t.Fatalf("expected StreamSource, got %T", src)
}
}


@@ -6,25 +6,50 @@ import (
"gitea.maximumdirect.net/ejr/feedkit/event"
)
-// Source is a configured polling job that emits 0..N events per poll.
+// Input is the common surface shared by all source types.
//
-// Source implementations live in domain modules (weatherfeeder/newsfeeder/...)
+// A source may be polling (PollSource) or event-driven (StreamSource).
+// Both source types emit domain-agnostic event.Event values.
type Input interface {
Name() string
}
// PollSource is a configured polling source that emits 0..N events per poll.
//
// PollSource implementations live in domain modules (weatherfeeder/newsfeeder/...)
// and are registered into a feedkit sources.Registry.
//
-// feedkit infrastructure treats Source as opaque; it just calls Poll()
+// feedkit infrastructure treats PollSource as opaque; it just calls Poll()
// on the configured cadence and publishes the resulting events.
-type Source interface {
+type PollSource interface {
// Name is the configured source name (used for logs and included in emitted events).
Name() string
// Kind is the "primary kind" emitted by this source.
//
// This is mainly useful as a *safety check* (e.g. config says kind=forecast but
// driver emits observation). Some future sources may emit multiple kinds; if/when
// that happens, we can evolve this interface (e.g., make Kind optional, or remove it).
Kind() event.Kind
// Poll fetches from upstream and returns 0..N events.
// Poll fetches/processes one input batch and returns 0..N events.
// A single poll can emit multiple event kinds.
// Implementations should honor ctx.Done() for network calls and other I/O.
Poll(ctx context.Context) ([]event.Event, error)
}
// Source is a compatibility alias for the legacy polling-source name.
type Source = PollSource
// StreamSource is an event-driven source (NATS/RabbitMQ/MQTT/etc).
//
// Run should block, producing events into `out` until ctx is cancelled or a fatal error occurs.
// It MUST NOT close out (the scheduler/daemon owns the bus).
type StreamSource interface {
Input
Run(ctx context.Context, out chan<- event.Event) error
}
// KindSource is an optional interface for sources that advertise one "primary" kind.
// This is legacy-friendly but no longer required.
type KindSource interface {
Kind() event.Kind
}
// KindsSource is an optional interface for sources that advertise multiple kinds.
type KindsSource interface {
Kinds() []event.Kind
}
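
A minimal StreamSource sketch (the tick source is invented for illustration): Run blocks until ctx is cancelled, pushes into out, and never closes the channel.

type tickSource struct{ name string }

func (s tickSource) Name() string { return s.name }

func (s tickSource) Run(ctx context.Context, out chan<- event.Event) error {
	t := time.NewTicker(time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return nil
		case now := <-t.C:
			e := event.Event{
				ID:        "tick-" + now.Format(time.RFC3339Nano),
				Kind:      event.Kind("tick"),
				Source:    s.name,
				EmittedAt: now.UTC(),
				Payload:   map[string]any{"seq": now.UnixNano()},
			}
			select {
			case out <- e: // respect backpressure on the shared bus
			case <-ctx.Done():
				return nil
			}
		}
	}
}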

transport/http.go

@@ -0,0 +1,70 @@
// FILE: ./transport/http.go
package transport
import (
"context"
"fmt"
"io"
"net/http"
"time"
)
// maxResponseBodyBytes is a hard safety limit on HTTP response bodies.
// API responses should be small, so this protects us from accidental
// or malicious large responses.
const maxResponseBodyBytes = 2 << 21 // 4 MiB
// DefaultHTTPTimeout is the standard timeout used by HTTP sources.
// Individual drivers may override this if they have a specific need.
const DefaultHTTPTimeout = 10 * time.Second
// NewHTTPClient returns a simple http.Client configured with a timeout.
// If timeout <= 0, DefaultHTTPTimeout is used.
func NewHTTPClient(timeout time.Duration) *http.Client {
if timeout <= 0 {
timeout = DefaultHTTPTimeout
}
return &http.Client{Timeout: timeout}
}
func FetchBody(ctx context.Context, client *http.Client, url, userAgent, accept string) ([]byte, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
return nil, err
}
if userAgent != "" {
req.Header.Set("User-Agent", userAgent)
}
if accept != "" {
req.Header.Set("Accept", accept)
}
res, err := client.Do(req)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode < 200 || res.StatusCode >= 300 {
return nil, fmt.Errorf("HTTP %s", res.Status)
}
// Read at most maxResponseBodyBytes + 1 so we can detect overflow.
limited := io.LimitReader(res.Body, maxResponseBodyBytes+1)
b, err := io.ReadAll(limited)
if err != nil {
return nil, err
}
if len(b) == 0 {
return nil, fmt.Errorf("empty response body")
}
if len(b) > maxResponseBodyBytes {
return nil, fmt.Errorf("response body too large (>%d bytes)", maxResponseBodyBytes)
}
return b, nil
}
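
A usage sketch (the URL and User-Agent are placeholders): pass 0 for the default timeout and bound the whole fetch with a context deadline.

func fetchExample(ctx context.Context) ([]byte, error) {
	client := transport.NewHTTPClient(0) // 0 falls back to DefaultHTTPTimeout
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	return transport.FetchBody(ctx, client,
		"https://api.example.com/feed.json", "feedkit/0.1", "application/json")
}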