
Coordinator

Coordinator protocol and implementations for learning decision logic.

CoordinatorProtocol

CoordinatorProtocol

Bases: ABC

Abstract interface for learning coordination.

Implementations can be simple (heuristics) or agentic (LLM-powered). RuleChef uses this interface, making coordinators swappable.

should_trigger_learning(buffer, current_rules) abstractmethod

Decide if learning should be triggered now.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| buffer | ExampleBuffer | Current example buffer | required |
| current_rules | list[Rule] \| None | Currently learned rules (None if first learn) | required |

Returns:

| Type | Description |
| --- | --- |
| CoordinationDecision | CoordinationDecision with should_learn, strategy, reasoning |
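An implementation of this method might look like the following minimal sketch. The `ExampleBuffer` shape (`examples`, `corrections` lists), the thresholds, and the strategy names are stand-in assumptions for illustration; in real use you would import the library's own types rather than redefine them.

```python
from dataclasses import dataclass, field

# Stand-in types for illustration only; the real ExampleBuffer, Rule,
# and CoordinationDecision are defined by the library itself.
@dataclass
class CoordinationDecision:
    should_learn: bool
    strategy: str
    reasoning: str

@dataclass
class ExampleBuffer:
    examples: list = field(default_factory=list)
    corrections: list = field(default_factory=list)

def should_trigger_learning(buffer, current_rules):
    """Trigger a first learn after 50 examples; afterwards, re-learn
    early whenever 10 or more corrections have accumulated."""
    if current_rules is None:
        if len(buffer.examples) >= 50:
            return CoordinationDecision(True, "balanced", "first learn: buffer full")
        return CoordinationDecision(False, "balanced", "waiting for more examples")
    if len(buffer.corrections) >= 10:
        return CoordinationDecision(True, "corrections_first", "correction backlog")
    return CoordinationDecision(False, "balanced", "no trigger condition met")
```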

analyze_buffer(buffer) abstractmethod

Analyze current buffer state.

Returns:

| Type | Description |
| --- | --- |
| dict[str, Any] | Dict with buffer statistics and insights |
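A heuristic implementation might summarize the buffer like this. The `Example`/`ExampleBuffer` shapes (a `label` field and an `is_correction` flag) are assumptions made for the sketch, not the library's actual structures.

```python
from collections import Counter
from dataclasses import dataclass, field

# Stand-in buffer/example shapes, assumed for illustration only.
@dataclass
class Example:
    text: str
    label: str
    is_correction: bool = False

@dataclass
class ExampleBuffer:
    examples: list = field(default_factory=list)

def analyze_buffer(buffer):
    """Return basic statistics: size, label distribution, correction count."""
    labels = Counter(ex.label for ex in buffer.examples)
    corrections = sum(1 for ex in buffer.examples if ex.is_correction)
    return {
        "size": len(buffer.examples),
        "label_distribution": dict(labels),
        "correction_count": corrections,
    }
```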

on_learning_complete(old_rules, new_rules, metrics) abstractmethod

Callback after learning completes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| old_rules | list[Rule] \| None | Rules before learning (None if first learn) | required |
| new_rules | list[Rule] | Newly learned rules | required |
| metrics | dict[str, Any] | Learning metrics (accuracy, etc.) | required |

guide_refinement(eval_result, iteration, max_iterations)

Analyze per-class metrics and return (guidance_text, should_continue).

Called after each refinement iteration. The guidance string is injected into the patch prompt. should_continue=False stops the loop early.

Default: no guidance, always continue.
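A minimal heuristic override of this hook could look as follows. The `EvalResult` shape (`per_class` mapping class name to F1) and the 0.8 threshold are stand-in assumptions for the sketch.

```python
from dataclasses import dataclass, field

# Stand-in eval result; the real EvalResult comes from the library.
@dataclass
class EvalResult:
    per_class: dict = field(default_factory=dict)  # class name -> F1 score

def guide_refinement(eval_result, iteration, max_iterations):
    """Point the patch prompt at the weakest classes; stop early once
    every class clears an F1 threshold."""
    weak = [c for c, f1 in eval_result.per_class.items() if f1 < 0.8]
    if not weak:
        return "", False  # all classes healthy: stop the loop early
    guidance = "Focus on improving rules for: " + ", ".join(sorted(weak))
    return guidance, iteration < max_iterations
```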

audit_rules(rules, rule_metrics)

Audit rules for redundancy, dead rules, and conflicts.

Called after learning completes when pruning is enabled. Returns an AuditResult with actions (remove/merge). The engine applies actions and reverts if performance drops.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| rules | list[Rule] | Current learned rules | required |
| rule_metrics | list[Any] | Per-rule RuleMetrics from evaluate_rules_individually | required |

Returns:

| Type | Description |
| --- | --- |
| AuditResult | AuditResult with actions to take |
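A non-LLM audit might flag dead and noisy rules like the sketch below. The `RuleMetrics` fields (`rule_id`, `hits`, `precision`), the 0.2 precision cutoff, and the local `AuditAction`/`AuditResult` definitions are stand-ins mirroring the documented signatures, not the library's actual code.

```python
from dataclasses import dataclass, field

# Stand-in shapes for illustration; the real Rule, RuleMetrics,
# AuditAction, and AuditResult are defined by the library.
@dataclass
class RuleMetrics:
    rule_id: str
    hits: int
    precision: float

@dataclass
class AuditAction:
    action: str          # "remove" or "merge"
    rule_ids: list
    reason: str

@dataclass
class AuditResult:
    actions: list = field(default_factory=list)
    analysis: str = ""

def audit_rules(rules, rule_metrics):
    """Flag dead rules (zero hits) and pure-noise rules (very low precision)."""
    actions = []
    for m in rule_metrics:
        if m.hits == 0:
            actions.append(AuditAction("remove", [m.rule_id], "dead rule: zero hits"))
        elif m.precision < 0.2:
            actions.append(AuditAction("remove", [m.rule_id], "noise: precision < 0.2"))
    return AuditResult(actions, f"{len(actions)} of {len(rule_metrics)} rules flagged")
```

The engine can then apply these actions and, per the description above, revert if performance drops.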

SimpleCoordinator

SimpleCoordinator(trigger_threshold=50, correction_threshold=10, verbose=True)

Bases: CoordinatorProtocol

Deterministic heuristic-based coordinator.

Uses simple rules to make decisions:

- First learn: trigger after N examples
- Subsequent learns: trigger after N examples OR M corrections
- Strategy selection: corrections_first if there are corrections, else balanced/diversity

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| trigger_threshold | int | Number of examples needed to trigger learning | 50 |
| correction_threshold | int | Number of corrections to trigger early learning | 10 |
| verbose | bool | Print coordination decisions | True |

should_trigger_learning(buffer, current_rules)

Simple heuristic decision

analyze_buffer(buffer)

Basic buffer statistics

on_learning_complete(old_rules, new_rules, metrics)

Log learning results. metrics is an EvalResult or None.

CoordinationDecision

CoordinationDecision(should_learn, strategy, reasoning, max_iterations=3, metadata=None) dataclass

Result of coordinator analysis: explains what, why, and how to learn.
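The fields below mirror the documented constructor signature; this is a stand-in definition with an illustrative instance, not the library's own class.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Stand-in mirroring the documented signature; import the real
# CoordinationDecision from the library in actual use.
@dataclass
class CoordinationDecision:
    should_learn: bool
    strategy: str
    reasoning: str
    max_iterations: int = 3
    metadata: Optional[dict[str, Any]] = None

decision = CoordinationDecision(
    should_learn=True,
    strategy="corrections_first",
    reasoning="correction backlog exceeds threshold; re-learn with correction focus",
)
```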

AgenticCoordinator

AgenticCoordinator(llm_client, model='gpt-4o-mini', min_batch_size=5, min_correction_batch=1, verbose=True, prune_after_learn=False, training_logger=None)

Bases: CoordinatorProtocol

LLM-based intelligent coordinator.

Uses an LLM to make adaptive decisions:

- Analyze buffer patterns to detect when learning would be beneficial
- Choose the optimal sampling strategy based on data characteristics
- Decide the iteration count based on learning progress
- Provide detailed reasoning for decisions

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| llm_client | Any | OpenAI client | required |
| model | str | Model to use for coordination | 'gpt-4o-mini' |
| min_batch_size | int | Minimum new examples before asking the LLM | 5 |
| min_correction_batch | int | Minimum corrections before asking the LLM | 1 |
| verbose | bool | Print coordination decisions | True |
| prune_after_learn | bool | If True, audit and prune/merge rules after learning | False |
| training_logger | | Optional TrainingDataLogger for capturing LLM calls | None |

should_trigger_learning(buffer, current_rules)

Agentic decision based on buffer content

analyze_buffer(buffer)

Analyze buffer stats

guide_refinement(eval_result, iteration, max_iterations)

LLM-powered refinement guidance based on per-class metrics.

on_learning_complete(old_rules, new_rules, metrics)

Log learning results. metrics is an EvalResult or None.

audit_rules(rules, rule_metrics)

LLM-powered rule audit: merge redundant rules, remove pure noise.

AuditResult

AuditResult(actions=list(), analysis='') dataclass

Result of a rule audit.

AuditAction

AuditAction(action, rule_ids, reason, merged_pattern=None, merged_name=None) dataclass

A single audit action: remove or merge.
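The fields below mirror the documented constructor signature; the class definition and the example merge action (rule IDs, pattern, name) are illustrative stand-ins, not values from the library.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in mirroring the documented signature; the real AuditAction
# is defined by the library.
@dataclass
class AuditAction:
    action: str                      # "remove" or "merge"
    rule_ids: list
    reason: str
    merged_pattern: Optional[str] = None
    merged_name: Optional[str] = None

# A merge action: collapse two near-duplicate rules into one pattern.
merge = AuditAction(
    action="merge",
    rule_ids=["rule_3", "rule_7"],
    reason="both match currency amounts; patterns overlap",
    merged_pattern=r"\$\d+(\.\d{2})?",
    merged_name="currency_amount",
)
```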