Evaluation

Metrics for evaluating model quality.

evaluate_model(model_path, eval_file, max_samples=None, max_new_tokens=1024, server_url=None, server_model=None, temperature=0.1, request_concurrency=1, examples_output=None)

Evaluate the model on the eval set.

Args:

- model_path: Path to the trained model, when evaluating locally.
- eval_file: Path to eval.jsonl.
- max_samples: Maximum number of samples to evaluate.
- max_new_tokens: Maximum number of tokens to generate.
- server_url: OpenAI-compatible server URL, when evaluating remotely.
- server_model: Remote model ID for the server backend.
- temperature: Generation temperature.
- request_concurrency: Number of concurrent remote requests for server evaluation.
- examples_output: Optional path to save per-sample predictions and metrics.

Returns: Dict with aggregate metrics

compute_span_metrics(predicted, reference)

Compute span-level precision, recall, F1 using set overlap on normalized lines.
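A minimal sketch of the documented behavior: precision, recall, and F1 computed by set overlap on normalized lines. The function name and the normalization (strip whitespace, lowercase, drop blank lines) are assumptions; the library's compute_span_metrics may normalize differently.

```python
def span_metrics(predicted: str, reference: str) -> dict:
    """Set-overlap precision/recall/F1 on normalized lines (sketch)."""
    # Assumed normalization: strip, lowercase, drop blank lines.
    pred = {line.strip().lower() for line in predicted.splitlines() if line.strip()}
    ref = {line.strip().lower() for line in reference.splitlines() if line.strip()}
    overlap = len(pred & ref)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

Because matching is set-based, duplicate lines collapse and line order is ignored.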

compute_fuzzy_span_metrics(predicted, reference, threshold=0.5)

Compute one-to-one fuzzy line overlap metrics at a fixed threshold.
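One way to realize "one-to-one fuzzy line overlap at a fixed threshold" is greedy matching with difflib similarity ratios: each predicted line consumes at most one reference line whose ratio meets the threshold. The greedy strategy and the use of SequenceMatcher are assumptions; the library may pair lines differently.

```python
from difflib import SequenceMatcher

def fuzzy_span_metrics(predicted: str, reference: str, threshold: float = 0.5) -> dict:
    """One-to-one fuzzy line matching at a fixed threshold (greedy sketch)."""
    pred = [l.strip() for l in predicted.splitlines() if l.strip()]
    ref = [l.strip() for l in reference.splitlines() if l.strip()]
    unmatched = list(ref)
    matches = 0
    for p in pred:
        best_i, best_r = -1, threshold
        for i, r in enumerate(unmatched):
            ratio = SequenceMatcher(None, p, r).ratio()
            if ratio >= best_r:
                best_i, best_r = i, ratio
        if best_i >= 0:
            # One-to-one: remove the matched reference line from the pool.
            unmatched.pop(best_i)
            matches += 1
    precision = matches / len(pred) if pred else 0.0
    recall = matches / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```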

compute_partial_overlap(predicted, reference)

Compute partial overlap ratio using character-level intersection.

For each reference line, find the best matching predicted line (substring match) and compute the fraction of reference characters covered.
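The character-level intersection described above can be sketched with difflib's longest-contiguous-match search: for each reference line, take the best predicted line and count how many of the reference's characters that common substring covers. The exact matching details are assumptions.

```python
from difflib import SequenceMatcher

def partial_overlap(predicted: str, reference: str) -> float:
    """Fraction of reference characters covered by best substring matches (sketch)."""
    pred = [l.strip() for l in predicted.splitlines() if l.strip()]
    ref = [l.strip() for l in reference.splitlines() if l.strip()]
    covered = total = 0
    for r in ref:
        total += len(r)
        best = 0
        for p in pred:
            # Longest contiguous common substring between the two lines.
            m = SequenceMatcher(None, p, r).find_longest_match(0, len(p), 0, len(r))
            best = max(best, m.size)
        covered += best
    return covered / total if total else 0.0
```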

compute_empty_accuracy(predicted, reference)

Check whether the model correctly predicts empty vs. non-empty output.

Returns category (true_positive, true_negative, false_positive, false_negative) and whether correct.
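A sketch of the categorization, assuming "positive" means a non-empty prediction (the library may define the polarity the other way):

```python
def empty_accuracy(predicted: str, reference: str) -> tuple:
    """Classify empty/non-empty agreement into a confusion-matrix category (sketch)."""
    pred_empty = not predicted.strip()
    ref_empty = not reference.strip()
    if not pred_empty and not ref_empty:
        category = "true_positive"   # both non-empty
    elif pred_empty and ref_empty:
        category = "true_negative"   # both empty
    elif not pred_empty and ref_empty:
        category = "false_positive"  # predicted content, reference empty
    else:
        category = "false_negative"  # predicted empty, reference has content
    return category, pred_empty == ref_empty
```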

compute_rouge_l(predicted, reference)

Compute ROUGE-L F1 score.
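ROUGE-L F1 is the harmonic mean of LCS-based precision and recall over token sequences; the standard definition can be sketched as below. Whitespace tokenization is an assumption; the library may tokenize differently.

```python
def rouge_l_f1(predicted: str, reference: str) -> float:
    """ROUGE-L F1 over whitespace tokens via longest common subsequence (sketch)."""
    p, r = predicted.split(), reference.split()
    if not p or not r:
        return 0.0
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(p) + 1)]
    for i, tp in enumerate(p, 1):
        for j, tr in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if tp == tr else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    prec, rec = lcs / len(p), lcs / len(r)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```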

compute_compression_ratio(original, filtered)

Compute compression ratio (1 - filtered/original).
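The formula above is straightforward; a sketch measuring size in characters (the unit is an assumption, it could equally be lines or tokens):

```python
def compression_ratio(original: str, filtered: str) -> float:
    """1 - filtered/original, sized in characters here (unit assumed)."""
    if not original:
        return 0.0  # avoid division by zero on empty input
    return 1 - len(filtered) / len(original)
```

A ratio of 0.75 means the filtered output kept a quarter of the original.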