# Inference
The main entry point for hallucination detection.
## HallucinationDetector(method='transformer', **kwargs)
Facade class that delegates to a concrete detector chosen by method.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `method` | `str` | Detection method to use. | `'transformer'` |
| `kwargs` | | Passed straight through to the chosen detector's constructor. | `{}` |
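The delegation a facade like this performs can be sketched as follows. `TransformerDetector` and the `_DETECTORS` registry here are illustrative stand-ins for this pattern, not the library's actual classes:

```python
class TransformerDetector:
    """Illustrative concrete detector; the real library's detectors differ."""

    def __init__(self, **kwargs):
        # The facade forwards its kwargs untouched to this constructor.
        self.config = kwargs

    def predict_prompt(self, prompt, answer, output_format="tokens"):
        # A real detector returns token- or span-level predictions.
        return []


# Hypothetical registry mapping method names to detector classes.
_DETECTORS = {"transformer": TransformerDetector}


class HallucinationDetector:
    """Facade: picks a concrete detector by `method`, forwards everything else."""

    def __init__(self, method="transformer", **kwargs):
        if method not in _DETECTORS:
            raise ValueError(f"Unknown method: {method!r}")
        self._detector = _DETECTORS[method](**kwargs)

    def predict_prompt(self, prompt, answer, output_format="tokens"):
        # Delegate straight to the chosen detector.
        return self._detector.predict_prompt(prompt, answer, output_format)
```

Because `kwargs` passes straight through, any option the chosen detector's constructor accepts can be given to the facade directly.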
### predict(context, answer, question=None, output_format='tokens')
Predict hallucination tokens or spans given passages and an answer.
This is the call most RAG pipelines use.
See the concrete detector docs for the structure of the returned list.
### predict_prompt(prompt, answer, output_format='tokens')
Predict hallucinations when you already have a single full prompt string.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `str` | The prompt string. | *required* |
| `answer` | `str` | The answer string. | *required* |
| `output_format` | `str` | `"tokens"` to return token-level predictions, or `"spans"` to return grouped spans. | `'tokens'` |
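One way to picture the difference between the two output formats: span output groups runs of consecutive flagged tokens into contiguous character ranges. The sketch below assumes a token dict with `start`/`end` character offsets and an `is_hallucinated` flag; this schema is an assumption for illustration, not the library's documented return structure.

```python
def group_tokens_into_spans(tokens):
    """Merge consecutive hallucinated tokens into character-range spans.

    Assumes each token is a dict with offsets `start`/`end` and a boolean
    `is_hallucinated` flag (illustrative schema, not the library's).
    """
    spans = []
    current = None
    for tok in tokens:
        if tok["is_hallucinated"]:
            if current is not None and tok["start"] <= current["end"] + 1:
                # Token abuts the open span: extend it.
                current["end"] = tok["end"]
            else:
                # Start a new span.
                current = {"start": tok["start"], "end": tok["end"]}
                spans.append(current)
        else:
            # A clean token closes any open span.
            current = None
    return spans
```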
### predict_prompt_batch(prompts, answers, output_format='tokens')
Batch version of `predict_prompt`. The lengths of `prompts` and `answers` must match.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompts` | `list[str]` | List of prompt strings. | *required* |
| `answers` | `list[str]` | List of answer strings. | *required* |
| `output_format` | `str` | `"tokens"` to return token-level predictions, or `"spans"` to return grouped spans. | `'tokens'` |
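Behaviorally, the batch call is equivalent to checking the two lengths and mapping the single-prompt call over the zipped pairs. A minimal sketch of that contract, taking any `predict_prompt`-style callable (the real implementation may batch at the model level for throughput):

```python
def predict_prompt_batch(predict_prompt, prompts, answers, output_format="tokens"):
    """Illustrative batch wrapper: validate lengths, then map pairwise.

    `predict_prompt` is any callable with the single-prompt signature.
    """
    if len(prompts) != len(answers):
        raise ValueError(
            f"prompts ({len(prompts)}) and answers ({len(answers)}) must match"
        )
    # One result list per (prompt, answer) pair, in input order.
    return [
        predict_prompt(p, a, output_format=output_format)
        for p, a in zip(prompts, answers)
    ]
```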