plexus.rubric_memory.models module
- class plexus.rubric_memory.models.ConfidenceInputs(*, score_version_id: Annotated[str, MinLen(min_length=1)], total_evidence_count: Annotated[int, Ge(ge=0)], score_scope_evidence_count: Annotated[int, Ge(ge=0)], prefix_scope_evidence_count: Annotated[int, Ge(ge=0)] = 0, scorecard_scope_evidence_count: Annotated[int, Ge(ge=0)], unknown_scope_evidence_count: Annotated[int, Ge(ge=0)], high_authority_evidence_count: Annotated[int, Ge(ge=0)], low_authority_evidence_count: Annotated[int, Ge(ge=0)], conflicting_or_stale_evidence_count: Annotated[int, Ge(ge=0)], chronological_evidence_count: Annotated[int, Ge(ge=0)], suggested_confidence: ConfidenceLevel)
Bases: BaseModel

Deterministic inputs used by Python to constrain confidence.
Create a new model by parsing and validating input data from keyword arguments.
Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- chronological_evidence_count: int
- conflicting_or_stale_evidence_count: int
- high_authority_evidence_count: int
- low_authority_evidence_count: int
- model_config = {'extra': 'forbid'}
Configuration for the model; a dictionary conforming to pydantic.ConfigDict.
- prefix_scope_evidence_count: int
- score_scope_evidence_count: int
- score_version_id: str
- scorecard_scope_evidence_count: int
- suggested_confidence: ConfidenceLevel
- total_evidence_count: int
- unknown_scope_evidence_count: int
- class plexus.rubric_memory.models.ConfidenceLevel(value)
Bases: str, Enum

- HIGH = 'high'
- LOW = 'low'
- MEDIUM = 'medium'
- class plexus.rubric_memory.models.EvidenceClassification(value)
Bases: str, Enum

- HISTORICAL_CONTEXT = 'historical_context'
- POSSIBLE_STALE_RUBRIC = 'possible_stale_rubric'
- RUBRIC_CONFLICTING = 'rubric_conflicting'
- RUBRIC_GAP = 'rubric_gap'
- RUBRIC_SUPPORTED = 'rubric_supported'
- class plexus.rubric_memory.models.EvidenceSnippet(*, snippet_text: Annotated[str, MinLen(min_length=1)], source_uri: Annotated[str, MinLen(min_length=1)], scope_level: str = 'unknown', source_type: str = 'unknown', authority_level: str = 'unknown', source_timestamp: datetime | None = None, author: str | None = None, retrieval_score: float = 0.0, policy_concepts: list[str] = <factory>, evidence_classification: EvidenceClassification = EvidenceClassification.HISTORICAL_CONTEXT)
Bases: BaseModel

Corpus evidence with provenance retained from Biblicus retrieval.
- author: str | None
- authority_level: str
- evidence_classification: EvidenceClassification
- model_config = {'extra': 'forbid'}
- policy_concepts: list[str]
- retrieval_score: float
- scope_level: str
- snippet_text: str
- source_timestamp: datetime | None
- source_type: str
- source_uri: str
- class plexus.rubric_memory.models.RubricAuthority(*, score_version_id: Annotated[str, MinLen(min_length=1)], rubric_text: Annotated[str, MinLen(min_length=1)], score_code: str = '')
Bases: BaseModel

Storage-boundary projection of ScoreVersion authority into rubric terms.
- model_config = {'extra': 'forbid'}
- rubric_text: str
- score_code: str
- score_version_id: str
- class plexus.rubric_memory.models.RubricEvidencePack(*, score_version_id: Annotated[str, MinLen(min_length=1)], rubric_reading: Annotated[str, MinLen(min_length=1)], evidence_classification: EvidenceClassification, supporting_evidence: list[EvidenceSnippet] = <factory>, conflicting_evidence: list[EvidenceSnippet] = <factory>, history_of_change: list[RubricHistoryEvent] = <factory>, likely_reason_for_disagreement: Annotated[str, MinLen(min_length=1)], confidence: ConfidenceLevel, confidence_inputs: ConfidenceInputs, open_questions: list[str] = <factory>)
Bases: BaseModel

Structured answer for one disputed classification.
- confidence: ConfidenceLevel
- confidence_inputs: ConfidenceInputs
- conflicting_evidence: list[EvidenceSnippet]
- evidence_classification: EvidenceClassification
- history_of_change: list[RubricHistoryEvent]
- likely_reason_for_disagreement: str
- model_config = {'extra': 'forbid'}
- open_questions: list[str]
- rubric_reading: str
- score_version_id: str
- supporting_evidence: list[EvidenceSnippet]
- class plexus.rubric_memory.models.RubricEvidencePackRequest(*, scorecard_identifier: Annotated[str, MinLen(min_length=1)], score_identifier: Annotated[str, MinLen(min_length=1)], score_version_id: Annotated[str, MinLen(min_length=1)], rubric_text: Annotated[str, MinLen(min_length=1)], score_code: str = '', transcript_text: str = '', model_value: str = '', model_explanation: str = '', feedback_value: str = '', feedback_comment: str = '', topic_hint: str | None = None)
Bases: BaseModel

Inputs needed to explain one disputed classification.
- feedback_comment: str
- feedback_value: str
- model_config = {'extra': 'forbid'}
- model_explanation: str
- model_value: str
- rubric_text: str
- score_code: str
- score_identifier: str
- score_version_id: str
- scorecard_identifier: str
- topic_hint: str | None
- transcript_text: str
- class plexus.rubric_memory.models.RubricHistoryEvent(*, source_timestamp: datetime | None = None, source_uri: Annotated[str, MinLen(min_length=1)], scope_level: str = 'unknown', authority_level: str = 'unknown', summary: Annotated[str, MinLen(min_length=1)], evidence_classification: EvidenceClassification = EvidenceClassification.HISTORICAL_CONTEXT)
Bases: BaseModel

Chronological policy-memory event derived from corpus evidence.
- authority_level: str
- evidence_classification: EvidenceClassification
- model_config = {'extra': 'forbid'}
- scope_level: str
- source_timestamp: datetime | None
- source_uri: str
- summary: str