plexus.rubric_memory package
- class plexus.rubric_memory.BiblicusRubricEvidenceRetriever(corpus_root: str | Path | None = None, *, corpus_sources: Sequence[LocalRubricMemorySource | S3RubricMemorySource] | None = None, retriever_id: str = 'scan', max_total_items: int = 16, maximum_total_characters: int = 60000, source_window_characters: int = 6000, query_planner: RubricMemoryQueryPlanner | None = None, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None)
Bases: object
Retrieve rubric-memory evidence from one prepared Biblicus corpus.
- __init__(corpus_root: str | Path | None = None, *, corpus_sources: Sequence[LocalRubricMemorySource | S3RubricMemorySource] | None = None, retriever_id: str = 'scan', max_total_items: int = 16, maximum_total_characters: int = 60000, source_window_characters: int = 6000, query_planner: RubricMemoryQueryPlanner | None = None, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None)
- classmethod from_local_score(*, scorecard_name: str, score_name: str, retriever_id: str = 'scan', max_total_items: int = 16, maximum_total_characters: int = 60000, source_window_characters: int = 6000, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None) BiblicusRubricEvidenceRetriever
- classmethod from_score(*, scorecard_name: str, score_name: str, retriever_id: str = 'scan', max_total_items: int = 16, maximum_total_characters: int = 60000, source_window_characters: int = 6000, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None, s3_client: Any | None = None) BiblicusRubricEvidenceRetriever
- async retrieve(request: RubricEvidencePackRequest) Sequence[EvidenceSnippet]
- class plexus.rubric_memory.ConfidenceInputs(*, score_version_id: Annotated[str, MinLen(min_length=1)], total_evidence_count: Annotated[int, Ge(ge=0)], score_scope_evidence_count: Annotated[int, Ge(ge=0)], prefix_scope_evidence_count: Annotated[int, Ge(ge=0)] = 0, scorecard_scope_evidence_count: Annotated[int, Ge(ge=0)], unknown_scope_evidence_count: Annotated[int, Ge(ge=0)], high_authority_evidence_count: Annotated[int, Ge(ge=0)], low_authority_evidence_count: Annotated[int, Ge(ge=0)], conflicting_or_stale_evidence_count: Annotated[int, Ge(ge=0)], chronological_evidence_count: Annotated[int, Ge(ge=0)], suggested_confidence: ConfidenceLevel)
Bases: BaseModel
Deterministic inputs used by Python to constrain confidence.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- chronological_evidence_count: int
- conflicting_or_stale_evidence_count: int
- high_authority_evidence_count: int
- low_authority_evidence_count: int
- model_config = {'extra': 'forbid'}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- prefix_scope_evidence_count: int
- score_scope_evidence_count: int
- score_version_id: str
- scorecard_scope_evidence_count: int
- suggested_confidence: ConfidenceLevel
- total_evidence_count: int
- unknown_scope_evidence_count: int
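The docstring above says Python constrains confidence deterministically from these counts. As a purely illustrative sketch (the real policy is internal to plexus.rubric_memory and may differ), a rule over a few of the fields above could look like:

```python
# Illustrative only: a hypothetical deterministic confidence rule built from
# a subset of the ConfidenceInputs field names documented above.
def suggest_confidence(
    total_evidence_count: int,
    score_scope_evidence_count: int,
    high_authority_evidence_count: int,
    conflicting_or_stale_evidence_count: int,
) -> str:
    if total_evidence_count == 0:
        return "low"
    if conflicting_or_stale_evidence_count > 0:
        return "low"
    if score_scope_evidence_count >= 2 and high_authority_evidence_count >= 1:
        return "high"
    return "medium"

print(suggest_confidence(5, 3, 2, 0))  # -> high
```

The point is the shape, not the thresholds: confidence is a pure function of the counts, so the synthesizer's suggested_confidence can be checked or overridden deterministically.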
- class plexus.rubric_memory.ConfidenceLevel(value)
Bases: str, Enum
- HIGH = 'high'
- LOW = 'low'
- MEDIUM = 'medium'
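Because ConfidenceLevel mixes str into Enum, its members compare equal to their plain string values, which keeps them JSON-friendly. A standalone mirror of the values documented above:

```python
from enum import Enum

# Standalone mirror of the ConfidenceLevel values above, so the str + Enum
# behaviour can be shown without importing plexus.
class ConfidenceLevel(str, Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

# str-mixin members compare equal to their raw values...
assert ConfidenceLevel.HIGH == "high"
# ...and raw values round-trip back to members.
assert ConfidenceLevel("medium") is ConfidenceLevel.MEDIUM
```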
- class plexus.rubric_memory.EvidenceClassification(value)
Bases: str, Enum
- HISTORICAL_CONTEXT = 'historical_context'
- POSSIBLE_STALE_RUBRIC = 'possible_stale_rubric'
- RUBRIC_CONFLICTING = 'rubric_conflicting'
- RUBRIC_GAP = 'rubric_gap'
- RUBRIC_SUPPORTED = 'rubric_supported'
- class plexus.rubric_memory.EvidenceSnippet(*, snippet_text: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], source_uri: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], scope_level: str = 'unknown', source_type: str = 'unknown', authority_level: str = 'unknown', source_timestamp: ~datetime.datetime | None = None, author: str | None = None, retrieval_score: float = 0.0, policy_concepts: list[str] = <factory>, evidence_classification: ~plexus.rubric_memory.models.EvidenceClassification = EvidenceClassification.HISTORICAL_CONTEXT)
Bases: BaseModel
Corpus evidence with provenance retained from Biblicus retrieval.
- author: str | None
- authority_level: str
- evidence_classification: EvidenceClassification
- model_config = {'extra': 'forbid'}
- policy_concepts: list[str]
- retrieval_score: float
- scope_level: str
- snippet_text: str
- source_timestamp: datetime | None
- source_type: str
- source_uri: str
- class plexus.rubric_memory.LocalRubricMemoryCorpusPaths(scorecard_root: Path, scorecard_knowledge_base: Path, prefix_knowledge_bases: list[Path], score_knowledge_base: Path)
Bases: object
Convention-derived local rubric-memory paths for one score.
- __init__(scorecard_root: Path, scorecard_knowledge_base: Path, prefix_knowledge_bases: list[Path], score_knowledge_base: Path) None
- prefix_knowledge_bases: list[Path]
- score_knowledge_base: Path
- scorecard_knowledge_base: Path
- scorecard_root: Path
- property sources: list[LocalRubricMemorySource]
- class plexus.rubric_memory.LocalRubricMemoryCorpusResolver
Bases: object
Resolve rubric-memory folders using the existing pulled-score convention.
- resolve(*, scorecard_name: str, score_name: str) LocalRubricMemoryCorpusPaths
- class plexus.rubric_memory.LocalRubricMemorySource(root: Path, scope_level: str)
Bases: object
A local rubric-memory corpus folder with its score hierarchy scope.
- __init__(root: Path, scope_level: str) None
- root: Path
- scope_level: str
- class plexus.rubric_memory.PreparedRubricMemoryCorpus(corpus_root: Path, prepared_root: Path, manifest_path: Path, fingerprint: str, retriever_id: str, status: str, source_file_count: int, sources: list[dict[str, Any]])
Bases: object
A prepared Biblicus corpus built from rubric-memory sources.
- __init__(corpus_root: Path, prepared_root: Path, manifest_path: Path, fingerprint: str, retriever_id: str, status: str, source_file_count: int, sources: list[dict[str, Any]]) None
- corpus_root: Path
- fingerprint: str
- manifest_path: Path
- prepared_root: Path
- retriever_id: str
- source_file_count: int
- sources: list[dict[str, Any]]
- status: str
- class plexus.rubric_memory.RubricAuthority(*, score_version_id: Annotated[str, MinLen(min_length=1)], rubric_text: Annotated[str, MinLen(min_length=1)], score_code: str = '')
Bases: BaseModel
Storage-boundary projection of ScoreVersion authority into rubric terms.
- model_config = {'extra': 'forbid'}
- rubric_text: str
- score_code: str
- score_version_id: str
- exception plexus.rubric_memory.RubricAuthorityError
Bases: RuntimeError
Raised when official ScoreVersion rubric authority cannot be resolved.
- class plexus.rubric_memory.RubricAuthorityResolver(api_client: Any)
Bases: object
Resolve the canonical rubric and score code from the champion ScoreVersion.
Plexus storage still names the rubric field guidelines. This class is the storage-adapter boundary: callers receive rubric_text and do not need to know about the legacy field name.
- __init__(api_client: Any)
- async resolve(score_id: str) RubricAuthority
- async resolve_score_version(score_version_id: str) RubricAuthority
- class plexus.rubric_memory.RubricEvidencePack(*, score_version_id: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], rubric_reading: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], evidence_classification: ~plexus.rubric_memory.models.EvidenceClassification, supporting_evidence: list[~plexus.rubric_memory.models.EvidenceSnippet] = <factory>, conflicting_evidence: list[~plexus.rubric_memory.models.EvidenceSnippet] = <factory>, history_of_change: list[~plexus.rubric_memory.models.RubricHistoryEvent] = <factory>, likely_reason_for_disagreement: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], confidence: ~plexus.rubric_memory.models.ConfidenceLevel, confidence_inputs: ~plexus.rubric_memory.models.ConfidenceInputs, open_questions: list[str] = <factory>)
Bases: BaseModel
Structured answer for one disputed classification.
- confidence: ConfidenceLevel
- confidence_inputs: ConfidenceInputs
- conflicting_evidence: list[EvidenceSnippet]
- evidence_classification: EvidenceClassification
- history_of_change: list[RubricHistoryEvent]
- likely_reason_for_disagreement: str
- model_config = {'extra': 'forbid'}
- open_questions: list[str]
- rubric_reading: str
- score_version_id: str
- supporting_evidence: list[EvidenceSnippet]
- class plexus.rubric_memory.RubricEvidencePackContextFormatter(*, max_snippet_characters: int = 1400)
Bases: object
Render a rubric evidence pack as deterministic sub-agent context.
- __init__(*, max_snippet_characters: int = 1400)
- format(pack: RubricEvidencePack) str
- class plexus.rubric_memory.RubricEvidencePackRequest(*, scorecard_identifier: Annotated[str, MinLen(min_length=1)], score_identifier: Annotated[str, MinLen(min_length=1)], score_version_id: Annotated[str, MinLen(min_length=1)], rubric_text: Annotated[str, MinLen(min_length=1)], score_code: str = '', transcript_text: str = '', model_value: str = '', model_explanation: str = '', feedback_value: str = '', feedback_comment: str = '', topic_hint: str | None = None)
Bases: BaseModel
Inputs needed to explain one disputed classification.
- feedback_comment: str
- feedback_value: str
- model_config = {'extra': 'forbid'}
- model_explanation: str
- model_value: str
- rubric_text: str
- score_code: str
- score_identifier: str
- score_version_id: str
- scorecard_identifier: str
- topic_hint: str | None
- transcript_text: str
- class plexus.rubric_memory.RubricEvidencePackService(*, retriever: RubricEvidenceRetriever, synthesizer: RubricEvidenceSynthesizer)
Bases: object
Generate rubric evidence packs from official rubric authority and corpus evidence.
Python owns retrieval, provenance, dedupe, chronology, confidence policy, and final response shaping. The synthesizer is responsible only for interpreting the shaped evidence into prose and structured labels.
- __init__(*, retriever: RubricEvidenceRetriever, synthesizer: RubricEvidenceSynthesizer)
- async generate(request: RubricEvidencePackRequest) RubricEvidencePack
- class plexus.rubric_memory.RubricEvidenceRetriever(*args, **kwargs)
Bases: Protocol
- __init__(*args, **kwargs)
- async retrieve(request: RubricEvidencePackRequest) Sequence[EvidenceSnippet]
Return candidate evidence snippets for the disputed item.
- class plexus.rubric_memory.RubricEvidenceSynthesizer(*args, **kwargs)
Bases: Protocol
- __init__(*args, **kwargs)
- async synthesize(*, request: RubricEvidencePackRequest, evidence: Sequence[EvidenceSnippet], history: Sequence[RubricHistoryEvent], confidence_inputs: ConfidenceInputs) RubricEvidencePack
Interpret shaped evidence and return a structured pack.
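RubricEvidenceRetriever and RubricEvidenceSynthesizer are typing.Protocol interfaces, so any object with structurally matching async methods can be plugged into RubricEvidencePackService without subclassing. A minimal standalone sketch of that composition pattern, using toy stand-in types rather than the real plexus models:

```python
import asyncio
from typing import Protocol, Sequence

# Toy stand-in for RubricEvidencePackRequest, just to show the shape.
class Request:
    def __init__(self, rubric_text: str) -> None:
        self.rubric_text = rubric_text

# Structural interface, analogous to RubricEvidenceRetriever.
class Retriever(Protocol):
    async def retrieve(self, request: Request) -> Sequence[str]: ...

class StaticRetriever:
    """Satisfies Retriever structurally, without inheriting from it."""
    async def retrieve(self, request: Request) -> Sequence[str]:
        return [f"evidence for: {request.rubric_text}"]

async def generate(retriever: Retriever, request: Request) -> list[str]:
    # The real service also dedupes, orders chronologically, and computes
    # confidence inputs before handing the evidence to a synthesizer.
    return list(await retriever.retrieve(request))

snippets = asyncio.run(generate(StaticRetriever(), Request("be polite")))
print(snippets)  # -> ['evidence for: be polite']
```

This is why tests can inject fake retrievers or synthesizers: the service only depends on the method signatures, not on any concrete class.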
- class plexus.rubric_memory.RubricHistoryEvent(*, source_timestamp: datetime | None = None, source_uri: Annotated[str, MinLen(min_length=1)], scope_level: str = 'unknown', authority_level: str = 'unknown', summary: Annotated[str, MinLen(min_length=1)], evidence_classification: EvidenceClassification = EvidenceClassification.HISTORICAL_CONTEXT)
Bases: BaseModel
Chronological policy-memory event derived from corpus evidence.
- authority_level: str
- evidence_classification: EvidenceClassification
- model_config = {'extra': 'forbid'}
- scope_level: str
- source_timestamp: datetime | None
- source_uri: str
- summary: str
- class plexus.rubric_memory.RubricMemoryCitation(*, id: Annotated[str, MinLen(min_length=1)], kind: Literal['official_rubric', 'corpus_evidence'], excerpt: Annotated[str, MinLen(min_length=1)], source_uri: str | None = None, scope_level: str = 'unknown', source_timestamp: datetime | None = None, authority_level: str = 'unknown', score_version_id: Annotated[str, MinLen(min_length=1)], evidence_classification: str = 'unknown')
Bases: BaseModel
Stable citation handle for official rubric authority or corpus evidence.
- authority_level: str
- evidence_classification: str
- excerpt: str
- id: str
- kind: Literal['official_rubric', 'corpus_evidence']
- model_config = {'extra': 'forbid'}
- scope_level: str
- score_version_id: str
- source_timestamp: datetime | None
- source_uri: str | None
- class plexus.rubric_memory.RubricMemoryCitationContext(*, markdown_context: str, citation_index: list[RubricMemoryCitation] = <factory>, machine_context: dict[str, ~typing.Any] = <factory>, diagnostics: list[dict[str, ~typing.Any]] = <factory>)
Bases: BaseModel
Human-readable rubric-memory context plus machine-readable citations.
- citation_ids() set[str]
- citation_index: list[RubricMemoryCitation]
- diagnostics: list[dict[str, Any]]
- machine_context: dict[str, Any]
- markdown_context: str
- model_config = {'extra': 'forbid'}
- class plexus.rubric_memory.RubricMemoryCitationFormatter(*, max_excerpt_characters: int = 900)
Bases: object
Convert a RubricEvidencePack into deterministic citation context.
- __init__(*, max_excerpt_characters: int = 900)
- from_pack(pack: RubricEvidencePack) RubricMemoryCitationContext
- from_recent_evidence(*, request: RubricEvidencePackRequest, evidence: Sequence[EvidenceSnippet], metadata: dict[str, Any]) RubricMemoryCitationContext
Format recency-biased retrieved evidence for optimizer briefings.
- from_retrieved_evidence(*, request: RubricEvidencePackRequest, evidence: Sequence[EvidenceSnippet]) RubricMemoryCitationContext
Format retrieved evidence as LLM input context without synthesis.
- class plexus.rubric_memory.RubricMemoryCitationValidation(*, supplied_ids: list[str] = <factory>, valid_ids: list[str] = <factory>, missing_ids: list[str] = <factory>, unused_ids: list[str] = <factory>, omitted_citations: bool = False, warnings: list[str] = <factory>)
Bases: BaseModel
Non-blocking diagnostics for citation use by an LLM consumer.
- missing_ids: list[str]
- model_config = {'extra': 'forbid'}
- omitted_citations: bool
- supplied_ids: list[str]
- unused_ids: list[str]
- valid_ids: list[str]
- warnings: list[str]
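The set arithmetic behind fields like missing_ids and unused_ids can be sketched with plain Python sets. This illustrates the field semantics only; it is not the library's implementation of validate_rubric_memory_citations():

```python
# Hypothetical sketch of the diagnostics captured by
# RubricMemoryCitationValidation: compare the citation ids an LLM actually
# used against the ids available in the citation index.
def diagnose(supplied_ids: list[str], index_ids: set[str]) -> dict[str, object]:
    supplied = set(supplied_ids)
    valid = sorted(supplied & index_ids)
    missing = sorted(supplied - index_ids)   # cited but not in the index
    unused = sorted(index_ids - supplied)    # available but never cited
    warnings = []
    if missing:
        warnings.append(f"unknown citation ids: {missing}")
    return {
        "supplied_ids": sorted(supplied),
        "valid_ids": valid,
        "missing_ids": missing,
        "unused_ids": unused,
        "omitted_citations": not supplied,
        "warnings": warnings,
    }

print(diagnose(["c1", "c9"], {"c1", "c2"})["missing_ids"])  # -> ['c9']
```

The diagnostics are non-blocking by design: a missing id produces a warning entry rather than an exception, so a sloppy LLM citation never fails the pipeline.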
- class plexus.rubric_memory.RubricMemoryContextProvider(*, api_client: Any, citation_formatter: RubricMemoryCitationFormatter | None = None)
Bases: object
Shared service that produces citation-ready rubric-memory context.
- __init__(*, api_client: Any, citation_formatter: RubricMemoryCitationFormatter | None = None)
- async generate_for_request(request: RubricEvidencePackRequest) RubricMemoryCitationContext
- async generate_for_score_item(*, scorecard_identifier: str, score_identifier: str, score_id: str, score_version_id: str | None = None, transcript_text: str = '', model_value: str = '', model_explanation: str = '', feedback_value: str = '', feedback_comment: str = '', topic_hint: str | None = None) RubricMemoryCitationContext
- local_corpus_status(*, scorecard_identifier: str, score_identifier: str) dict[str, Any]
- async retrieve_for_score_item(*, scorecard_identifier: str, score_identifier: str, score_id: str, score_version_id: str | None = None, transcript_text: str = '', model_value: str = '', model_explanation: str = '', feedback_value: str = '', feedback_comment: str = '', topic_hint: str | None = None) RubricMemoryCitationContext
- async retrieve_for_score_items(*, scorecard_identifier: str, score_identifier: str, score_id: str, item_contexts: Sequence[dict[str, str]], score_version_id: str | None = None, topic_hint: str | None = None) dict[str, RubricMemoryCitationContext]
Retrieve citation context for existing LLM consumers without synthesis.
- class plexus.rubric_memory.RubricMemoryGatedSMEQuestion(*, id: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], original_text: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], final_text: str = '', action: ~plexus.rubric_memory.sme_question_gate.SMEQuestionGateAction, answer_status: ~plexus.rubric_memory.sme_question_gate.SMEQuestionAnswerStatus, rationale: str = '', citation_ids: list[str] = <factory>, citation_validation: ~plexus.rubric_memory.citations.RubricMemoryCitationValidation | None = None)
Bases: BaseModel
One SME agenda question after gating, with its action, answer status, and supporting citations.
- action: SMEQuestionGateAction
- answer_status: SMEQuestionAnswerStatus
- citation_ids: list[str]
- citation_validation: RubricMemoryCitationValidation | None
- final_text: str
- id: str
- model_config = {'extra': 'forbid'}
- original_text: str
- rationale: str
- class plexus.rubric_memory.RubricMemoryPreparedCorpusManager(cache_root: str | Path | None = None)
Bases: object
Prepare rubric-memory sources into a reusable Biblicus corpus.
- SIDECAR_SCHEMA_VERSION = 'rubric-memory-sidecar-v1'
- __init__(cache_root: str | Path | None = None)
- infer_source_timestamp(relative_path: Path | PurePosixPath) datetime | None
- prepare(*, corpus_sources: Sequence[LocalRubricMemorySource | S3RubricMemorySource], retriever_id: str = 'scan', force: bool = False) PreparedRubricMemoryCorpus
- class plexus.rubric_memory.RubricMemoryQueryPlan(expanded_query_text: str, retrieval_phrases: list[str], important_tokens: list[str])
Bases: object
- __init__(expanded_query_text: str, retrieval_phrases: list[str], important_tokens: list[str]) None
- expanded_query_text: str
- important_tokens: list[str]
- retrieval_phrases: list[str]
- class plexus.rubric_memory.RubricMemoryQueryPlanner(*, max_phrases: int = 80)
Bases: object
Build runtime retrieval hints from existing rubric/item context.
- __init__(*, max_phrases: int = 80)
- plan(request: RubricEvidencePackRequest) RubricMemoryQueryPlan
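RubricMemoryQueryPlanner turns rubric and item context into an expanded query plus phrase and token hints. A hypothetical stdlib-only sketch of that kind of planning (the actual heuristics are internal to plexus; the function below is illustrative):

```python
import re

# Illustrative only: derive retrieval phrases and important tokens from rubric
# text, with a phrase cap analogous to the planner's max_phrases parameter.
def plan(rubric_text: str, topic_hint: str = "", max_phrases: int = 80) -> dict:
    # Sentences become candidate retrieval phrases.
    sentences = [s.strip() for s in re.split(r"[.\n]+", rubric_text) if s.strip()]
    phrases = sentences[:max_phrases]
    # Longer words become deduplicated token hints.
    tokens = sorted({w.lower() for w in re.findall(r"[A-Za-z]{4,}", rubric_text)})
    expanded = " ".join(filter(None, [topic_hint, rubric_text]))
    return {
        "expanded_query_text": expanded,
        "retrieval_phrases": phrases,
        "important_tokens": tokens,
    }

p = plan("Greet the caller. Verify identity.", topic_hint="greeting")
print(p["retrieval_phrases"])  # -> ['Greet the caller', 'Verify identity']
```

The output mirrors RubricMemoryQueryPlan's three fields, so a retriever can use the expanded query for ranking and the phrases/tokens as cheap lexical filters.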
- class plexus.rubric_memory.RubricMemoryRecentBriefingProvider(*, api_client: Any, citation_formatter: RubricMemoryCitationFormatter | None = None, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None, s3_client: Any | None = None, reference_date: date | None = None)
Bases: object
Build recency-biased rubric-memory citation context for one score.
- DEFAULT_DAYS = 30
- DEFAULT_QUERY = 'recent SME stakeholder policy update rubric guideline change clarification score scorecard scoring decision'
- __init__(*, api_client: Any, citation_formatter: RubricMemoryCitationFormatter | None = None, prepared_corpus_manager: RubricMemoryPreparedCorpusManager | None = None, s3_client: Any | None = None, reference_date: date | None = None)
- async retrieve_recent(*, scorecard_identifier: str, score_identifier: str, score_id: str, score_version_id: str | None = None, query: str = '', days: int = 30, since: date | str | None = None, limit: int = 16) RubricMemoryCitationContext
- class plexus.rubric_memory.RubricMemorySMEQuestion(*, id: Annotated[str, MinLen(min_length=1)], text: Annotated[str, MinLen(min_length=1)])
Bases: BaseModel
One candidate SME agenda question with a stable id.
- id: str
- model_config = {'extra': 'forbid'}
- text: str
- class plexus.rubric_memory.RubricMemorySMEQuestionGateRequest(*, scorecard_identifier: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], score_identifier: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], score_version_id: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], rubric_memory_context: ~plexus.rubric_memory.citations.RubricMemoryCitationContext, candidate_agenda_items: list[~plexus.rubric_memory.sme_question_gate.RubricMemorySMEQuestion] = <factory>, optimizer_context: str = '')
Bases: BaseModel
Inputs for gating candidate SME agenda items against rubric-memory context.
- candidate_agenda_items: list[RubricMemorySMEQuestion]
- model_config = {'extra': 'forbid'}
- optimizer_context: str
- rubric_memory_context: RubricMemoryCitationContext
- score_identifier: str
- score_version_id: str
- scorecard_identifier: str
- class plexus.rubric_memory.RubricMemorySMEQuestionGateResult(*, score_version_id: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], final_agenda_markdown: ~typing.Annotated[str, ~annotated_types.MinLen(min_length=1)], final_items: list[~plexus.rubric_memory.sme_question_gate.RubricMemoryGatedSMEQuestion] = <factory>, suppressed_items: list[~plexus.rubric_memory.sme_question_gate.RubricMemoryGatedSMEQuestion] = <factory>, transformed_items: list[~plexus.rubric_memory.sme_question_gate.RubricMemoryGatedSMEQuestion] = <factory>, kept_items: list[~plexus.rubric_memory.sme_question_gate.RubricMemoryGatedSMEQuestion] = <factory>, citation_diagnostics: list[dict[str, ~typing.Any]] = <factory>, summary_counts: dict[str, int] = <factory>)
Bases: BaseModel
Gated SME agenda output: kept, transformed, and suppressed items plus citation diagnostics.
- citation_diagnostics: list[dict[str, Any]]
- final_agenda_markdown: str
- final_items: list[RubricMemoryGatedSMEQuestion]
- kept_items: list[RubricMemoryGatedSMEQuestion]
- model_config = {'extra': 'forbid'}
- score_version_id: str
- summary_counts: dict[str, int]
- suppressed_items: list[RubricMemoryGatedSMEQuestion]
- transformed_items: list[RubricMemoryGatedSMEQuestion]
- class plexus.rubric_memory.RubricMemorySMEQuestionGateService(*, synthesizer: RubricMemorySMEQuestionGateSynthesizer | None = None)
Bases: object
Typed, citation-validating wrapper around SME question gate synthesis.
- __init__(*, synthesizer: RubricMemorySMEQuestionGateSynthesizer | None = None)
- async gate(request: RubricMemorySMEQuestionGateRequest) RubricMemorySMEQuestionGateResult
- class plexus.rubric_memory.S3RubricMemoryCorpusPaths(bucket_name: str, scorecard_prefix: str, scorecard_knowledge_base_prefix: str, prefix_knowledge_base_prefixes: list[str], score_knowledge_base_prefix: str, sources: list[S3RubricMemorySource])
Bases: object
Convention-derived S3 rubric-memory prefixes for one score.
- __init__(bucket_name: str, scorecard_prefix: str, scorecard_knowledge_base_prefix: str, prefix_knowledge_base_prefixes: list[str], score_knowledge_base_prefix: str, sources: list[S3RubricMemorySource]) None
- bucket_name: str
- prefix_knowledge_base_prefixes: list[str]
- score_knowledge_base_prefix: str
- scorecard_knowledge_base_prefix: str
- scorecard_prefix: str
- sources: list[S3RubricMemorySource]
- class plexus.rubric_memory.S3RubricMemoryCorpusResolver(*, bucket_name: str | None = None, s3_client: Any | None = None)
Bases: object
Resolve rubric-memory S3 prefixes using the name-based hierarchy.
- __init__(*, bucket_name: str | None = None, s3_client: Any | None = None)
- resolve(*, scorecard_name: str, score_name: str) S3RubricMemoryCorpusPaths
- class plexus.rubric_memory.S3RubricMemoryObject(key: str, size: int, etag: str, last_modified: datetime | None)
Bases: object
One raw S3 object in a rubric-memory corpus source.
- __init__(key: str, size: int, etag: str, last_modified: datetime | None) None
- etag: str
- key: str
- last_modified: datetime | None
- size: int
- class plexus.rubric_memory.S3RubricMemorySource(bucket_name: str, prefix: str, scope_level: str, objects: tuple[S3RubricMemoryObject, ...])
Bases: object
An S3 rubric-memory prefix with its score hierarchy scope.
- __init__(bucket_name: str, prefix: str, scope_level: str, objects: tuple[S3RubricMemoryObject, ...]) None
- bucket_name: str
- objects: tuple[S3RubricMemoryObject, ...]
- prefix: str
- scope_level: str
- class plexus.rubric_memory.SMEQuestionAnswerStatus(value)
Bases: str, Enum
- ANSWERED_BY_CORPUS = 'answered_by_corpus'
- ANSWERED_BY_RUBRIC = 'answered_by_rubric'
- CONFLICTING_EVIDENCE = 'conflicting_evidence'
- PARTIALLY_ANSWERED = 'partially_answered'
- TRUE_OPEN_QUESTION = 'true_open_question'
- class plexus.rubric_memory.SMEQuestionGateAction(value)
Bases: str, Enum
- KEEP = 'keep'
- SUPPRESS = 'suppress'
- TRANSFORM = 'transform'
- class plexus.rubric_memory.TactusRubricEvidenceSynthesizer(*, provider: str = 'openai', model: str = 'gpt-5-mini', procedure_id: str = 'rubric_evidence_pack_synthesis', max_tokens: int = 16000)
Bases: object
Run the repo-owned Tactus synthesis procedure for evidence packs.
- __init__(*, provider: str = 'openai', model: str = 'gpt-5-mini', procedure_id: str = 'rubric_evidence_pack_synthesis', max_tokens: int = 16000)
- async synthesize(*, request: RubricEvidencePackRequest, evidence: Sequence[EvidenceSnippet], history: Sequence[RubricHistoryEvent], confidence_inputs: ConfidenceInputs) RubricEvidencePack
- class plexus.rubric_memory.TactusRubricMemorySMEQuestionGateSynthesizer(*, provider: str = 'openai', model: str = 'gpt-5-mini', procedure_id: str = 'rubric_memory_sme_question_gate', max_tokens: int = 16000)
Bases: object
Run the repo-owned Tactus procedure for SME question gating.
- __init__(*, provider: str = 'openai', model: str = 'gpt-5-mini', procedure_id: str = 'rubric_memory_sme_question_gate', max_tokens: int = 16000)
- async synthesize(*, request: RubricMemorySMEQuestionGateRequest) dict[str, Any]
- plexus.rubric_memory.candidate_agenda_items_from_markdown(markdown: str) list[RubricMemorySMEQuestion]
Split optimizer SME agenda Markdown into deterministic candidate items.
- plexus.rubric_memory.format_gated_sme_agenda(items: Sequence[RubricMemoryGatedSMEQuestion]) str
- plexus.rubric_memory.validate_rubric_memory_citations(supplied_ids: Iterable[str] | None, context: RubricMemoryCitationContext | dict[str, Any] | None, *, require_citation: bool = False) RubricMemoryCitationValidation
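candidate_agenda_items_from_markdown() splits an SME agenda deterministically. One plausible way to perform that split with the stdlib (not necessarily how plexus implements it) is to treat each top-level bullet or numbered line as a question and assign stable sequential ids:

```python
import re

# Hypothetical sketch: turn agenda Markdown bullets into (id, text) items,
# analogous in spirit to candidate_agenda_items_from_markdown().
def split_agenda(markdown: str) -> list[dict[str, str]]:
    items = []
    for line in markdown.splitlines():
        match = re.match(r"^\s*(?:[-*]|\d+\.)\s+(.*\S)", line)
        if match:
            items.append({"id": f"q{len(items) + 1}", "text": match.group(1)})
    return items

agenda = """
- Should silent calls count as greetings?
- Does a voicemail require identity verification?
"""
print([item["id"] for item in split_agenda(agenda)])  # -> ['q1', 'q2']
```

Deterministic ids matter downstream: the gate result refers back to each question by id, so re-running the split on the same Markdown must yield the same ids.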
Submodules
- plexus.rubric_memory.authority module
- plexus.rubric_memory.citations module
RubricMemoryCitation, RubricMemoryCitationContext, RubricMemoryCitationFormatter, RubricMemoryCitationValidation, validate_rubric_memory_citations()
- plexus.rubric_memory.context_formatter module
- plexus.rubric_memory.local_corpus module
- plexus.rubric_memory.models module
ConfidenceInputs, ConfidenceLevel, EvidenceClassification, EvidenceSnippet, RubricAuthority, RubricEvidencePack, RubricEvidencePackRequest, RubricHistoryEvent
- plexus.rubric_memory.preparation module
PreparedRubricMemoryCorpus, RubricMemoryPreparedCorpusManager
- plexus.rubric_memory.provider module
RubricMemoryContextProvider
- plexus.rubric_memory.query_planner module
- plexus.rubric_memory.recent module
- plexus.rubric_memory.retrieval module
- plexus.rubric_memory.s3_corpus module
S3RubricMemoryCorpusPaths, S3RubricMemoryCorpusResolver, S3RubricMemoryObject, S3RubricMemorySource
- plexus.rubric_memory.service module
- plexus.rubric_memory.sme_question_gate module
RubricMemoryGatedSMEQuestion, RubricMemorySMEQuestion, RubricMemorySMEQuestionGateRequest, RubricMemorySMEQuestionGateResult, RubricMemorySMEQuestionGateService, RubricMemorySMEQuestionGateSynthesizer, SMEQuestionAnswerStatus, SMEQuestionGateAction, TactusRubricMemorySMEQuestionGateSynthesizer, candidate_agenda_items_from_markdown(), format_gated_sme_agenda()
- plexus.rubric_memory.synthesis module