plexus.dashboard.api.models.evaluation module
Evaluation Model - Python representation of the GraphQL Evaluation type.
This model represents individual Evaluations in the system, tracking:
- Accuracy and performance metrics
- Processing status and progress
- Error states and details
- Relationships to accounts, scorecards, and scores
All mutations (create/update) are performed in background threads for non-blocking operation.
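A minimal usage sketch; `client` is assumed to be an already-constructed `_BaseAPIClient`, and all IDs and field values below are hypothetical:

```python
from plexus.dashboard.api.models.evaluation import Evaluation

# `client` is assumed to be an existing _BaseAPIClient instance.
evaluation = Evaluation.create(
    client=client,
    type="accuracy",              # hypothetical evaluation type
    accountId="account-123",      # hypothetical account ID
    scorecardId="scorecard-456",  # hypothetical scorecard ID
)

# update() dispatches its mutation in a background thread and returns immediately.
evaluation.update(
    status="RUNNING",
    totalItems=100,
    processedItems=25,
)
```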
- class plexus.dashboard.api.models.evaluation.Evaluation(id: str, type: str, accountId: str, status: str, createdAt: datetime.datetime, updatedAt: datetime.datetime, client: plexus.dashboard.api.client._BaseAPIClient | None = None, parameters: Dict | None = None, metrics: Dict | None = None, inferences: int | None = None, accuracy: float | None = None, cost: float | None = None, startedAt: datetime.datetime | None = None, elapsedSeconds: int | None = None, estimatedRemainingSeconds: int | None = None, totalItems: int | None = None, processedItems: int | None = None, errorMessage: str | None = None, errorDetails: Dict | None = None, scorecardId: str | None = None, scoreId: str | None = None, confusionMatrix: Dict | None = None, scoreGoal: str | None = None, datasetClassDistribution: Dict | None = None, isDatasetClassDistributionBalanced: bool | None = None, predictedClassDistribution: Dict | None = None, isPredictedClassDistributionBalanced: bool | None = None, taskId: str | None = None)
Bases: BaseModel
- __init__(id: str, type: str, accountId: str, status: str, createdAt: datetime, updatedAt: datetime, client: _BaseAPIClient | None = None, parameters: Dict | None = None, metrics: Dict | None = None, inferences: int | None = None, accuracy: float | None = None, cost: float | None = None, startedAt: datetime | None = None, elapsedSeconds: int | None = None, estimatedRemainingSeconds: int | None = None, totalItems: int | None = None, processedItems: int | None = None, errorMessage: str | None = None, errorDetails: Dict | None = None, scorecardId: str | None = None, scoreId: str | None = None, confusionMatrix: Dict | None = None, scoreGoal: str | None = None, datasetClassDistribution: Dict | None = None, isDatasetClassDistributionBalanced: bool | None = None, predictedClassDistribution: Dict | None = None, isPredictedClassDistributionBalanced: bool | None = None, taskId: str | None = None)
- accountId: str
- accuracy: float | None = None
- confusionMatrix: Dict | None = None
- cost: float | None = None
- classmethod create(client: _BaseAPIClient, type: str, accountId: str, *, status: str = 'PENDING', scorecardId: str | None = None, scoreId: str | None = None, taskId: str | None = None, **kwargs) → Evaluation
Create a new Evaluation.
- createdAt: datetime
- datasetClassDistribution: Dict | None = None
- elapsedSeconds: int | None = None
- errorDetails: Dict | None = None
- errorMessage: str | None = None
- estimatedRemainingSeconds: int | None = None
- classmethod fields() → str
Fields to request in queries and mutations
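As an illustration, the returned field selection can be interpolated into a query string. The operation name `getEvaluation` below is an assumed example, not necessarily the name the client uses; `Evaluation` is imported as in the sketch above:

```python
query = f"""
query GetEvaluation($id: ID!) {{
    getEvaluation(id: $id) {{
        {Evaluation.fields()}
    }}
}}
"""
```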
- classmethod from_dict(data: Dict[str, Any], client: _BaseAPIClient) → Evaluation
Create an instance from a dictionary of data
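A sketch of hydrating an instance from query-response data; the record below includes only a few of the documented fields, and the ISO-8601 timestamp strings are an assumption about the API's wire format:

```python
record = {
    "id": "eval-789",                     # hypothetical ID
    "type": "accuracy",
    "accountId": "account-123",
    "status": "COMPLETED",
    "createdAt": "2024-01-01T00:00:00Z",  # assumed ISO-8601 timestamps
    "updatedAt": "2024-01-01T00:05:00Z",
    "accuracy": 0.92,
}
evaluation = Evaluation.from_dict(record, client)
```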
- classmethod get_by_id(id: str, client: _BaseAPIClient, include_score_results: bool = False) → Evaluation
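A fetch-by-ID sketch; the ID is hypothetical, and `include_score_results` presumably controls whether associated score results are fetched alongside the Evaluation:

```python
evaluation = Evaluation.get_by_id("eval-789", client)

# Also request associated score results.
evaluation = Evaluation.get_by_id("eval-789", client, include_score_results=True)
```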
- inferences: int | None = None
- isDatasetClassDistributionBalanced: bool | None = None
- isPredictedClassDistributionBalanced: bool | None = None
- metrics: Dict | None = None
- parameters: Dict | None = None
- predictedClassDistribution: Dict | None = None
- processedItems: int | None = None
- scoreGoal: str | None = None
- scoreId: str | None = None
- scorecardId: str | None = None
- startedAt: datetime | None = None
- status: str
- taskId: str | None = None
- totalItems: int | None = None
- type: str
- update(**kwargs) → None
Update Evaluation fields in a background thread.
This is a non-blocking operation: the call returns immediately while the mutation completes in a background thread, as sketched below.
- Args:
**kwargs: Fields to update
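For example, recording an error state (the field values are illustrative, and `evaluation` is the instance from the sketch above):

```python
# Returns immediately; the mutation itself runs in a background thread.
evaluation.update(
    status="FAILED",
    errorMessage="Scoring request timed out",
    errorDetails={"attempts": 3, "lastStatusCode": 504},
)
```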
- updatedAt: datetime