# Event System
AI AutoEvals provides both an event system and a hook system for extending its functionality. Understanding when to use each is important for effective development.
## Overview

### Events vs Hooks

AI AutoEvals provides two extension mechanisms with different purposes:
Events - React to evaluation lifecycle
- Dispatched during evaluation processing
- Access to evaluation data and results
- Use for: notifications, logging, post-processing, analytics
Hooks - Filter evaluation sets before they’re used
- Called during evaluation matching (pre-response)
- Access to input, tags, and evaluation sets
- Use for: routing logic, conditional evaluation, custom filtering
## Event System

The event system dispatches events at key points in the evaluation lifecycle:
- PreEvaluationEvent - Before evaluation is sent to LLM
- PostEvaluationEvent - After evaluation completes successfully
- EvaluationFailedEvent - When evaluation fails
- AgentFinishedExecutionEvent - When AI Agent completes execution (conditional)
## Hook System

Hook Name: `hook_ai_autoevals_evaluation_sets_alter()`

This hook allows you to filter evaluation sets before they are triggered. It is called after operation type and tag matching, but before keyword matching.
See Extending > Hooks for complete documentation.
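The alter hook follows the usual Drupal `hook_*_alter()` pattern. A minimal sketch, assuming the hook receives the matched evaluation sets by reference plus a context array (the parameter names here are assumptions, not the authoritative signature — see Extending > Hooks for that):

```php
/**
 * Implements hook_ai_autoevals_evaluation_sets_alter().
 *
 * Sketch only: $evaluation_sets and $context are assumed parameter names.
 */
function my_module_ai_autoevals_evaluation_sets_alter(array &$evaluation_sets, array $context): void {
  // Hypothetical example: skip all evaluation for anonymous traffic.
  if (\Drupal::currentUser()->isAnonymous()) {
    $evaluation_sets = [];
  }
}
```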
## Events

### PreEvaluationEvent

Event Name: `ai_autoevals.pre_evaluation`

When Dispatched: Before the evaluation is sent to the LLM for processing.

Event Class: `Drupal\ai_autoevals\Event\PreEvaluationEvent`
#### Pre-Evaluation Use Cases

- Modify facts before evaluation
- Skip evaluation based on custom conditions
- Add additional metadata
- Implement custom filtering logic
- Override evaluation configuration
#### Pre-Evaluation Available Methods

```php
public function getEvaluationResult(): EvaluationResultInterface;
public function getEvaluationSet(): EvaluationSetInterface;
public function getFacts(): array;
public function setFacts(array $facts): void;
public function skipEvaluation(): void;
public function isSkipped(): bool;
public function getMetadata(): array;
public function setMetadata(array $metadata): void;
```

#### Pre-Evaluation Example
```php
<?php

namespace Drupal\my_module\EventSubscriber;

use Drupal\ai_autoevals\Event\PreEvaluationEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class MyModuleSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    return [
      PreEvaluationEvent::EVENT_NAME => ['onPreEvaluation', 0],
    ];
  }

  public function onPreEvaluation(PreEvaluationEvent $event): void {
    $evaluation = $event->getEvaluationResult();
    $evaluationSet = $event->getEvaluationSet();

    // Skip evaluation for specific providers.
    if ($evaluation->getProviderId() === 'test_provider') {
      $event->skipEvaluation();
      return;
    }

    // Modify facts.
    $facts = $event->getFacts();
    $facts[] = 'Additional fact to verify';
    $event->setFacts($facts);

    // Add metadata.
    $metadata = $event->getMetadata();
    $metadata['processed_by'] = 'my_module';
    $event->setMetadata($metadata);
  }

}
```

### PostEvaluationEvent
Event Name: `ai_autoevals.post_evaluation`

When Dispatched: After the evaluation completes successfully.

Event Class: `Drupal\ai_autoevals\Event\PostEvaluationEvent`
#### Post-Evaluation Use Cases

- Content moderation based on scores
- Notifications for low-scoring content
- Analytics and tracking
- Triggering workflows
- Integration with ai_observability
- Send alerts to monitoring systems
#### Post-Evaluation Available Methods

```php
public function getEvaluationResult(): EvaluationResultInterface;
public function getEvaluationSet(): EvaluationSetInterface;
public function getScore(): ?float;
public function getChoice(): ?string;
public function getAnalysis(): ?string;
```

#### Post-Evaluation Example
```php
<?php

namespace Drupal\my_module\EventSubscriber;

use Drupal\ai_autoevals\Event\PostEvaluationEvent;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class ContentModerationSubscriber implements EventSubscriberInterface {

  public function __construct(
    protected LoggerChannelFactoryInterface $loggerFactory,
  ) {}

  public static function getSubscribedEvents(): array {
    return [
      PostEvaluationEvent::EVENT_NAME => ['onPostEvaluation', 0],
    ];
  }

  public function onPostEvaluation(PostEvaluationEvent $event): void {
    $evaluation = $event->getEvaluationResult();
    $score = $event->getScore();
    $choice = $event->getChoice();

    // Flag low-scoring content.
    if ($score !== NULL && $score < 0.5) {
      $logger = $this->loggerFactory->get('my_module');
      $logger->warning(
        'Low evaluation score: @score for request @id',
        ['@score' => $score, '@id' => $evaluation->getRequestId()]
      );

      // Add metadata for moderation.
      $evaluation->setMetadata([
        'flagged_for_review' => TRUE,
        'review_reason' => 'Low factuality score',
      ]);
      $evaluation->save();

      // Send a notification, flag the content node for review,
      // or log to a moderation queue here.
    }

    // Track metrics.
    if ($choice === 'D') {
      // Track disagreements for analysis.
    }
  }

}
```

### EvaluationFailedEvent
Event Name: `ai_autoevals.evaluation_failed`

When Dispatched: When an evaluation fails due to an error.

Event Class: `Drupal\ai_autoevals\Event\EvaluationFailedEvent`
#### Evaluation Failed Use Cases

- Custom retry logic
- Alerting on failures
- Fallback evaluation strategies
- Error tracking
- Logging to external systems
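The retry and fallback use cases above typically need a backoff delay. A minimal sketch of an exponential backoff helper in plain PHP (the function name and the idea of tracking an attempt count are illustrative, not part of the module's API):

```php
/**
 * Computes an exponential backoff delay in seconds.
 *
 * Attempt 0 waits $base seconds, attempt 1 waits 2 * $base, and so on,
 * capped at $max to avoid unbounded delays.
 */
function my_module_calculate_backoff(int $attempt, int $base = 30, int $max = 3600): int {
  return (int) min($max, $base * (2 ** $attempt));
}
```

You would store the attempt count on the evaluation (for example in its metadata) and pass it in when scheduling the retry.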
#### Evaluation Failed Available Methods

```php
public function getEvaluationResult(): EvaluationResultInterface;
public function getEvaluationSet(): ?EvaluationSetInterface;
public function getErrorMessage(): string;
public function getException(): ?\Throwable;
```

#### Example
```php
<?php

namespace Drupal\my_module\EventSubscriber;

use Drupal\ai_autoevals\Event\EvaluationFailedEvent;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class ErrorHandlingSubscriber implements EventSubscriberInterface {

  public function __construct(
    protected LoggerChannelFactoryInterface $loggerFactory,
  ) {}

  public static function getSubscribedEvents(): array {
    return [
      EvaluationFailedEvent::EVENT_NAME => ['onEvaluationFailed', 0],
    ];
  }

  public function onEvaluationFailed(EvaluationFailedEvent $event): void {
    $evaluation = $event->getEvaluationResult();
    $error = $event->getErrorMessage();
    $exception = $event->getException();

    $logger = $this->loggerFactory->get('my_module');
    $logger->error(
      'Evaluation @id failed: @error',
      ['@id' => $evaluation->id(), '@error' => $error]
    );

    // Log to an external monitoring system.
    if ($exception) {
      // Send to Sentry, New Relic, etc.
    }

    // Implement custom retry logic.
    if (strpos($error, 'rate limit') !== FALSE) {
      // Schedule a retry with exponential backoff.
    }

    // Implement a fallback evaluation.
    $this->performFallbackEvaluation($evaluation);
  }

  protected function performFallbackEvaluation($evaluation): void {
    // Simple rule-based fallback evaluation.
    // ...
  }

}
```

### AgentFinishedExecutionEvent
Event Name: `ai_agents.finished_execution`

When Dispatched: When an AI Agent completes its execution (only if the AI Agents module is enabled).

Event Class: `\Drupal\ai_agents\Event\AgentFinishedExecutionEvent` (external event)
#### Availability

This event is only available when the AI Agents module is enabled. AI AutoEvals subscribes to it conditionally.
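If your own module also reacts to this event, guard against the event class being absent when AI Agents is not installed. A defensive sketch (this mirrors a common Drupal pattern; the handler name is illustrative):

```php
public static function getSubscribedEvents(): array {
  // Subscribe only when the AI Agents module (and its event class) exists.
  if (!class_exists('Drupal\ai_agents\Event\AgentFinishedExecutionEvent')) {
    return [];
  }
  return [
    'ai_agents.finished_execution' => ['onAgentFinished', 0],
  ];
}
```

Using the string event name here (rather than the class constant) keeps the subscriber loadable even when the AI Agents module is missing.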
#### Agent Finished Execution Use Cases

- Evaluate AI Agent responses with custom criteria
- Track agent performance metrics
- Monitor agent decision-making
- Implement agent-specific evaluation strategies
- Debug agent execution flow
#### How It Works

The AI AutoEvals subscriber handles this event by:
- Root Agent Check: Only evaluates root agents (agents with no caller ID)
- Input Extraction: Extracts input from agent instructions or chat history
- Output Extraction: Extracts the final response from the agent
- Automatic Tagging: Tags requests with `ai_agents`, `ai_agents_{agent_id}`, `ai_agents_finished`
- Matching: Finds a matching evaluation set based on tags and content
- Evaluation: Creates and queues an evaluation for the agent response
#### Example Handling Logic

```php
// Only evaluate root agents (no caller = top-level user request).
if ($event->getCallerId() !== NULL) {
  return;
}

// Extract input from instructions or chat history.
$inputText = $event->getInstructions();
if (empty($inputText)) {
  $chatHistory = $event->getChatHistory();
  foreach ($chatHistory as $message) {
    if ($message->getRole() === 'user') {
      $inputText = $message->getText();
      break;
    }
  }
}

// Extract the final response.
$response = $event->getResponse();
$outputText = '';
if (method_exists($response, 'getNormalized')) {
  $normalized = $response->getNormalized();
  if (method_exists($normalized, 'getText')) {
    $outputText = $normalized->getText();
  }
}
```

#### Event Data
Available data from the event:

```php
$agentId = $event->getAgentId();
$callerId = $event->getCallerId(); // NULL for root agents.
$agentRunnerId = $event->getAgentRunnerId();
$instructions = $event->getInstructions();
$chatHistory = $event->getChatHistory();
$response = $event->getResponse();
```

#### Automatic Tagging
The module automatically adds these tags to AI Agents evaluations:

- `ai_agents` - Indicates the request came from AI Agents
- `ai_agents_{agent_id}` - Identifies the specific agent type
- `ai_agents_finished` - Indicates the final agent execution
#### Example Evaluation Set for AI Agents

```php
$evaluationSet = EvaluationSet::create([
  'label' => 'AI Agent Content Evaluation',
  'id' => 'ai_agent_content',
  'enabled' => TRUE,
  'operation_types' => ['chat'],
  // Only evaluate AI Agents requests.
  'tags' => ['ai_agents'],
  'fact_extraction_method' => 'ai_generated',
  'context_depth' => 5,
  'choice_scores' => [
    'A' => 1.0,
    'B' => 0.8,
    'C' => 0.6,
    'D' => 0.0,
  ],
]);
$evaluationSet->save();
```

#### Debugging
The module logs messages prefixed with "AI AutoEvals:" to help debug the agent evaluation flow:

```
AI AutoEvals: AgentFinishedExecution received | agent: @agent | callerId: @caller
AI AutoEvals: Skipping - not a root agent (has callerId)
AI AutoEvals: Extracted | input: @input | output: @output
AI AutoEvals: CREATED evaluation @id | agent: @agent | input: @input
```

#### Important Notes
- Global Exclusion: The `ai_agents` tag is globally excluded by default to prevent duplicate evaluation through the standard events
- Root Agents Only: Sub-agent calls are not evaluated, to avoid noise
- Event Condition: This event is only subscribed to when the AI Agents module is present
- Separate Flow: AI Agents use a dedicated evaluation flow, separate from standard chat requests
#### Example: Agent-Specific Evaluation Set

Create evaluation sets targeting specific agent types:

```php
// Content Writer Agent evaluation.
$contentWriterSet = EvaluationSet::create([
  'label' => 'Content Writer Agent Evaluation',
  'tags' => ['ai_agents_content_writer'],
  'custom_knowledge' => 'Content Writer produces blog posts, articles, and marketing copy.',
]);

// Support Agent evaluation.
$supportAgentSet = EvaluationSet::create([
  'label' => 'Support Agent Evaluation',
  'tags' => ['ai_agents_support'],
  'custom_knowledge' => 'Support Agent handles customer service inquiries and troubleshooting.',
]);
```

## Registering Event Subscribers
Create an event subscriber class and register it in your module's `services.yml` file.

### Step 1: Create Event Subscriber Class
```php
<?php

namespace Drupal\my_module\EventSubscriber;

use Drupal\ai_autoevals\Event\PostEvaluationEvent;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

class MyModuleSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    return [
      PostEvaluationEvent::EVENT_NAME => ['onPostEvaluation', 0],
    ];
  }

  public function onPostEvaluation(PostEvaluationEvent $event): void {
    // Your event handling logic.
  }

}
```

### Step 2: Register in services.yml
Create or edit `my_module.services.yml`:

```yaml
services:
  my_module.autoevals_subscriber:
    class: Drupal\my_module\EventSubscriber\MyModuleSubscriber
    tags:
      - { name: 'event_subscriber' }
```

If your subscriber requires services, add them as arguments:

```yaml
services:
  my_module.autoevals_subscriber:
    class: Drupal\my_module\EventSubscriber\MyModuleSubscriber
    arguments:
      - '@logger.factory'
      - '@database'
    tags:
      - { name: 'event_subscriber' }
```

## Event Priority
Event subscribers can specify a priority. Higher numbers execute first.

```php
public static function getSubscribedEvents(): array {
  return [
    // Higher priority: runs before default-priority subscribers.
    PostEvaluationEvent::EVENT_NAME => ['onPostEvaluation', 100],
  ];
}
```

Priority Levels:
- 100-200: High priority (executes first)
- 0-99: Normal priority (default)
- -99 to -1: Low priority (executes last)
## Best Practices

### 1. Keep Handlers Fast

Event handlers should complete quickly to avoid slowing down queue processing.
```php
// Good: lightweight work only.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  $this->logger->info('Evaluation completed');
}

// Bad: blocks queue processing.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  sleep(10);
}
```

For long-running tasks, queue them:

```php
public function onPostEvaluation(PostEvaluationEvent $event): void {
  // Queue for background processing.
  \Drupal::queue('my_long_running_task')->createItem([
    'evaluation_id' => $event->getEvaluationResult()->id(),
  ]);
}
```

### 2. Handle Exceptions
Always catch exceptions in your handlers to prevent disrupting the evaluation flow.

```php
public function onPostEvaluation(PostEvaluationEvent $event): void {
  try {
    $this->sendNotification($event);
  }
  catch (\Exception $e) {
    \Drupal::logger('my_module')->error('Notification failed: @message', [
      '@message' => $e->getMessage(),
    ]);
    // Don't rethrow - let the evaluation continue.
  }
}
```

### 3. Use Appropriate Priorities
Set priorities carefully if multiple modules subscribe to the same event. Note that a PHP array cannot repeat a key, so multiple listeners for one event must be nested under a single entry:

```php
public static function getSubscribedEvents(): array {
  return [
    PreEvaluationEvent::EVENT_NAME => [
      // Execute early to modify facts before evaluation.
      ['modifyFacts', 100],
      // Execute late, after all modifications.
      ['logChanges', -10],
    ],
  ];
}
```

### 4. Avoid Circular Dependencies
Don't trigger new AI requests in response to evaluation events, as this can create infinite loops.

The module uses the `ai_autoevals:internal` tag to prevent infinite loops when making its own internal AI requests for fact extraction and evaluation. The event subscriber checks this tag first and skips internal requests so they are never evaluated.
```php
// Dangerous: can cause an infinite loop.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  if ($event->getScore() < 0.5) {
    // Don't make new AI requests that will themselves trigger evaluations.
    $this->aiProvider->chat(...);
  }
}

// Better: if you must make AI requests, add the internal tag.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  if ($event->getScore() < 0.5) {
    // This request is skipped by the AutoEvals event subscriber.
    $this->aiProvider->chat($input, $modelId, ['ai_autoevals:internal']);
  }
}
```

Note: The `ai_autoevals:internal` tag should only be used for internal AI requests related to the evaluation process. See the Architecture documentation for more details.
### 5. Log Appropriately

Use logging judiciously to aid debugging without flooding the logs.
```php
// Good: log significant events.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  if ($event->getScore() < 0.3) {
    $this->logger->warning('Very low score detected', [
      'score' => $event->getScore(),
      'evaluation_id' => $event->getEvaluationResult()->id(),
    ]);
  }
}

// Bad: log everything.
public function onPostEvaluation(PostEvaluationEvent $event): void {
  $this->logger->info('Evaluation completed'); // Too much noise.
}
```

## Common Patterns
### Content Moderation

```php
public function onPostEvaluation(PostEvaluationEvent $event): void {
  if ($event->getScore() < 0.5) {
    $evaluation = $event->getEvaluationResult();

    // Flag for human review.
    $evaluation->setMetadata([
      'moderation_status' => 'pending_review',
      'moderation_reason' => 'Low factuality score',
    ]);
    $evaluation->save();

    // Send a notification to the content team.
    $this->notificationService->sendLowScoreAlert($evaluation);
  }
}
```

### Analytics Tracking
```php
public function onPostEvaluation(PostEvaluationEvent $event): void {
  $evaluation = $event->getEvaluationResult();

  // Track metrics.
  $this->analyticsService->track('ai_evaluation_completed', [
    'score' => $event->getScore(),
    'model' => $evaluation->getModelId(),
    'provider' => $evaluation->getProviderId(),
    'tags' => $evaluation->getTags(),
  ]);
}
```

### Custom Retry Logic
```php
public function onEvaluationFailed(EvaluationFailedEvent $event): void {
  $evaluation = $event->getEvaluationResult();
  $error = $event->getErrorMessage();

  // Retry on rate limit errors.
  if (strpos($error, 'rate limit') !== FALSE) {
    $delay = $this->calculateBackoff($evaluation);
    $this->scheduler->schedule(
      'ai_autoevals_evaluation_worker',
      $evaluation->id(),
      time() + $delay
    );
  }
}
```

## Testing Event Subscribers
```php
<?php

namespace Drupal\Tests\my_module\Kernel;

use Drupal\ai_autoevals\Event\PostEvaluationEvent;
use Drupal\KernelTests\KernelTestBase;

class MySubscriberTest extends KernelTestBase {

  public function testSubscriberModifiesMetadata(): void {
    // Create an evaluation result.
    $evaluation = EvaluationResult::create([
      // ... set properties.
    ]);

    // Dispatch the event under its registered name so subscribers fire.
    $event = new PostEvaluationEvent($evaluation, $evaluationSet);
    $this->container->get('event_dispatcher')
      ->dispatch($event, PostEvaluationEvent::EVENT_NAME);

    // Assert the subscriber's modifications.
    $this->assertTrue($evaluation->getMetadata()['flagged_for_review']);
  }

}
```

## Next Steps
- API Reference - Complete service documentation
- Examples - Real-world implementations
- Extending the Module - Extension guide