# FlowDrop Chat — Security Report
Date: 2026-04-04
Module: flowdrop_chat
Review type: Pre-release security review
## Overview
This document summarises the security architecture and controls implemented in the flowdrop_chat module. The module provides LLM-powered chat endpoints for AI-assisted workflow building within the FlowDrop editor.
## Authentication and Authorization
Access to all chat endpoints is controlled through Drupal's permission system:
- All chat API routes require the `use flowdrop_chat` permission.
- The admin settings page requires `administer flowdrop_chat`, which is marked `restrict access: true`, so it does not appear in standard permission listings for untrusted roles.
- Workflow-level access is enforced individually for every operation. A user cannot send messages to, read history from, or clear the history of any workflow they are not already authorized to access through Drupal's entity access system.
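In a Drupal module these two permissions would typically be declared in a `*.permissions.yml` file. A minimal sketch, assuming the permission names from this report (titles and descriptions are illustrative, not taken from the module):

```yaml
# flowdrop_chat.permissions.yml — illustrative sketch
use flowdrop_chat:
  title: 'Use FlowDrop chat'
  description: 'Send messages to and read history from the AI chat assistant.'

administer flowdrop_chat:
  title: 'Administer FlowDrop chat'
  description: 'Configure providers, models, and the system prompt template.'
  # Hides this permission from standard listings for untrusted roles
  # and flags it as security-sensitive in the permissions UI.
  restrict access: true
```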
## Request Integrity (CSRF Protection)
All state-changing API endpoints (send message, clear history) require a valid Drupal CSRF token in the X-CSRF-Token request header. Read-only endpoints (get history) follow standard REST conventions and do not require a token for GET requests.
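Drupal core provides a route requirement for exactly this header check. A hedged sketch of what the send-message route definition might look like (route name, path, and controller class are assumptions; only `_permission` and `_csrf_request_header_check` come from the report's description):

```yaml
# flowdrop_chat.routing.yml — illustrative sketch
flowdrop_chat.send_message:
  path: '/api/flowdrop-chat/{workflow}/messages'
  defaults:
    _controller: '\Drupal\flowdrop_chat\Controller\ChatController::sendMessage'
  methods: [POST]
  requirements:
    _permission: 'use flowdrop_chat'
    # Rejects the request unless a valid token is present
    # in the X-CSRF-Token request header.
    _csrf_request_header_check: 'TRUE'
```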
## Data Isolation
Chat history is stored keyed to both the workflow ID and the authenticated user ID. Each user's conversation is kept strictly separate, preventing cross-user history access.
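The compound keying can be illustrated with a small sketch; the key format and function name are assumptions, since the report does not show the module's actual storage schema:

```php
<?php

declare(strict_types=1);

/**
 * Builds the storage key for one user's history on one workflow.
 *
 * Because the authenticated user ID is part of the key, user A can
 * never load or clear user B's conversation for the same workflow.
 * (Key layout is illustrative, not the module's actual schema.)
 */
function flowdrop_chat_history_key(string $workflowId, int $uid): string {
  return sprintf('flowdrop_chat:%s:%d', $workflowId, $uid);
}
```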
## Input Validation
Incoming request data is validated before processing:
- The `message` field is required, must be a non-empty string, and is capped at 10,000 characters.
- The admin settings form validates that `max_history_length` is an integer in the range 1–500.
- The system prompt template, if customised, is validated to permit only the four documented placeholders (`{{AVAILABLE_NODE_TYPES}}`, `{{WORKFLOW_STATE}}`, `{{CHAT_HISTORY}}`, `{{USER_MESSAGE}}`). Unknown placeholders are rejected at save time.
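The two checks above can be sketched in plain PHP. This is a minimal illustration of the stated rules, not the module's actual validation code; function names are hypothetical:

```php
<?php

declare(strict_types=1);

/** Placeholders permitted in a customised system prompt template. */
const FLOWDROP_CHAT_PLACEHOLDERS = [
  '{{AVAILABLE_NODE_TYPES}}',
  '{{WORKFLOW_STATE}}',
  '{{CHAT_HISTORY}}',
  '{{USER_MESSAGE}}',
];

/** Returns any placeholder in $template that is not on the allow-list. */
function flowdrop_chat_unknown_placeholders(string $template): array {
  preg_match_all('/\{\{[A-Z_]+\}\}/', $template, $matches);
  return array_values(array_diff($matches[0], FLOWDROP_CHAT_PLACEHOLDERS));
}

/** Checks a chat message: non-empty string, at most 10,000 characters. */
function flowdrop_chat_message_is_valid(mixed $message): bool {
  return is_string($message)
    && trim($message) !== ''
    && mb_strlen($message) <= 10000;
}
```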
## Prompt Injection Mitigations
The module applies multiple mitigations against prompt injection attacks — attempts by users to embed instructions in their messages or workflow context that manipulate the AI's behaviour:
- History isolation: All prior chat messages injected into the system prompt are XML-encoded using `htmlspecialchars()` with `ENT_XML1 | ENT_QUOTES` and wrapped in structured XML tags (`<message role="...">...</message>`). This prevents message content from being interpreted as system-level instructions by the LLM.
- User message separation: The current user message is passed to the LLM as a distinct user-role turn, not embedded in the system prompt. This architectural separation limits the authority of user-supplied input.
- No client-controlled history: The API does not accept a client-supplied conversation history. History is loaded exclusively from server-side storage, preventing fabrication or replay of prior assistant responses.
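The history-isolation encoding described above can be demonstrated with standard PHP. The function name is an assumption; the `htmlspecialchars()` flags and the `<message role="...">` wrapper are as stated in this report:

```php
<?php

declare(strict_types=1);

/**
 * Encodes one prior message for safe inclusion in the system prompt.
 *
 * ENT_XML1 | ENT_QUOTES escapes <, >, &, " and ', so message content
 * cannot close the wrapping <message> tag or forge a new role attribute.
 */
function flowdrop_chat_encode_turn(string $role, string $content): string {
  $safeRole = htmlspecialchars($role, ENT_XML1 | ENT_QUOTES);
  $safeContent = htmlspecialchars($content, ENT_XML1 | ENT_QUOTES);
  return sprintf('<message role="%s">%s</message>', $safeRole, $safeContent);
}
```

An injection attempt such as `</message><message role="system">...` is neutralised: the angle brackets and quotes reach the LLM as XML entities (`&lt;`, `&gt;`, `&quot;`) inside a single user-role `<message>` element.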
## LLM Provider Configuration
The AI provider and model are selected from a restricted dropdown of configured and validated models, populated from the registered plugin registry. Arbitrary provider strings cannot be submitted. The system falls back gracefully to a configured system default when no provider is explicitly selected for chat.
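The allow-list-plus-fallback behaviour reduces to a simple resolution step. A minimal sketch under stated assumptions (function signature and model identifiers are illustrative, not the module's API):

```php
<?php

declare(strict_types=1);

/**
 * Resolves the model for a chat request. Only identifiers present in
 * the registry of configured, validated models are accepted; any other
 * value, or no selection at all, falls back to the system default.
 *
 * @param list<string> $registry Model IDs from the plugin registry.
 */
function flowdrop_chat_resolve_model(?string $requested, array $registry, string $default): string {
  // Strict in_array() prevents loose-comparison surprises.
  if ($requested !== null && in_array($requested, $registry, true)) {
    return $requested;
  }
  return $default;
}
```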
## Logging and Error Handling
Errors are logged to a dedicated `flowdrop_chat` logging channel. The API returns generic, non-descriptive error messages to clients, avoiding exposure of internal configuration details (model names, provider identifiers, stack traces) in responses.
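The split between detailed server-side logging and a generic client response can be sketched as follows. The function and the exact wording of the generic message are hypothetical; only the pattern (detail to the log channel, nothing internal in the response body) comes from the report:

```php
<?php

declare(strict_types=1);

/**
 * Maps an internal failure to a client-safe payload.
 *
 * The detailed message, which may name providers or models, goes only
 * to the logger; the client receives a fixed, generic error string.
 */
function flowdrop_chat_client_error(\Throwable $e, callable $log): array {
  // Full detail stays server-side, on the flowdrop_chat channel.
  $log(sprintf('[flowdrop_chat] %s: %s', get_class($e), $e->getMessage()));
  // The response body never echoes internal configuration.
  return ['error' => 'The chat request could not be completed.'];
}
```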
## Code Quality
All PHP source files use `declare(strict_types=1)` with explicit type annotations throughout. Responsibilities are separated across dedicated service classes (LLM client, prompt builder, chat orchestrator, memory manager), reducing the blast radius of any individual component and improving auditability.
## Recommendations for Operators
- Restrict the `administer flowdrop_chat` permission to trusted site administrators only.
- Use the `entity` memory backend (the default) for production deployments to ensure proper per-user isolation of chat history.
- When customising the system prompt template, use only the four documented placeholders. Do not embed external instructions or dynamic content outside the supported placeholder syntax.
- Keep the `flowdrop_ai_provider` module up to date; the security of LLM API communication (credentials, TLS, provider responses) depends on it.
- Consider applying request rate limiting to the chat endpoints at the web server or reverse proxy layer to control LLM API cost exposure.
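For the rate-limiting recommendation, an nginx sketch is one option. The endpoint path, zone name, and limits below are assumptions to be tuned per site, not values from the module:

```nginx
# Illustrative sketch: throttle chat endpoints per client IP.
# ~10 requests/minute with a small burst; tune to expected usage and LLM cost.
limit_req_zone $binary_remote_addr zone=flowdrop_chat:10m rate=10r/m;

server {
    location /api/flowdrop-chat/ {
        limit_req zone=flowdrop_chat burst=5 nodelay;
        # ... existing proxy/fastcgi configuration ...
    }
}
```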