Tools are how MCP servers expose functionality to AI assistants. A good tool has a clear description, typed parameters, and predictable output. A bad tool confuses the AI about when to use it, accepts garbage inputs, and returns unstructured blobs. The difference is in how you design the interface.
Docstrings as Descriptions
FastMCP extracts tool descriptions directly from Python docstrings. This is the text the AI sees when deciding which tool to call:
```python
@mcp.tool()
def get_table_schema(table_name: str) -> str:
    """Get the schema definition for a database table.

    Returns column names, data types, and constraints for the specified
    table. Useful for understanding table structure before writing queries.

    Args:
        table_name: Fully qualified table name (e.g., 'analytics.orders')

    Returns:
        Schema definition as formatted text
    """
    # Implementation here
    return f"Schema for {table_name}: id INT PRIMARY KEY, name VARCHAR(255)..."
```

Write descriptions that help the model understand when and why to use the tool, not just what it does. “Get the schema definition for a database table” tells the AI what the tool does. “Useful for understanding table structure before writing queries” tells it when to use it. Both matter.
Include examples in parameter descriptions — 'analytics.orders' tells the AI the expected format far more effectively than a lengthy format specification. The AI generalizes from examples.
Pydantic Models for Structured Output
For complex return values, use Pydantic models to ensure consistent structure:
```python
from pydantic import BaseModel, Field


class ValidationResult(BaseModel):
    """Result of a data quality validation check."""

    table_name: str
    row_count: int = Field(description="Total rows examined")
    null_count: int = Field(description="Number of null values found")
    duplicate_count: int = Field(description="Number of duplicate rows")
    is_valid: bool = Field(description="Whether the table passed validation")


@mcp.tool()
def run_data_quality_check(table_name: str) -> ValidationResult:
    """Run comprehensive data quality checks on a table.

    Validates completeness, uniqueness, and data integrity.

    Args:
        table_name: The table to validate

    Returns:
        Detailed validation results
    """
    # Run actual checks against your database
    return ValidationResult(
        table_name=table_name,
        row_count=10000,
        null_count=5,
        duplicate_count=0,
        is_valid=True,
    )
```

FastMCP serializes Pydantic models to JSON automatically, and the Field descriptions enrich the output schema. The AI knows it will get back a structured object with specific fields, which means it can reliably extract individual values and reason about them.
The alternative — returning a formatted string — works for simple tools but breaks down when the AI needs to compare values across multiple tool calls or feed results into another tool. Structured output makes tool composition possible.
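To see what composition gains from structure, here is a minimal sketch of consuming such a result (assuming Pydantic v2; the model is a trimmed copy of `ValidationResult` above):

```python
from pydantic import BaseModel, Field


class ValidationResult(BaseModel):
    """Trimmed copy of the model above, for illustration."""

    table_name: str
    null_count: int = Field(description="Number of null values found")
    is_valid: bool = Field(description="Whether the table passed validation")


# What the AI client effectively receives: a JSON object with named fields,
# not a prose blob it would have to re-parse with string matching.
result = ValidationResult(table_name="analytics.orders", null_count=5, is_valid=True)
payload = result.model_dump()  # Pydantic v2; use .dict() on v1

# Individual values can be pulled out reliably and passed to another tool.
null_count = payload["null_count"]
```

A formatted string would force the AI to parse prose to recover `null_count`; the structured payload makes it a dictionary lookup.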
Input Validation with Schemas
Type hints define the input schema. Use Python’s type system and Pydantic for validation:
```python
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field


class Environment(str, Enum):
    PRODUCTION = "production"
    STAGING = "staging"
    DEVELOPMENT = "development"


class QueryParams(BaseModel):
    """Parameters for database queries."""

    limit: int = Field(default=100, ge=1, le=10000, description="Maximum rows to return")
    timeout_seconds: int = Field(default=30, ge=1, le=300, description="Query timeout")


@mcp.tool()
def run_query(
    query: str,
    environment: Environment = Environment.PRODUCTION,
    params: Optional[QueryParams] = None,
) -> str:
    """Execute a read-only SQL query.

    Args:
        query: SQL SELECT statement to execute
        environment: Target environment
        params: Optional query parameters

    Returns:
        Query results as JSON
    """
    effective_params = params or QueryParams()
    # Execute with validated parameters
    return f"Results from {environment.value} (limit: {effective_params.limit})"
```

Three patterns worth noting here:
Enums constrain choices. The Environment enum means the AI can only pass “production”, “staging”, or “development.” In tool UIs, enums render as dropdowns. This prevents the AI from inventing environment names.
Pydantic Field constraints prevent invalid inputs. ge=1, le=10000 on the limit field means the validation rejects a limit of 0 or 50000 before your code ever runs. The error message is automatic and clear.
Defaults encode safe behavior. limit=100 means the AI doesn’t need to specify a limit for casual queries. timeout_seconds=30 means a hung query won’t block forever. Defaults should be the safe choice, with the AI explicitly opting into more aggressive parameters when the user’s request warrants it.
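All three behaviors can be checked directly, without an MCP server in the loop. A sketch reusing the `Environment` and `QueryParams` definitions from above (validation-relevant fields only):

```python
from enum import Enum

from pydantic import BaseModel, Field, ValidationError


class Environment(str, Enum):
    PRODUCTION = "production"
    STAGING = "staging"
    DEVELOPMENT = "development"


class QueryParams(BaseModel):
    limit: int = Field(default=100, ge=1, le=10000)
    timeout_seconds: int = Field(default=30, ge=1, le=300)


# Enums constrain choices: an invented environment name is rejected outright.
invalid_env_rejected = False
try:
    Environment("prod")
except ValueError:
    invalid_env_rejected = True

# Field constraints reject out-of-range inputs before any tool code runs.
oversized_limit_rejected = False
try:
    QueryParams(limit=50000)
except ValidationError:
    oversized_limit_rejected = True

# Defaults encode safe behavior: omitted parameters fall back to safe values.
defaults = QueryParams()
```

The rejections happen inside Pydantic, so your tool body only ever sees inputs that already satisfy the schema.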
Design Principles
Principles that affect tool reliability in practice:
One tool, one action. A tool that searches tables and also runs quality checks is harder for the AI to reason about than two separate tools. Split composite operations into individual tools with clear names.
Fail with useful errors. When a table doesn’t exist or a query fails, return a structured error message, not an exception traceback. The AI needs to understand what went wrong to decide its next step — retry with different parameters, ask the user for clarification, or try a different approach.
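One way to return a structured error instead of a traceback — a minimal sketch in which the in-memory table catalog and the error field names (`error`, `message`, `hint`) are hypothetical choices, not part of FastMCP:

```python
import json


def get_table_schema(table_name: str) -> str:
    """Return a schema, or a structured error the AI can act on."""
    # Hypothetical catalog standing in for a real database lookup.
    known_tables = {"analytics.orders": "id INT PRIMARY KEY, ..."}

    schema = known_tables.get(table_name)
    if schema is None:
        # Name the failure and suggest a next step, rather than letting
        # an exception traceback leak through as the tool's output.
        return json.dumps({
            "error": "table_not_found",
            "message": f"Table '{table_name}' does not exist.",
            "hint": "Use a fully qualified name such as 'analytics.orders'.",
        })
    return json.dumps({"schema": schema})
```

Given `"table_not_found"` plus a hint, the AI can correct the table name and retry; given a raw traceback, it can only guess.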
Name for intent, not implementation. search_tables is better than query_catalog_api. get_pipeline_status is better than call_airflow_rest_endpoint. The AI matches user intent to tool names; implementation details in the name add noise.
Test with the AI, not just in isolation. A tool that works in unit tests may confuse the AI because of an ambiguous description or an overly complex parameter structure. The MCP Server Testing and Debugging note covers the full testing workflow; test with a real AI client as early as possible.