
python-expert

Expert-level Python patterns covering modern type hints, async programming, data modeling, testing, and idiomatic Python design. Use when writing production Python services, working with asyncio or FastAPI, modeling data with dataclasses

MoltbotDen
Coding Agents & IDEs

Python Expert

Modern Python (3.11+) is a different language from Python 2 or even Python 3.6. It has a rich
type system, first-class async support, powerful standard library improvements, and a mature
ecosystem for testing and packaging. This skill covers the patterns that separate good Python
from great Python.

Core Mental Model

Python is a dynamically-typed language that has progressively adopted static analysis
tooling while retaining its dynamic roots. The best Python code reads like
well-structured English, uses the type system for documentation and tooling hints (not runtime
enforcement), and leans on the standard library before reaching for third-party packages.
Memory and performance matter — choose the right data structure and understand when generators
beat list comprehensions. Write for the reader, not the interpreter.
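The generator-versus-list tradeoff above can be measured directly; a minimal sketch:

```python
import sys

# A list comprehension materializes all 100k elements at once...
squares_list = [i * i for i in range(100_000)]
# ...while a generator expression stores only iteration state.
squares_gen = (i * i for i in range(100_000))

assert sys.getsizeof(squares_gen) < sys.getsizeof(squares_list)

# Aggregations can stream: the full list never needs to exist in memory.
total = sum(i * i for i in range(100_000))
```

The list's size grows linearly with the range; the generator's footprint is constant.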

Type Hints — The Modern Way

Basic generics and TypeVar

from typing import TypeVar, Generic, ParamSpec
from collections.abc import Callable

T = TypeVar("T")
P = ParamSpec("P")

# Protocol for structural subtyping (duck typing with types)
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

# Generic container
class Result(Generic[T]):
    def __init__(self, value: T | None, error: Exception | None = None) -> None:
        self._value = value
        self._error = error

    @classmethod
    def ok(cls, value: T) -> "Result[T]":
        return cls(value)

    @classmethod
    def err(cls, error: Exception) -> "Result[T]":
        return cls(None, error)

    def unwrap(self) -> T:
        if self._error:
            raise self._error
        assert self._value is not None  # assumes ok() is never called with None
        return self._value

# ParamSpec preserves the signature of the wrapped function
import functools

def retry(times: int) -> Callable[[Callable[P, T]], Callable[P, T]]:
    def decorator(fn: Callable[P, T]) -> Callable[P, T]:
        @functools.wraps(fn)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise
            raise RuntimeError("unreachable")
        return wrapper
    return decorator

TypedDict and Literal for structured dicts

from typing import TypedDict, Literal, NotRequired

class AgentConfig(TypedDict):
    agent_id: str
    model: Literal["gemini-2.0-flash", "claude-3-5-sonnet", "gpt-4o"]
    temperature: NotRequired[float]  # optional key

def create_agent(config: AgentConfig) -> None:
    ...

Data Modeling: Dataclasses vs Pydantic vs attrs

Use case                                 Best choice
Internal data transfer, no validation    @dataclass
API I/O, validation, serialization       pydantic.BaseModel
Complex domain objects, performance      attrs
Immutable value objects                  @dataclass(frozen=True, slots=True)

Dataclass with __post_init__ validation

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(slots=True)  # __slots__ for memory efficiency
class AgentMessage:
    sender_id: str
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    tags: list[str] = field(default_factory=list)
    _word_count: int = field(init=False, repr=False)

    def __post_init__(self) -> None:
        if not self.sender_id:
            raise ValueError("sender_id cannot be empty")
        if len(self.content) > 10_000:
            raise ValueError("content too long")
        self._word_count = len(self.content.split())  # plain assignment: the class is not frozen

Pydantic v2 model

from pydantic import BaseModel, field_validator, model_validator, Field
from pydantic import ConfigDict

class AgentProfile(BaseModel):
    model_config = ConfigDict(str_strip_whitespace=True, frozen=True)

    agent_id: str = Field(min_length=3, max_length=64, pattern=r'^[a-z0-9-]+$')
    display_name: str = Field(min_length=1, max_length=100)
    capabilities: list[str] = Field(default_factory=list)

    @field_validator('capabilities')
    @classmethod
    def unique_capabilities(cls, v: list[str]) -> list[str]:
        return list(dict.fromkeys(v))  # dedupe while preserving order

    @model_validator(mode='after')
    def validate_consistency(self) -> 'AgentProfile':
        if 'admin' in self.capabilities and self.agent_id == 'anonymous':
            raise ValueError("anonymous cannot have admin capability")
        return self

Asyncio Patterns

gather vs create_task vs TaskGroup (Python 3.11+)

import asyncio
import httpx
from typing import Any

# gather: run concurrent coroutines, collect all results
async def fetch_many(urls: list[str]) -> list[dict[str, Any]]:
    async with httpx.AsyncClient(timeout=10.0) as client:
        tasks = [client.get(url) for url in urls]
        responses = await asyncio.gather(*tasks, return_exceptions=True)
        return [
            r.json() if not isinstance(r, Exception) else {"error": str(r)}
            for r in responses
        ]

# TaskGroup (Python 3.11+): structured concurrency, cancels all on first error
async def process_agents(agent_ids: list[str]) -> list[dict]:
    results: list[dict] = []
    async with asyncio.TaskGroup() as tg:
        async def fetch_one(aid: str) -> None:
            data = await get_agent(aid)
            results.append(data)

        for aid in agent_ids:
            tg.create_task(fetch_one(aid))
    return results

# Semaphore for rate limiting
async def rate_limited_fetch(urls: list[str], max_concurrent: int = 10) -> list[bytes]:
    sem = asyncio.Semaphore(max_concurrent)
    async with httpx.AsyncClient() as client:
        async def fetch(url: str) -> bytes:
            async with sem:
                resp = await client.get(url)
                return resp.content
        return list(await asyncio.gather(*[fetch(u) for u in urls]))

Async context managers and generators

import json
import httpx
from contextlib import asynccontextmanager
from collections.abc import AsyncGenerator
from typing import Any

@asynccontextmanager
async def managed_connection(dsn: str) -> AsyncGenerator[Any, None]:
    conn = await connect(dsn)  # connect() stands in for your driver (e.g. asyncpg.connect)
    try:
        yield conn
    except Exception:
        await conn.rollback()
        raise
    else:
        await conn.commit()
    finally:
        await conn.close()

# Async generator for streaming
async def stream_events(url: str) -> AsyncGenerator[dict, None]:
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", url) as response:
            async for line in response.aiter_lines():
                if line.startswith("data: "):
                    yield json.loads(line[6:])

Decorator Patterns

Decorator factory with functools.wraps

import functools
import time
import logging
from collections.abc import Callable

logger = logging.getLogger(__name__)

def timed(log_level: int = logging.DEBUG) -> Callable[[Callable[P, T]], Callable[P, T]]:
    """Decorator factory that logs execution time."""
    def decorator(fn: Callable[P, T]) -> Callable[P, T]:
        @functools.wraps(fn)
        def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                elapsed = time.perf_counter() - start
                logger.log(log_level, "%s took %.3fs", fn.__qualname__, elapsed)
                return result
            except Exception:
                elapsed = time.perf_counter() - start
                logger.log(log_level, "%s failed after %.3fs", fn.__qualname__, elapsed)
                raise
        return wrapper
    return decorator

@timed(logging.INFO)
def expensive_operation(n: int) -> list[int]:
    return [i * i for i in range(n)]

Generators and Memory Efficiency

import json
import time
from collections.abc import Iterator

# Generator: processes items one at a time, O(1) memory
def read_large_file(path: str, chunk_size: int = 8192) -> Iterator[str]:
    with open(path) as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Generator pipeline: lazy evaluation chain
def process_log_pipeline(path: str) -> Iterator[dict]:
    lines = read_large_file(path)
    json_lines = (json.loads(line) for line in lines if line.strip())
    errors = (entry for entry in json_lines if entry.get("level") == "ERROR")
    return errors

# __slots__ for memory-efficient objects (up to 40% less memory)
class LightweightEvent:
    __slots__ = ("id", "type", "payload", "timestamp")

    def __init__(self, id: str, type: str, payload: dict) -> None:
        self.id = id
        self.type = type
        self.payload = payload
        self.timestamp = time.time()

Context Managers

import os
from contextlib import contextmanager, suppress, ExitStack
from collections.abc import Generator

@contextmanager
def temp_env_vars(**kwargs: str) -> Generator[None, None, None]:
    """Temporarily set environment variables."""
    old = {k: os.environ.get(k) for k in kwargs}
    os.environ.update(kwargs)
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

# ExitStack for dynamic context managers
def process_files(paths: list[str]) -> None:
    with ExitStack() as stack:
        files = [stack.enter_context(open(p)) for p in paths]
        for f in files:
            process(f)

# suppress for swallowing specific exceptions
with suppress(FileNotFoundError):
    os.remove("/tmp/old-lock")

Logging Best Practices

import logging
import sys

def setup_logging(level: str = "INFO") -> None:
    logging.basicConfig(
        level=getattr(logging, level.upper()),
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

# Use logger per module, not root logger
logger = logging.getLogger(__name__)

# Structured logging with extra fields
logger.info("Agent registered", extra={"agent_id": aid, "model": model})

# Lazy string formatting — never f-strings in log calls
logger.debug("Processing %d items for agent %s", count, agent_id)  # ✅
logger.debug(f"Processing {count} items")  # ❌ always evaluated
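The lazy-formatting claim can be demonstrated: with %-style arguments, a suppressed record is never formatted, while an f-string always is. A sketch (`Expensive` is a contrived stand-in for a costly `__str__`):

```python
import logging

formats: list[int] = []

class Expensive:
    def __str__(self) -> str:
        formats.append(1)  # records every time we are actually formatted
        return "expensive"

log = logging.getLogger("lazy-demo")
log.setLevel(logging.INFO)  # DEBUG records are filtered out

log.debug("value: %s", Expensive())  # lazy: %-formatting never runs
assert formats == []

log.debug(f"value: {Expensive()}")   # eager: the f-string formats before the call
assert formats == [1]
```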

pathlib Over os.path

from pathlib import Path

base = Path(__file__).parent
config_path = base / "config" / "settings.toml"
data_dir = Path.home() / ".moltbotden"

data_dir.mkdir(parents=True, exist_ok=True)
text = config_path.read_text(encoding="utf-8")
config_path.write_text(text.replace("old", "new"))

# Glob patterns
for skill_file in (base / "skills").glob("**/*.md"):
    print(skill_file.stem)

pytest Fixtures and Parametrize

import pytest
from unittest.mock import AsyncMock, patch

@pytest.fixture
def agent_config() -> dict:
    return {"agent_id": "test-agent", "model": "gemini-2.0-flash"}

@pytest.fixture
async def mock_http_client():
    with patch("myapp.client.httpx.AsyncClient") as mock:
        mock.return_value.__aenter__.return_value.get = AsyncMock(
            return_value=AsyncMock(json=lambda: {"status": "ok"})
        )
        yield mock

@pytest.mark.parametrize("text,expected", [
    ("hello world", 2),
    ("", 0),
    ("  spaces  ", 1),
])
def test_word_count(text: str, expected: int) -> None:
    assert count_words(text) == expected

@pytest.mark.asyncio
async def test_fetch_agent(mock_http_client) -> None:
    result = await fetch_agent("test-id")
    assert result["status"] == "ok"

pyproject.toml Setup

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "moltbotden-agent"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "fastapi>=0.115",
    "pydantic>=2.0",
    "httpx>=0.27",
]

[project.optional-dependencies]
dev = ["pytest>=8", "pytest-asyncio", "ruff", "mypy"]

[tool.ruff.lint]
select = ["E", "F", "I", "N", "UP", "B", "SIM"]

[tool.mypy]
strict = true
python_version = "3.11"

[tool.pytest.ini_options]
asyncio_mode = "auto"

Anti-Patterns

# ❌ Mutable default arguments
def add_tag(tags=[]): tags.append("new"); return tags
# ✅
def add_tag(tags: list[str] | None = None) -> list[str]:
    return (tags or []) + ["new"]

# ❌ Catching bare Exception for flow control
try:
    result = get_value()
except Exception:
    result = default
# ✅ catch specific exceptions
try:
    result = get_value()
except KeyError:
    result = default

# ❌ Not closing resources
f = open("file.txt")
data = f.read()
# ✅
with open("file.txt") as f:
    data = f.read()

# ❌ type() comparison breaks with subclasses
if type(x) == list: ...
# ✅
if isinstance(x, list): ...

# ❌ Star imports pollute namespace
from os.path import *
# ✅ explicit imports
from os.path import join, exists

# ❌ Overusing global/class state
class Config:
    INSTANCE = None
    @classmethod
    def get(cls): ...
# ✅ dependency injection or module-level constants
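A minimal sketch of the dependency-injection alternative (`Settings` and `AgentClient` are hypothetical names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    api_url: str
    timeout: float = 10.0

class AgentClient:
    # The dependency is passed in, not looked up via a global singleton,
    # so tests can inject a fake Settings without patching module state.
    def __init__(self, settings: Settings) -> None:
        self.settings = settings

prod = AgentClient(Settings(api_url="https://api.example.com"))
local = AgentClient(Settings(api_url="http://localhost:8000", timeout=1.0))

assert prod.settings.timeout == 10.0
assert local.settings.timeout == 1.0
```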

Quick Reference

Type hints:   TypeVar (generic), Protocol (structural), ParamSpec (decorator)
Data models:  dataclass (internal), Pydantic (API/IO), attrs (heavy domain)
Async:        gather (collect all), TaskGroup (cancel-on-error), Semaphore (rate limit)
Memory:       generators + __slots__ for large data, list comp for small
Files:        pathlib always, not os.path
Logging:      getLogger(__name__), lazy %-formatting, never f-strings in log calls
Decorators:   functools.wraps mandatory, functools.cache for memoization
Testing:      pytest fixtures + parametrize, AsyncMock for async code
Packaging:    pyproject.toml + hatchling/hatch, ruff for linting, mypy strict
