Using OpenClaw with NotebookLM: Agent Memory Meets AI Analysis
OpenClaw agents accumulate massive amounts of data: conversation logs, research notes, project documentation, code snippets. NotebookLM, Google's AI-powered research tool, excels at analyzing and synthesizing large document collections. Combining them creates a powerful workflow for agents to understand their own history and generate insights.
This guide shows you how to pipe OpenClaw agent memory into NotebookLM for analysis, summary generation, and knowledge extraction.
Why Combine OpenClaw and NotebookLM?
OpenClaw agents work with:
- Daily conversation logs
- Long-term memory files
- Project documentation
- Code repositories
- Research notes
- Task histories
NotebookLM can:
- Summarize lengthy documents
- Extract key insights
- Answer questions across multiple sources
- Generate citations
- Create study guides
- Identify patterns and themes
Together, they enable agents to reflect on their work, identify recurring patterns, and generate meta-insights that are hard to see from any single session.
The Basic Workflow
Here's the typical flow:
1. The agent accumulates memory files as it works
2. An export script bundles the relevant files into one document
3. You upload that document to NotebookLM as a source
4. You query NotebookLM for summaries, patterns, and insights
5. The insights feed back into the agent's long-term memory
This creates a feedback loop where the agent can learn from its own history.
Setting Up the Integration
Step 1: Organize Your Agent Memory
First, structure your OpenClaw memory files for export:
clawd/
├── memory/
│ ├── 2026-02-01.md
│ ├── 2026-02-02.md
│ ├── 2026-02-03.md
│ └── ...
├── MEMORY.md
├── projects/
│ ├── project-a/README.md
│ └── project-b/README.md
└── learnings/
└── insights.md
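If you're starting from scratch, a small script can scaffold this layout. A minimal sketch (the directory and file names mirror the tree above; adjust to taste):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Directories and seed files the rest of this guide assumes.
const LAYOUT = {
  dirs: ["memory", "projects", "learnings"],
  files: ["MEMORY.md", path.join("learnings", "insights.md")],
};

function scaffoldMemoryLayout(root: string): void {
  for (const dir of LAYOUT.dirs) {
    // recursive: true makes this a no-op if the directory exists
    fs.mkdirSync(path.join(root, dir), { recursive: true });
  }
  for (const file of LAYOUT.files) {
    const full = path.join(root, file);
    // Only seed files that don't exist yet, so reruns are safe
    if (!fs.existsSync(full)) fs.writeFileSync(full, "");
  }
}
```

Running it repeatedly is harmless, so you can call it at agent startup.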
Step 2: Create Export Script
Build a script to bundle memory files for NotebookLM:
import fs from "fs/promises";
import path from "path";
// Parse a date from filenames like "2026-02-01.md"; returns null for
// files that don't follow the convention (e.g. MEMORY.md)
function parseFilenameDate(filename: string): Date | null {
const match = filename.match(/^(\d{4}-\d{2}-\d{2})\.md$/);
return match ? new Date(match[1]) : null;
}
async function exportMemoryForNotebookLM(dateRange: { start: Date; end: Date }) {
const memoryDir = "./memory";
const outputPath = "./exports/notebooklm-export.md";
const files = await fs.readdir(memoryDir);
const relevantFiles = files.filter(f => {
const date = parseFilenameDate(f);
return date !== null && date >= dateRange.start && date <= dateRange.end;
});
let combined = `# Agent Memory Export\n\n`;
combined += `**Period**: ${dateRange.start.toISOString()} to ${dateRange.end.toISOString()}\n\n`;
combined += `---\n\n`;
for (const file of relevantFiles) {
const content = await fs.readFile(path.join(memoryDir, file), "utf-8");
combined += `## ${file}\n\n${content}\n\n---\n\n`;
}
// Make sure the exports directory exists before writing
await fs.mkdir(path.dirname(outputPath), { recursive: true });
await fs.writeFile(outputPath, combined);
console.log(`Exported ${relevantFiles.length} files to ${outputPath}`);
}
// Export last 30 days
await exportMemoryForNotebookLM({
start: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
end: new Date()
});
Step 3: Upload to NotebookLM
NotebookLM doesn't currently have a public API, so the upload step is manual: open NotebookLM, create a notebook, and add the exported markdown file as a source.
Step 4: Query and Analyze
Once uploaded, ask NotebookLM questions like:
- "What were the main themes in my work over the last month?"
- "List all the bugs I encountered and how I fixed them"
- "What patterns do you see in my problem-solving approach?"
- "Summarize my conversations about [topic]"
- "What did I learn about [technology]?"
Advanced Patterns
Pattern 1: Weekly Reflection
Automate weekly memory exports and analysis:
async function weeklyReflection() {
const lastWeek = {
start: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000),
end: new Date()
};
// Export memory
await exportMemoryForNotebookLM(lastWeek);
console.log("Memory exported. Upload to NotebookLM and ask:");
console.log("1. What were my biggest accomplishments this week?");
console.log("2. What challenges did I face?");
console.log("3. What should I focus on next week?");
console.log("4. What patterns do you notice in my work?");
}
Pattern 2: Project Deep Dive
Analyze everything related to a specific project:
async function exportProjectContext(projectName: string) {
const sources: { name: string; content: string }[] = [];
// Project documentation
const projectDocs = await fs.readFile(`./projects/${projectName}/README.md`, "utf-8");
sources.push({ name: "Project README", content: projectDocs });
// Related memory entries
const memoryFiles = await fs.readdir("./memory");
for (const file of memoryFiles) {
const content = await fs.readFile(`./memory/${file}`, "utf-8");
if (content.includes(projectName)) {
sources.push({ name: file, content });
}
}
// Related code files (findCodeFiles is a helper that walks the
// project directory and returns source file paths)
const codeFiles = await findCodeFiles(`./projects/${projectName}`);
for (const file of codeFiles.slice(0, 10)) { // Limit to 10 files
const content = await fs.readFile(file, "utf-8");
sources.push({ name: path.basename(file), content });
}
// Combine into single export
const combined = sources.map(s =>
`# ${s.name}\n\n${s.content}\n\n---\n\n`
).join("");
await fs.writeFile(`./exports/${projectName}-context.md`, combined);
console.log(`Exported ${sources.length} sources for ${projectName}`);
}
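The sketch above calls a findCodeFiles helper that isn't defined in this guide. One possible implementation walks a directory recursively and keeps files with common code extensions (the extension list is an assumption; adjust it for your stack):

```typescript
import fs from "fs/promises";
import path from "path";

// Extensions treated as "code" — an assumption; tweak for your projects.
const CODE_EXTENSIONS = new Set([".ts", ".js", ".py", ".go", ".rs", ".md"]);

async function findCodeFiles(dir: string): Promise<string[]> {
  const results: string[] = [];
  const entries = await fs.readdir(dir, { withFileTypes: true });
  for (const entry of entries) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      // Skip dependency and hidden directories
      if (entry.name === "node_modules" || entry.name.startsWith(".")) continue;
      results.push(...(await findCodeFiles(full)));
    } else if (CODE_EXTENSIONS.has(path.extname(entry.name))) {
      results.push(full);
    }
  }
  return results;
}
```

Note the hard skip of node_modules: without it, a single install can balloon the export past any upload limit.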
Upload the export to NotebookLM and ask:
- "Give me a technical overview of this project"
- "What problems did I solve?"
- "What are the main files and their purposes?"
- "What bugs or issues came up?"
Pattern 3: Learning Extraction
Pull out all learnings and insights:
async function extractLearnings(topic: string) {
const memoryFiles = await fs.readdir("./memory");
const relevantContent: { date: string; excerpts: string[] }[] = [];
for (const file of memoryFiles) {
const content = await fs.readFile(`./memory/${file}`, "utf-8");
// Find paragraphs mentioning the topic
const paragraphs = content.split("\n\n");
const relevant = paragraphs.filter(p =>
p.toLowerCase().includes(topic.toLowerCase())
);
if (relevant.length > 0) {
relevantContent.push({
date: file,
excerpts: relevant
});
}
}
// "export" is a reserved word in JavaScript/TypeScript, so use another name
const output = relevantContent.map(entry =>
`## ${entry.date}\n\n${entry.excerpts.join("\n\n")}\n\n`
).join("");
await fs.writeFile(`./exports/learnings-${topic}.md`, output);
console.log("Upload to NotebookLM and ask:");
console.log(`"Summarize everything I learned about ${topic}"`);
}
Pattern 4: Conversation Analysis
Analyze patterns in how you communicate:
async function analyzeConversationPatterns() {
// Export all conversations from the last month (the export function
// writes to disk and returns nothing, so there's nothing to assign)
await exportMemoryForNotebookLM({
start: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000),
end: new Date()
});
console.log("Upload to NotebookLM and ask:");
console.log("1. What topics do I discuss most frequently?");
console.log("2. How has my communication style evolved?");
console.log("3. What questions do I ask repeatedly?");
console.log("4. What are my common assumptions or biases?");
}
Use Cases
Use Case 1: End-of-Month Review
Generate a comprehensive monthly report:
async function monthlyReport(year: number, month: number) {
const startDate = new Date(year, month - 1, 1);
const endDate = new Date(year, month, 0);
await exportMemoryForNotebookLM({ start: startDate, end: endDate });
const queries = [
"List all projects I worked on this month",
"What were my biggest achievements?",
"What challenges did I face?",
"What did I learn?",
"What should I prioritize next month?",
"Create a bullet-point summary of this month"
];
console.log("Ask NotebookLM these questions:");
queries.forEach((q, i) => console.log(`${i + 1}. ${q}`));
}
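The end-of-month calculation above leans on a JavaScript Date quirk: new Date(year, monthIndex, 0) rolls back to the last day of the previous month, so passing a 1-based month yields that month's final day:

```typescript
// new Date(year, monthIndex, 0) resolves day 0 to the last day of the
// preceding month. With a 1-based month argument, that is exactly the
// last day of the month you asked about.
function lastDayOfMonth(year: number, month: number): number {
  return new Date(year, month, 0).getDate();
}
```

This handles leap years for free, since Date does the calendar math.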
Use Case 2: Code Review Analysis
Analyze your code review feedback patterns:
async function analyzeCodeReviews() {
const memoryFiles = await fs.readdir("./memory");
const reviewContent: string[] = [];
for (const file of memoryFiles) {
const content = await fs.readFile(`./memory/${file}`, "utf-8");
// Extract code review sections (extractSections is a helper that
// returns the sections whose headings match the pattern)
const reviews = extractSections(content, /## Code Review|## PR Review/);
if (reviews.length > 0) {
reviewContent.push(...reviews);
}
}
// "export" is a reserved word, so use another name
const output = reviewContent.join("\n\n---\n\n");
await fs.writeFile("./exports/code-reviews.md", output);
console.log("Upload to NotebookLM and ask:");
console.log("- What are the most common issues in my code?");
console.log("- What feedback do I receive repeatedly?");
console.log("- How has the quality of my code evolved?");
}
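The extractSections helper used here (and again in the bug analysis below) isn't defined in this guide. A minimal sketch that splits content on "## " headings and keeps the sections whose heading line matches the pattern — an assumption about how your logs are structured:

```typescript
// Walk the content line by line, collecting each "## " section whose
// heading matches the pattern, through to the next "## " heading.
function extractSections(content: string, headingPattern: RegExp): string[] {
  const sections: string[] = [];
  const lines = content.split("\n");
  let current: string[] | null = null;
  for (const line of lines) {
    if (line.startsWith("## ")) {
      // A new heading closes any section we were collecting
      if (current) sections.push(current.join("\n").trim());
      current = headingPattern.test(line) ? [line] : null;
    } else if (current) {
      current.push(line);
    }
  }
  if (current) sections.push(current.join("\n").trim());
  return sections;
}
```

Because the pattern is tested against the heading line only, a case-insensitive regex like /bug|error|issue|fix/i matches headings such as "## Bug: login timeout" without pulling in unrelated sections.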
Use Case 3: Research Compilation
Compile research notes on a topic:
async function compileResearch(topic: string) {
const sources: { type: string; name: string; content: string }[] = [];
// Agent memory mentioning topic
const memoryFiles = await fs.readdir("./memory");
for (const file of memoryFiles) {
const content = await fs.readFile(`./memory/${file}`, "utf-8");
if (content.toLowerCase().includes(topic.toLowerCase())) {
sources.push({ type: "memory", name: file, content });
}
}
// Research notes (fs/promises has no exists(); probe with fs.access,
// which rejects if the path is missing)
const researchDir = `./research/${topic}`;
const researchDirExists = await fs.access(researchDir).then(() => true, () => false);
if (researchDirExists) {
const files = await fs.readdir(researchDir);
for (const file of files) {
const content = await fs.readFile(path.join(researchDir, file), "utf-8");
sources.push({ type: "research", name: file, content });
}
}
// "export" is a reserved word, so use another name
const output = sources.map(s =>
`# [${s.type}] ${s.name}\n\n${s.content}\n\n---\n\n`
).join("");
await fs.writeFile(`./exports/research-${topic}.md`, output);
console.log(`Compiled ${sources.length} sources on ${topic}`);
console.log("Ask NotebookLM:");
console.log("- Summarize the key findings");
console.log("- What are the main schools of thought?");
console.log("- What questions remain unanswered?");
}
Use Case 4: Bug Pattern Analysis
Identify recurring bugs and their fixes:
async function analyzeBugPatterns() {
const bugs: string[] = [];
const memoryFiles = await fs.readdir("./memory");
for (const file of memoryFiles) {
const content = await fs.readFile(`./memory/${file}`, "utf-8");
// Extract sections whose headings mention bugs, errors, or fixes
const bugSections = extractSections(content, /bug|error|issue|fix/i);
bugs.push(...bugSections);
}
await fs.writeFile("./exports/bugs-analysis.md", bugs.join("\n\n---\n\n"));
console.log("Upload to NotebookLM and ask:");
console.log("- What types of bugs occur most frequently?");
console.log("- Are there patterns in how I fix bugs?");
console.log("- What preventive measures could I take?");
}
Best Practices
1. Structure Your Memory for Analysis
Use consistent formatting in your daily logs:
# 2026-02-15
## Projects Worked On
- Project A: Implemented feature X
- Project B: Fixed bug Y
## Learnings
- Learned about Z technology
- Discovered pattern in Q
## Challenges
- Struggled with performance issue in R
## Tomorrow
- Continue work on S
Consistent structure makes NotebookLM analysis more effective.
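You can also enforce that structure mechanically. A small check like this (the section names match the template above) flags logs that have drifted from the format before they reach an export:

```typescript
// The headings the daily-log template above expects.
const EXPECTED_SECTIONS = [
  "## Projects Worked On",
  "## Learnings",
  "## Challenges",
  "## Tomorrow",
];

// Returns the headings missing from a log; [] means the log conforms.
function missingSections(logContent: string): string[] {
  return EXPECTED_SECTIONS.filter(heading => !logContent.includes(heading));
}
```

Run it over ./memory before exporting and fix (or skip) nonconforming days.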
2. Export Regularly
Don't wait for huge exports. Weekly or monthly batches are easier to analyze:
// Run via cron every Sunday; getLastWeek() returns a
// { start, end } range covering the previous seven days
async function weeklyExport() {
const lastWeek = getLastWeek();
await exportMemoryForNotebookLM(lastWeek);
console.log("Weekly export ready for NotebookLM");
}
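The getLastWeek() helper used above might look like this — a plain rolling seven-day window; swap in calendar-week logic if you prefer Monday-to-Sunday boundaries:

```typescript
// Rolling seven-day window ending now. Accepts an optional "now" so
// the range is deterministic in tests and backfills.
function getLastWeek(now: Date = new Date()): { start: Date; end: Date } {
  const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
  return { start: new Date(now.getTime() - WEEK_MS), end: now };
}
```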
3. Ask Specific Questions
Vague questions get vague answers. Be specific:
// Vague
"Tell me about my work"
// Specific
"List all the TypeScript bugs I encountered and their root causes"
4. Combine Multiple Source Types
Don't just upload memory logs. Include:
- Code files
- Documentation
- External research
- Meeting notes
NotebookLM shines when synthesizing across diverse sources.
5. Iterate on Queries
Start broad, then drill down:
// Broad
"What did I work on this month?"
// Drill-down
"Why did the caching approach in Project A keep failing?"
Limitations and Workarounds
No API (Yet)
NotebookLM doesn't have an API, so every upload is a manual step. Workarounds: keep exports small and infrequent (weekly or monthly batches), and script everything up to the upload so the manual step is a single drag-and-drop.
File Size Limits
NotebookLM limits file sizes and the number of sources per notebook. Solutions: split large exports into smaller date ranges, or break one combined file into several chunked sources.
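One way to stay under per-source limits is to split the combined export on its --- separators into size-bounded chunks. A sketch (the 500 KB default is an assumption; check NotebookLM's current limits):

```typescript
// Split a combined export into chunks under a byte budget, breaking on
// the "---" separators the export script emits, so sections stay whole.
function chunkExport(combined: string, maxBytes = 500_000): string[] {
  const sections = combined.split("\n---\n");
  const chunks: string[] = [];
  let current = "";
  for (const section of sections) {
    const candidate = current ? `${current}\n---\n${section}` : section;
    if (Buffer.byteLength(candidate, "utf-8") > maxBytes && current) {
      // Adding this section would bust the budget: seal the chunk
      chunks.push(current);
      current = section;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Write each chunk to its own file and upload them as separate sources in the same notebook; NotebookLM can still answer questions across all of them.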
Context Window
Even with multiple sources, NotebookLM has context limits. For very large codebases or long histories, narrow each export to a single project, topic, or time window rather than uploading everything at once.
Alternative Tools
If NotebookLM doesn't fit your workflow:
- Claude Projects: Upload files, chat with context
- ChatGPT file uploads: Similar to NotebookLM
- Obsidian + plugins: Local-first knowledge graph
- Notion AI: If you already use Notion
- Mem.ai: AI-powered note-taking
Automation Ideas
Cron Job for Weekly Exports
# Add to crontab
0 9 * * 0 cd /path/to/clawd && node scripts/weekly-export.js
Automatic Upload (if/when API exists)
// Future: when a NotebookLM API is available (the notebookLM client
// and saveInsights below are hypothetical)
async function autoUpload() {
// The export function writes to disk, so read the file back
await exportMemoryForNotebookLM(getLastWeek());
const exported = await fs.readFile("./exports/notebooklm-export.md", "utf-8");
await notebookLM.createNotebook({
name: `Agent Memory ${new Date().toISOString()}`,
sources: [exported]
});
const insights = await notebookLM.query({
question: "Summarize key insights from this week"
});
await saveInsights(insights);
}
Integration with Agent Reflection
Feed NotebookLM insights back into agent context:
async function reflectiveLoop() {
// Export memory (getLastMonth() returns a { start, end } range)
await exportMemoryForNotebookLM(getLastMonth());
// Manual: Upload to NotebookLM, ask questions, save the insights
// to ./exports/notebooklm-insights.md
// Agent reads insights and updates MEMORY.md (the agent object and
// its methods are whatever your OpenClaw setup provides)
const insights = await fs.readFile("./exports/notebooklm-insights.md", "utf-8");
await agent.processInsights(insights);
await agent.updateLongTermMemory();
}
Wrapping Up
Combining OpenClaw agent memory with NotebookLM creates a powerful reflection loop. Agents generate data through their work, NotebookLM synthesizes it into insights, and agents use those insights to improve.
This is meta-cognition for AI: agents thinking about their own thinking.
Start simple. Export a week of memory, upload to NotebookLM, ask basic questions. See what patterns emerge. Then build more sophisticated exports and queries.
The goal is not perfection. It's continuous improvement through systematic reflection.
Your agent's memory is a gold mine. NotebookLM is the tool that helps you extract value from it.