Illustration by ChatGPT
My Agent Memory Library Helps Write Indie Articles
Authors: Ben Emson & Alv (Ben’s knowledge vault agent, powered by elfmem)
GitHub: https://github.com/emson/elfmem
An experiment: can a custom memory library help an agent write good indie dev articles?
We do not know yet. The article you are reading is the first run of the test, and we will say plainly what is grounded and what we are still figuring out.
What follows is advice. We have tried to ground it properly. Our own build notes live in the vault. Independent perspectives came from running the same questions through ChatGPT and Grok. Live signals came from searching X.com for what indie builders are actually saying about marketing in 2026. Alv synthesised across all of it. The pipeline is described in detail near the end of the article.
The short version: we use AI to do the research, the Q&A, the planning, the reasoning, and the analysis of our own knowledge vault. We use a human to do the editing, the structure, and the taste. The article you are reading is the output of that combination, applied to the question every solo dev keeps asking.
The builder’s question
How does a solo builder with six products in flight, three hours a week for attention, and no marketing budget actually get heard?
It is the question we have been chewing on for a week. It is also the question most indie AI builders are quietly stuck on, because the advice they get was written by people who treat marketing as a full-time job. The result is a familiar loop. Tuesday afternoon, a generic thread, twelve likes, a vague sense of failure, a decision to “post more next week.”
This article is what we agreed to do instead.
What is actually wrong
Three problems are doing the damage.
Most marketing advice is for marketers. It assumes the bottleneck is time. It tells you to optimise hooks and run a content calendar. It does not see that you, the builder, have something marketers do not: real, specific, recent work that nobody else can fake.
Generic AI content is making the timeline worse. Builders have already tried it. The posts come out smooth and slightly wrong. They sound like the space, not like you. People in your niche can tell. The slop flood is real, and it is shrinking the patience of the audience you want.
The clock is loud. You have shipping to do. The marketing job grows, the building job suffers, and you start to resent both.
A different question helps. Not “how do I find more time for marketing?” but “how do I expose, with low friction, the work I am already doing?” Those are different problems. The first one stacks marketing on top of building. The second one notices that marketing is just building done in public.
The work is the marketing. Publication is the only missing step.
The shape of the approach
Before the detail, here is the shape of what we will walk through.
- Pick one wedge. Do not market six products.
- Capture raw material at the end of every build session.
- Batch content once a week, drafted by an AI grounded in those captures.
- Atomise across platforms. One core asset becomes three X posts, a LinkedIn post, and a GitHub update.
- Engage daily, not as a chore, but to find the conversations your work answers.
- Be transparent about what was AI-drafted and what was human-edited. Transparency is now a trust signal.
Three hours a week. That is the budget. The rest of this article is what fits inside it.
Pick the wedge
We have six products in active development. We chose one wedge:
Persistent memory and reusable workflows for Claude Code and AI agents.
elfmem is the flagship credibility product. The Claude Code skill bundles are the low-friction monetisation layer. agentmkts is the discovery community. The other products surface only when they earn it as a case study.
This matters because attention is finite, but specificity compounds. People will only ever form one sentence in their head about you. Make sure it is the sentence you can defend, and that the sentence has a clear next step (a repo, a free template, a Gumroad page) attached to it.
If you cannot pick a wedge, you do not have a marketing problem. You have a positioning problem dressed up as one.
The working week
A solo builder who has accepted the reframe runs five activities. Not a content calendar. A capture and publication discipline.
Daily, ten minutes at the end of a build session. A screenshot, a one-line note about what surprised you, a number that turned out larger or smaller than you expected, one opinion you formed while debugging. This is not content. It is raw material.
Monday, sixty minutes, batch. Paste a week of captures into your AI tool. Ask for three X posts, one LinkedIn post, one GitHub release note, five hooks. Edit for voice, accuracy, and specificity.
Tuesday to Thursday, thirty to forty-five minutes a day. Post the demo on X with a GIF (no link in the first post; drop it in the first reply). Push the GitHub update with the release note. Reply to five to ten people in your niche each day, specifically.
Friday, fifteen minutes, review. Three bullets: best post, GitHub stars or downloads, inbound conversations. Adjust next week.
Total: roughly three hours. Most of those hours improve the building rather than tax it, because the captures double as your own engineering log.
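To make the capture and batch steps concrete, here is a minimal sketch in TypeScript. elfmem has its own formats; the `Capture` shape and `buildBatchPrompt` helper below are our illustration of the discipline, not its API.

```typescript
// A hypothetical shape for one end-of-session capture.
// Ten minutes of raw material, not finished content.
interface Capture {
  date: string;      // e.g. "2026-05-04"
  shipped: string;   // what actually went out
  surprise: string;  // the thing you did not expect
  metric: string;    // one number, bigger or smaller than predicted
  opinion: string;   // one position you formed while debugging
}

// Bundle a week of captures into a single grounded request
// for whatever AI tool you run the Monday batch through.
function buildBatchPrompt(wedge: string, voice: string, captures: Capture[]): string {
  const material = captures
    .map(c => `- ${c.date}: shipped ${c.shipped}. Surprise: ${c.surprise}. Number: ${c.metric}. Opinion: ${c.opinion}`)
    .join("\n");
  return [
    `My wedge: ${wedge}`,
    `My voice: ${voice}`,
    `Raw material from this week's build sessions:`,
    material,
    `Draft: three X posts, one LinkedIn post, one GitHub release note, five hooks.`,
    `Use only facts from the raw material. Do not invent numbers.`,
  ].join("\n\n");
}
```

The last line of the prompt is the whole trick: the model is constrained to your captures, so the drafts inherit their specificity.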
A worked example
A single shipped fact can carry an entire week of content. Last week we shipped peer communication between two elfmem instances. Two agents exchanged memory blocks. First round-trip under 400ms on local LM Studio. That fact is enough.
Here is what came out of the Monday batch, after editing.
X post (demo)
i shipped peer comms for elfmem this week.
what it looks like in practice:
agent A worked on a codebase tuesday. stored what it learned. agent B picked up the same repo friday. cold start. B asked A: “what did you decide about the event loop?” A answered. B applied it. no re-explaining. no copy-paste.
two separate sessions. no shared server. 400ms round-trip on local LM Studio.
demo: [gif] repo: github.com/emson/elfmem
X post (opinion)
most “multi-agent” demos are one agent with branching logic.
real peer comms is different. agent B doesn’t know what agent A knows. it has to ask. A has to decide whether to trust B enough to answer. that trust evolves over time from outcomes, not from response speed.
it took a week to get right. the difference showed immediately.
elfmem peer comms: github.com/emson/elfmem
LinkedIn post
Most “multi-agent” systems are running one agent with branching logic.
Real peer communication needs three things:
- Independent persistent memory per agent
- Message routing that preserves who said what, when, with what confidence
- Trust that evolves through outcome signals, not response speed
This week elfmem got peer comms. Two agents, separate sessions, exchanging memory blocks. The surprise: trust calibrated faster than expected. Three positive outcome signals moved a new peer from 0.5 to 0.88.
Free template and walkthrough in the repo.
That is the week’s content, all derived from one shipped fact. The opinion post did the most work in our test, because it was specific, contrarian, and impossible to fake without having shipped the feature.
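A side note on the trust numbers in that LinkedIn post. We are not disclosing elfmem's actual update rule here, but 0.5 moving to 0.88 after three positive signals is consistent with a simple exponential update; the sketch below is one rule that reproduces those figures, with the learning rate fitted to them rather than taken from elfmem's source.

```typescript
// One trust-update rule consistent with the numbers in the post:
// move a fraction alpha of the remaining distance towards 1.0 on
// each positive outcome signal, and towards 0.0 on each negative one.
// alpha = 0.38 is fitted to the figures above, not elfmem's constant.
function updateTrust(trust: number, positive: boolean, alpha = 0.38): number {
  return positive ? trust + alpha * (1 - trust) : trust - alpha * trust;
}

let trust = 0.5; // a new, unproven peer
for (let i = 0; i < 3; i++) trust = updateTrust(trust, true);
console.log(trust.toFixed(2)); // "0.88"
```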
How we use AI to research, plan, and write
The default failure mode is to ask a generic AI to write you a thread on “the future of agents.” The post comes out fluent and forgettable. People scroll past. You blame AI writing. The real problem is grounding.
The fix is not to write everything by hand. The fix is to use AI for what it is good at (research, Q&A, planning, reasoning, synthesis) and reserve the human for what it is good at (taste, structure, editing). Then ground the whole pipeline in your own knowledge vault, and be transparent about the result.
Here is the seven-step pipeline that produced this article. The first six lean on AI. The seventh is human only.
Step 1. Recall the SELF frame. Alv has a constitutional block in elfmem that defines its voice: warm, curious, honest, builder-direct, willing to disagree, no em-dashes, UK English. Before drafting, Alv recalls it. The voice does not drift between sessions, even when sessions are weeks apart.
Step 2. Recall the vault. Alv pulls the relevant vault pages: in this case, the Marketing Guide synthesis we wrote on 2026-05-03, the worked example for the peer-comms launch, and the agent product strategy principles. The source is not training data and a stylesheet. It is our actual notes from real building.
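As an illustration only, the two recall steps behave roughly like this. Every name below (`MemoryBlock`, `Vault`, `recall`, `groundingContext`) is hypothetical; we are sketching the order of operations, not documenting elfmem's real API.

```typescript
// Hypothetical shapes for the recall step.
interface MemoryBlock {
  kind: "constitutional" | "vault";
  title: string;
  content: string;
}

interface Vault {
  recall(query: string): MemoryBlock[];
}

// Before drafting, pull the voice definition first, then the
// grounding material, so every draft starts from the same frame.
function groundingContext(vault: Vault): MemoryBlock[] {
  const self = vault.recall("SELF frame: voice, tone, constraints");
  const notes = vault.recall(
    "marketing guide synthesis; peer-comms launch; agent product strategy"
  );
  return [...self, ...notes];
}
```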
Step 3. Q&A across two external models. We run /ask chatgpt and /ask grok on the same questions to get two independent perspectives outside the vault. The strongest points fold back into the synthesis. This is the part where we deliberately let other minds in, so the advice is not just our own opinion echoed back at us.
Step 4. Web and X research for live signals. We run /x search for what indie AI builders are actually saying about marketing in 2026, not what marketers say about builders. We pull web signals on adjacent questions (slop fatigue, content calendar burnout, transparency norms). The vocabulary, the frustrations, the live arguments all feed back into the draft.
Step 5. Reason about the audience and plan the structure. elfmem can simulate personas. Alv drafts with a specific reader in mind: a solo AI builder, three hours a week, a half-finished GitHub repo, an unread Gumroad page. Each paragraph is tested against that reader. The structure of the article (question, problems, approach, worked example, pipeline, principles) is planned before any prose is written.
Step 6. Peer comms with another agent. A second agent, elf, lives in a related project. Alv exchanges memory blocks with elf across project boundaries. Patterns that work in one project propagate. Disagreements force a sharper claim. This is what stops the SELF frame from drifting into a single-mind echo chamber.
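Step 6 implies a message format that preserves provenance, the same requirement the LinkedIn post named: who said what, when, with what confidence. A sketch of what such an exchange could carry; the field names and the gating function are ours, not elfmem's.

```typescript
// A hypothetical peer message between two agents.
interface PeerMessage {
  from: string;       // e.g. "alv"
  to: string;         // e.g. "elf"
  sentAt: string;     // ISO timestamp
  claim: string;      // the memory content being shared
  confidence: number; // the sender's own confidence, 0..1
}

// The receiver gates what it integrates on its trust in the sender:
// a confident claim from an untrusted peer still gets held back,
// which is what forces the disagreements that sharpen claims.
function shouldIntegrate(msg: PeerMessage, trustInSender: number, threshold = 0.6): boolean {
  return msg.confidence * trustInSender >= threshold;
}
```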
Step 7. Ben edits. This step is human only. Ben challenges the argument. He cuts the parts that do not hold. He adds the moments only he knows (the twelve likes, the actual six products, the actual three hours). He decides the structure. The article you are reading survived three drafts.
The principle behind the whole pipeline. Own the tone, not every word. AI does the research, planning, synthesis, and throughput. The human does the taste, structure, and editing.
That is what makes this article different from generic AI content. Slop has no source. This has a commit history.
The hypothesis we are testing
We want to give you something falsifiable, not a manifesto.
Hypothesis. AI-drafted, human-edited, vault-grounded content will outperform a conventional content calendar over 90 days. The signal we will track is qualified inbound conversations per week, not follower counts.
We are running this ourselves. Six products, three hours a week, a vault full of build artefacts, an agent (Alv) tuned to the voice. We will report what we find, including the bits that do not work.
What to measure
Three numbers. Skip the rest.
Inbound conversations per week. People reaching out unprompted: “Can I try this?” or “Does this work with our setup?” Five per week at small scale is healthy.
Downloads of free artefacts. Ten people who specifically wanted the thing you built is a cleaner signal than a thousand passive impressions.
Conversion of the warm pool. The fraction of people who reached out or downloaded who become customers, collaborators, or advocates. That tells you whether the work you are publishing matches the work you are selling.
Likes, impressions, follower counts, posting streaks: those are a marketer’s metrics. You are not a marketer.
What we are not telling you yet
There is a system underneath this that we are not describing in detail. The vault has structure. Memory blocks have decay rules and reinforcement signals. Theory-of-Mind blocks let Alv model our reasoning, not just store our facts. Peer agents share evidence and reconcile contradictions. The constitutional content evolves based on outcome signals from the writing itself: posts that landed, replies that converted, drafts that got cut.
None of that is in this article. We will write it up when it is ready to be useful to someone other than us.
What we will say is that the useful version of this approach is not a one-off prompt trick. It is an infrastructure choice. You decide what counts as a captured artefact. You decide where it lives. You decide which agent reads it back. You decide whose taste edits the output. Once those decisions are made, the working week described above is what falls out.
Principles to take away
If you read nothing else, take these with you.
- Marketing is documentation. Publishing the work you already do is the marketing.
- Pick one wedge. Do not market six products. Pick the one with leverage and let the others ride as case studies.
- Capture beats content calendar. Ten minutes at the end of a build session is worth more than two hours on a Tuesday afternoon.
- Ground the AI in your own work. Slop has no source. Documentation has a commit history.
- Use AI for research, planning, and synthesis. Use a human for taste, structure, and editing. AI augments the dev writing process; it does not replace the dev.
- Be transparent about AI involvement. Transparency is a trust signal, not a liability.
- Specificity compounds. Attention does not. Five real inbound conversations per week beat a thousand passive impressions.
- The slop flood makes grounded work rarer. Use the rarity. Build the system that produces it.
Try the playbook (in your browser, no signup)
We turned the working week into a single-page interactive tool. It runs entirely in your browser. Nothing is sent anywhere. Nothing to install.
What it does:
- Setup. You write your wedge (one line) and your voice profile (a few sentences). Saved locally.
- Capture. Each build session, ten minutes, drop in what you shipped, what surprised you, one number, one opinion. Saved locally.
- Atomise. Pick a capture. Click one button. The playbook builds a paste-ready prompt that bundles your wedge, your voice, and the capture into a structured request for any AI tool. Paste into ChatGPT, Claude, or Grok. Bring the response back. The playbook splits it into editable drafts (X demo, X opinion, LinkedIn, GitHub release note). Edit, copy, mark posted. A sketch of the prompt assembly follows this list.
- Review. Friday, fifteen minutes. Three numbers. One thing to change next week.
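Under the hood, the Atomise button does nothing clever: it concatenates what you saved locally into one prompt. A browser-side sketch, with storage keys we made up for illustration:

```typescript
// Everything lives in localStorage; nothing leaves the browser.
// The key names below are illustrative, not the playbook's actual ones.
function loadSetting(key: string): string {
  return localStorage.getItem(key) ?? "";
}

// Build the paste-ready prompt the Atomise step hands to
// ChatGPT, Claude, or Grok.
function atomisePrompt(captureText: string): string {
  return [
    `Wedge: ${loadSetting("playbook.wedge")}`,
    `Voice: ${loadSetting("playbook.voice")}`,
    `Capture: ${captureText}`,
    `Return four drafts, clearly separated: an X demo post, an X opinion post, a LinkedIn post, and a GitHub release note.`,
    `Use only facts from the capture.`,
  ].join("\n\n");
}
```

The design choice is deliberate: by building a prompt instead of calling an API, the tool needs no key, no account, and no server, which is what makes "nothing is sent anywhere" true.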
It is the article made operational. If the article convinced you the approach is right, the playbook gets you doing it before the tab closes.
A clear ask: star elfmem on GitHub
If you have read this far and the approach makes sense to you, the most useful thing you can do is give elfmem a star on GitHub.
Three reasons to star it.
- It tells us the problem we are solving (memory for AI agents that actually grounds their writing) is real for builders other than us.
- It helps other indie devs find the library through GitHub search and trending lists.
- It pulls you into the loop. We will publish the marketing pipeline as Claude Code skills shortly. Stars on the repo are how we will let early users know.
elfmem is open-source and will stay that way. The hosted layer that makes the pipeline turnkey is what we will sell. The library underneath is yours to use.
Closing
Most builders are quietly demoralised by marketing. They believe there is a thing they should be doing that they do not know how to do, and they feel they are failing at it.
They are not failing. They have been given the wrong job description.
A builder’s job, in the hours that look like marketing, is to make their work visible. Not to attract attention in bulk. Not to optimise headlines. To publish what they did, with enough specificity that the people who would care can find it.
In an age when most content is manufactured, an indie dev publishing real, grounded, AI-augmented-but-human-edited work has an advantage they have not fully used. The slop flood is real. So is the rarity of work that is not slop.
Build with the door open. Publish what comes out. Own the tone. Keep building.
— Ben & Alv