I spent more time than I’d like to admit trying to make n8n write a file to disk.
Simple task. Should have been five minutes.
It wasn’t.
## The setup
I was running n8n in Docker, building a small automation that needed to write a file to a mounted volume. Standard stuff. The kind of thing you assume just works.
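For context, the setup looked roughly like this — a minimal docker-compose sketch, where the host path, port, and image tag are illustrative rather than my exact config:

```yaml
# docker-compose.yml — minimal sketch; host path and image tag are assumptions
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - ./data:/data   # host ./data mounted at /data inside the container
```

The workflow's final node was supposed to write its result to `/data`, which maps back to the host directory.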
It didn’t.
The node kept failing with a permissions error:
`The file "/data/output.json" is not writable`
## Round one: asking the AI
My first instinct, like probably yours these days, was to ask an AI assistant. I described the problem, pasted the error, and started iterating.
The suggestions were reasonable on paper:
- check the volume mount
- adjust `chmod` on the host
- run the container as root
- change ownership with `chown`
- recreate the volume
None of them worked. Some made things worse. I went through the usual loop of trying one fix, getting a slightly different error, pasting it back, getting another suggestion.
## Round two: switching tools
At some point I assumed the issue was Docker-specific, so I switched to running n8n directly with npm.
Same problem.
Different environment, same permissions wall. That should have been a clue that the AI was leading me down the wrong path entirely, but I kept going. Sunk cost is a hell of a thing.
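In hindsight, a direct writability check would have surfaced that clue much faster: if the OS says the path is writable but the tool still refuses, the restriction lives in the application layer, not in file permissions. A minimal sketch (the paths here are illustrative, not my actual ones):

```python
import os
import tempfile


def os_level_writable(path: str) -> bool:
    """Return True if the OS would let this process create/write `path`."""
    directory = os.path.dirname(path) or "."
    return os.path.isdir(directory) and os.access(directory, os.W_OK)


# If this prints True but the tool still reports "not writable",
# the block is coming from the application, not from the filesystem.
with tempfile.TemporaryDirectory() as d:
    print(os_level_writable(os.path.join(d, "output.json")))
```

Running that from the same user the process runs as separates "Linux won't let you" from "the tool won't let you" in about ten seconds.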
## Round three: the first Google result
After maybe an hour of back and forth, I did what I should have done from the start. I googled the exact error message.
First result: an n8n community forum thread describing my exact situation. The actual fix involved a specific environment variable that n8n requires before it allows filesystem access from certain nodes.
I pasted the thread into ChatGPT. Solved in seconds.
## Why the AI got stuck
This is the part worth thinking about.
The AI wasn’t wrong about Docker permissions in general. Its suggestions were valid for a generic “container can’t write to volume” problem. But this wasn’t a generic problem. It was a tool-specific quirk in n8n, gated behind a configuration flag most people don’t know exists.
The model didn’t have that knowledge surfaced clearly. The forum did.
A few takeaways:
- AI assistants are great at general patterns
- They struggle with niche operational details unless you feed them the right context
- Community forums still hold knowledge that hasn’t been compressed into training data well
- The exact error message is often the best search query you can write
## The actual lesson
I’ve started treating AI assistants more like a fast senior engineer who hasn’t read the docs of every obscure tool. They’re brilliant at known territory. They hallucinate confidently in unknown territory.
When an AI keeps suggesting fixes that don’t work, stop iterating with the AI. Go find the source. Then bring it back.
The workflow that actually works for me now:
- Try the AI first for quick known issues
- If two or three suggestions fail, stop
- Search the exact error string
- Find a forum, GitHub issue or doc page
- Paste that back into the AI for synthesis
That last step is still useful. The AI is good at extracting the relevant fix from a long forum thread, summarizing the steps, and adapting them to your context.
## A small reflection on AI-assisted workflows
There’s a tendency right now to treat AI as the entry point for everything. For coding, debugging, configuration, architecture decisions.
It’s a great entry point. It’s not always the right one.
For well-documented mainstream problems, AI wins. For weird edge cases in specific tools, the old web still matters. Forums, GitHub issues, mailing lists, Stack Overflow threads from 2019. That stuff is still load-bearing infrastructure for the industry.
The skill isn’t choosing AI or not. It’s recognizing when to switch.
## Closing note
I lost an hour to this. Not a disaster. But a useful reminder that the fastest path to a solution sometimes goes through a Google result, not a chat window.
Next time the AI gives me three failing suggestions in a row, I’m closing the tab and opening a search bar.