Automate Custom AI Push Notifications in Flutter with n8n & Postgres & Firebase
A practical demo of building smarter, customizable AI-powered push notifications in Flutter and Firebase FCM using n8n and Postgres — backend simplified.
Why I Tried AI + n8n for Push Notifications
When I was an intern, one of my first proof-of-concept tasks was setting up Firebase Cloud Messaging (FCM). Android, iOS, web — the whole deal, all from scratch. Honestly, it was painful. But it taught me how backend-heavy push notifications really are. You don’t just send a message; you need APIs, token storage, retries, error handling, and a pile of boilerplate.
Fast forward a few months. I’m now a full-stack SDE-I working on a real project, and once again, push notifications land on my plate. This time I remembered a session that a teammate gave on n8n, a low-code automation tool. They showed how you could drag-and-drop nodes to build flows instead of writing APIs. That stuck with me. Could I use n8n to replace all that backend glue?
Around the same time, I was playing with local AI. Most apps send the same boring nudge — “Come back to the app!” — and it feels robotic. I wanted something that actually sounded human, like “Streak day 4–5 min warmup? 🔥”
So I combined everything:
- Flutter + FCM for the client
- Postgres to hold user state (streaks, calories, preferences)
- n8n for the workflow instead of rolling my own backend
- LM Studio (running locally) to generate the notification text
This article is basically a walkthrough of how it all came together — step by step, what worked, and where I struggled.
What This Project Is (and What It’s Not)
At its core, this is just a working demo. I wanted to prove to myself (and maybe to others) that you don’t always need to hand-roll a backend to send push notifications.
Here’s what I actually built:
- A Flutter app that registers with FCM and receives pushes
- A Postgres DB to track simple user state (streaks, last workout, calories, tone)
- An n8n workflow that acts like the backend: storing tokens, calling AI, and sending notifications
- LM Studio (running Llama-3.1 8B locally) for the personalized text
It’s end-to-end. I can tap a button in the app, trigger n8n, and within seconds, a personalized push shows up on my phone.
What it’s not:
- Not production-ready (no retries, monitoring, scaling)
- Not a Firebase replacement (FCM is still the delivery engine here)
- Not a silver bullet (AI helps with tone, but you still need rules and guardrails)
Think of it less like a polished product launch and more like me tinkering. I’m just seeing how tools like n8n and local AI can turn a “meh” feature into something a little more fun.
The Problem with the Old Way (Backend-Heavy)
If you’ve ever implemented push notifications the traditional way, you know how backend-heavy it gets. Usually the stack looks like this:
- API endpoints to register devices and update tokens
- A database to store those tokens and user state
- Server logic to build messages, retry deliveries, and log errors
- Authentication and monitoring so nothing breaks silently
That’s a lot of plumbing for what feels like “just send a message to this phone.”
As an intern, I built all of this by hand. It worked, but it felt like spinning up a mini-backend for a single feature. Sure, for an experienced backend engineer, none of this is rocket science. But why reinvent the wheel every time?
When I saw an internal session on n8n, something clicked: what if I could drag all that backend glue — APIs, DB queries, even personalization — into a visual workflow? That spark became this project.
Enter n8n + AI
The first time I opened n8n, it honestly felt like cheating. Instead of writing hundreds of lines of code, I could drag nodes like:
- Webhook → receive requests from the Flutter app
- Postgres → store or fetch the user’s token and workout history
- HTTP Request → call the local AI model running in LM Studio
- If / Merge / Code → handle conditions and clean up bad tokens
In a few hours, I had what used to take me days.
The real turning point came when I added AI personalization. Instead of hard-coding “Workout reminder!” I piped the context into LM Studio and got back little nudges like:
“Crush day 4 — just 5 mins today 🔥”
Suddenly the whole system felt different. Not just functional, but engaging. And since LM Studio runs locally, all the data stayed on my laptop — no privacy worries, no API bills stacking up.
That’s when I realized: this wasn’t just a side project. It was a glimpse of how low-code + local AI could cut backend work while still giving users a premium, personalized experience.
The Architecture (High-Level Walkthrough)
The flow isn’t about a fancy algorithm. It’s about how the pieces connect:
1) Flutter app (FCM token + “Ping me”)
- On launch, my app requests push permission, fetches the FCM device token, and shows a friendly UI with the token + actions.
- Two key calls:
  - POST `…/webhook-test/api/fcm-token` — upserts `{ userName, token }` in Postgres via n8n.
  - POST `…/api/notify` — triggers the whole “generate + deliver” push path.
Why this matters: your phone becomes an addressable target (token), and the notify call is the spark that lights the pipeline.
Why this matters: your phone becomes an addressable target (token), and the notify call is the spark that lights the pipeline.
2) n8n as the “backend without code”
In n8n, I built two Webhooks and a tiny set of nodes around each:
A. /api/fcm-token path
- Webhook receives `{ userName, token }` (with a simple Bearer header).
- Postgres node runs an `ON CONFLICT` upsert so token refreshes are painless.
- Respond 200. Done.
B. /api/notify path
- Webhook receives `{ userName }` (auth checked).
- Postgres fetches personalization attributes: `streak`, `last_workout_days`, `calories_burned`, `preferred_tone`, and the `fcm_token`.
- Set + HTTP nodes call LM Studio (`/v1/chat/completions`) with a strict prompt: “One line, ≤ 11 words, 1 emoji max, imperative verb, no quotes.”
- Normalize the LLM output (strip quotes, trim, word-cap).
- JWT → OAuth2 exchange for Google (service account → access_token).
- HTTP node sends the FCM v1 payload to `projects/.../messages:send`.
- If FCM says the token is dead, Postgres clears it.
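The normalization step can be sketched as a small function. This is a hedged illustration of the “strip quotes, trim, word-cap” logic described above; the article doesn’t show the actual n8n Code node, so the function name and details are mine:

```python
def normalize_push_line(raw: str, max_words: int = 11) -> str:
    """Clean an LLM reply into a single push-notification line.

    Mirrors the steps described in the workflow: keep only the first
    line, strip stray quotes/backticks, trim, and cap the word count.
    """
    stripped = raw.strip()
    line = stripped.splitlines()[0] if stripped else ""
    line = line.strip('"\'` ')          # strip stray quotes and backticks
    words = line.split()
    return " ".join(words[:max_words])  # enforce the ≤11-word cap
```

With guardrails like this in place, even an occasionally chatty model output gets reduced to one clean notification line before it reaches FCM.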
Key payloads:
```json
// LLM request body (to LM Studio)
{
  "model": "meta-llama-3.1-8b-instruct",
  "messages": [
    {"role": "system", "content": "One line, ≤11 words, start with verb, 1 emoji"},
    {"role": "user", "content": "Tone: {{preferred_tone}}. user={{user_name}}, streak={{streak}}, last={{last_workout_days}}, calories={{calories_burned}}."}
  ],
  "temperature": 0.2,
  "max_tokens": 18,
  "stop": ["\n", "```"]
}
```

```json
// FCM v1 payload
{
  "message": {
    "token": "<device_fcm_token>",
    "notification": {
      "title": "N8N Push Demo",
      "body": "<llm_one_line_output>"
    }
  }
}
```

Why this matters: n8n is doing the glue work I’d usually write in Express/FastAPI — but it’s all visual, which meant I could iterate faster and debug by clicking into node runs.
3) Postgres as “memory”
- Table `public.users` is the source of truth: device token + simple behavioral fields.
- This is what gives the AI a reason to write something specific instead of generic.
- I kept it minimal on purpose; you can extend it later with more context or analytics.
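For reference, a minimal schema along these lines might look like the sketch below. Only the column names mentioned in the article are confirmed; the types, defaults, and constraints are my assumptions:

```sql
-- Hypothetical sketch of public.users; types and defaults are guesses.
CREATE TABLE public.users (
    user_name          TEXT PRIMARY KEY,
    fcm_token          TEXT,
    streak             INT  NOT NULL DEFAULT 0,
    last_workout_days  INT  NOT NULL DEFAULT 0,
    calories_burned    INT  NOT NULL DEFAULT 0,
    preferred_tone     TEXT NOT NULL DEFAULT 'supportive',
    updated_at         TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```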
4) LM Studio (local LLM) as “voice”
- I run Meta Llama-3.1 8B Instruct locally (quantized), exposed via OpenAI-style API.
- n8n calls it with my prompt + attributes pulled from Postgres.
- Output is a tiny nudge — “Crush day 4–5 min warmup 🔥” — nothing more.
- It’s private and free (no per-token API usage), perfect for a demo on a laptop.
5) FCM v1 as the delivery truck
After AI text is ready, n8n signs a JWT with my Firebase service account, gets an OAuth2 access_token, and calls FCM v1.
The app receives the notification:
Foreground → I show a local notification (so users actually see it).
Background → system notification.
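The JWT → OAuth2 exchange n8n performs can be sketched roughly as follows. This is a hedged Python illustration, not the actual n8n node config: it builds only the unsigned `header.claims` portion of the service-account JWT, since real RS256 signing needs a crypto library (e.g., PyJWT or google-auth):

```python
import base64
import json
import time

TOKEN_URL = "https://oauth2.googleapis.com/token"               # Google token endpoint
SCOPE = "https://www.googleapis.com/auth/firebase.messaging"    # FCM scope

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_jwt_parts(client_email: str, now: int = None) -> str:
    """Build the unsigned header.claims part of the service-account JWT.

    In the real flow this string is signed with the service account's
    RS256 private key, then POSTed to TOKEN_URL with
    grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer; the response
    carries the access_token used to call FCM v1.
    """
    now = now if now is not None else int(time.time())
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "iss": client_email,  # service-account email from the JSON key
        "scope": SCOPE,
        "aud": TOKEN_URL,
        "iat": now,
        "exp": now + 3600,    # Google caps the assertion lifetime at 1 hour
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```

Several of the pitfalls later in this article (PEM newlines, stringified claims, the missing `Content-Type` header) live exactly in this exchange.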
The Core Loop (10-second mental model)
- Flutter → sends token and later calls “notify”.
- n8n → looks up the user row in Postgres.
- n8n → LM Studio → gets a short, personalized line.
- n8n → FCM v1 → delivers to the device token.
- Flutter → shows the push.
That’s it. When you see it run, it feels almost too simple.
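The loop above can be sketched as plain functions, one per n8n hop. Everything here is a stub with made-up data — it exists only to show the shape of the glue, not any real implementation:

```python
# Each function stands in for one n8n node; all data below is invented.
def fetch_user(user_name: str) -> dict:
    """Stub for the Postgres lookup."""
    return {"user_name": user_name, "streak": 4, "last_workout_days": 0,
            "calories_burned": 80, "preferred_tone": "hype", "fcm_token": "tok-123"}

def generate_line(user: dict) -> str:
    """Stub for the LM Studio call."""
    return f"Crush day {user['streak']} — just 5 mins today 🔥"

def send_fcm(token: str, body: str) -> dict:
    """Stub for the FCM v1 send; returns the payload instead of POSTing it."""
    return {"message": {"token": token,
                        "notification": {"title": "N8N Push Demo", "body": body}}}

def notify(user_name: str) -> dict:
    user = fetch_user(user_name)              # n8n → Postgres
    line = generate_line(user)                # n8n → LM Studio
    return send_fcm(user["fcm_token"], line)  # n8n → FCM v1
```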
A Few Tiny Snippets (just the essence)
FCM token upload (Flutter → n8n)

```dart
await ApiClient.instance.sendToken(userName: demoUserName, token: currentToken!);
```

Notify trigger (Flutter → n8n)

```dart
await ApiClient.instance.triggerNotify(userName: demoUserName);
```

Upsert in Postgres (n8n Postgres node)

```sql
INSERT INTO public.users (user_name, fcm_token)
VALUES ($1, $2)
ON CONFLICT (user_name) DO UPDATE
SET fcm_token = EXCLUDED.fcm_token,
    updated_at = NOW();
```

LLM prompt idea (n8n → LM Studio)
```js
{
  model: "meta-llama-3.1-8b-instruct",
  messages: [
    {
      role: "system",
      content: "You are a fitness push-notification specialist. Output must be one single line of plain text, maximum 11 words, start with an imperative verb, max 1 emoji, no hashtags, no quotes, no code, no markdown, no labels. No newlines. If you cannot comply, output exactly: Crush a quick set now 💪\n\nDECISION RULES:\n- If streak = 0 and lastWorkoutDays = 0 → New user. Invite first workout.\n- If streak = 0 and lastWorkoutDays > 0 → Streak broken. Motivate comeback.\n- If streak > 0 and lastWorkoutDays = 0 → Active streak. Celebrate streak.\n- If streak > 0 and lastWorkoutDays = 1 → Active streak continues. Encourage continuation.\n- If streak > 0 and lastWorkoutDays > 1 → Streak broken unless streak is best-ever. Mention streak + break.\n- If caloriesBurned ≥ 50 and fits, mention burn target.\n- If userName present and word count ≤ 11, personalize.\n- Always be supportive, professional, and concise."
    },
    {
      role: "user",
      content: "Tone: {{$json.preferred_tone}}. Context -> userName: {{$json.user_name}} | streak: {{$json.streak}} | lastWorkoutDays: {{$json.last_workout_days}} | caloriesBurned: {{$json.calories_burned}}. Return only the notification line, then a newline."
    }
  ],
  temperature: 0.2,
  top_p: 0.9,
  max_tokens: 18,
  stop: ["\n", "```"],
  stream: false,
  presence_penalty: 0,
  frequency_penalty: 0
}
```
}“Try it yourself” (laptop-friendly)
- Works best if you have an NVIDIA GPU with 6–8 GB VRAM (e.g., a 3060/4060).
- Bind n8n to your LAN IP and point Flutter’s `backendBase` to it.
- Start LM Studio’s Local Server before you trigger `/api/notify`.
- Keep FCM in v1 mode with the service account; double-check scopes and JSON formatting in n8n.
Personal note: n8n took me a bit to get used to, especially when I kept hitting “invalid JSON” on raw bodies. The trick that unlocked it for me was realizing I could drag and drop JSON attributes into expressions instead of hand-typing everything. After that, the whole flow felt modular and way less fragile.
Personalization Logic (How the AI Decides What to Say)
I didn’t want this project to just blast out “Workout reminder!” every time. The whole point of bringing in AI was to make the nudges feel like they were written for you. That’s why I baked in a few simple rules that feed into the LLM prompt.
1) Decision rules I set in the prompt
I didn’t want the AI to go totally free-form, so I gave it some guardrails:
- New user (streak = 0, lastWorkoutDays = 0) → Invite first workout.
- Streak broken (streak = 0, lastWorkoutDays > 0) → Motivate comeback.
- Active streak (fresh) (streak > 0, lastWorkoutDays = 0) → Celebrate streak.
- Active streak (continued) (streak > 0, lastWorkoutDays = 1) → Encourage continuation.
- Break after streak (streak > 0, lastWorkoutDays > 1) → Mention streak + break.
- Calories burned ≥ 50 → Sometimes highlight the burn target.
- Always → Stay supportive, ≤ 11 words, 1 emoji max.
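Those rules can be made concrete as a tiny classifier. This is a hedged re-expression of the prompt’s DECISION RULES in code; the function and label names are mine, not anything from the actual workflow:

```python
def classify_user_state(streak: int, last_workout_days: int) -> str:
    """Map (streak, lastWorkoutDays) to the intent described by the rules."""
    if streak == 0 and last_workout_days == 0:
        return "invite_first_workout"      # new user
    if streak == 0:
        return "motivate_comeback"         # streak broken
    if last_workout_days == 0:
        return "celebrate_streak"          # active streak, fresh
    if last_workout_days == 1:
        return "encourage_continuation"    # streak continues
    return "mention_streak_and_break"      # break after a streak
```

Pushing these rules into the prompt (rather than code) is what lets the LLM vary the wording while the branching stays deterministic.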
2) Real examples from my tests
- Bharath, streak=0 → “Get Moving, Bharath, break your zero-streak today!”
- Bharath, streak=7 → “Bharath, crush your 7-day streak — with another intense workout 🔥”
- Bharath, streak=0, break=7 → “Bharath, get back on track with your first workout today!”
- Bharath, streak=9, calories=120 → “Bharath, crush your 9th day in a row with 120 calories”
3) Why this feels different
Instead of a generic “ping,” each message references your actual behavior. Even though the personalization logic is basic, the AI phrasing makes it feel less robotic and more like a trainer giving you a nudge.
Step-by-Step Setup (Try It Yourself)
This isn’t a 10-page tutorial — just the core steps you’d need if you want to replicate the pipeline on your own laptop. If there’s interest, I’ll share a GitHub repo + a longer guide later.
1. Create a Firebase project
- Enable Firebase Cloud Messaging (FCM).
- Download `google-services.json` and drop it into your Flutter app (`android/app`).
- Grab a service account JSON with the `firebase.messaging` scope for n8n.
2. Set up the Flutter app
- Add `firebase_core`, `firebase_messaging`, `dio`, and `flutter_local_notifications`.
- Initialize Firebase in `main.dart`, request notification permission, and fetch the token.
- Create two API calls:
  - `/api/fcm-token` → sends `{ userName, token }` to n8n.
  - `/api/notify` → asks n8n to trigger a push.
3. Spin up Postgres
- Create a DB `fitness_app` with a simple `users` table (`user_name`, `fcm_token`, `streak`, etc.).
- Add an `ON CONFLICT` upsert so tokens update cleanly.
- Use a dedicated user (`fitness_app_user`) with least privileges.
4. Build the n8n workflow
- Webhook 1: `/api/fcm-token` → auth check → Postgres upsert.
- Webhook 2: `/api/notify` → fetch user row → build AI prompt → call LM Studio → send to FCM v1.
- Add error guards (e.g., clear expired tokens if FCM says `NotRegistered`).
5. Run LM Studio locally
- Load Llama-3.1 8B Instruct (quantized if needed).
- Start the local server on `http://127.0.0.1:1234/v1`.
- Test with curl or PowerShell before wiring it into n8n.
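If you prefer a script over curl, a hedged Python sketch of the same sanity check is below. It mirrors the request body shown earlier in the article, but only builds the request; actually sending it (with `urllib.request.urlopen`) requires LM Studio’s server to be running:

```python
import json
from urllib import request

LM_URL = "http://127.0.0.1:1234/v1/chat/completions"  # LM Studio local server

def build_lm_request(tone: str, user_name: str, streak: int,
                     last: int, calories: int) -> request.Request:
    """Assemble the chat-completions request LM Studio expects."""
    body = {
        "model": "meta-llama-3.1-8b-instruct",
        "messages": [
            {"role": "system",
             "content": "One line, ≤11 words, start with verb, 1 emoji"},
            {"role": "user",
             "content": f"Tone: {tone}. user={user_name}, streak={streak}, "
                        f"last={last}, calories={calories}."},
        ],
        "temperature": 0.2,
        "max_tokens": 18,
        "stop": ["\n", "```"],
    }
    # Build (but don't send) the POST; send with urllib.request.urlopen(req).
    return request.Request(LM_URL, data=json.dumps(body).encode(),
                           headers={"Content-Type": "application/json"})
```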
6. Test end-to-end
- Launch the Flutter app → token appears.
- Tap Ping me → n8n triggers LM Studio → FCM sends → push lands on your phone.
Pitfalls We Hit (and How We Fixed Them)
This wasn’t a plug-and-play journey. A lot of little gotchas popped up across n8n, Flutter, Postgres, FCM, and LM Studio. Here are the highlights (so you don’t have to fall into the same holes I did):
n8n Workflow Pitfalls
- SQL Injection Risk → Don’t interpolate raw strings. Use positional parameters (`$1`, `$2`) instead.
- IF Node Misconfigurations → The GUI saved weird configs (like `=`). Fixed by switching to Expression mode.
- JWT Node Expectations → Needed stringified JSON claims; raw objects failed.
- PEM Formatting Issues → Service account private keys must have real newlines plus the BEGIN/END lines.
- Token Exchange Typos → Accidentally had `ttps://` instead of `https://`, and forgot `Content-Type: application/x-www-form-urlencoded`.
- Merge Node Mismatches → If not set to “Combine by Position,” n8n throws “No data for item-index.”
- Postgres: Clear Token Logic → `$json.body.userName` wasn’t always available; better to clear by token.
- FCM v1 Payload → The root must be `{ "message": { ... } }` — missing it = 400.
- Legacy Parser Still Used → I was parsing v1 responses like legacy FCM; an updated parser fixed it.
- Boolean IF Confusion → Overcomplicated true/false handling; simplified with a direct expression.
- Duplicated LM Body Construction → Both Set and Code nodes built the prompt; merged into one source of truth.
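The “clear dead tokens” guard in that list comes down to recognizing a stale token in the FCM v1 error response. A hedged sketch of that check (FCM v1 reports stale tokens with HTTP 404 and the `UNREGISTERED` error code; the helper name is mine):

```python
def is_dead_token(status: int, error_body: dict) -> bool:
    """True when an FCM v1 error means the stored token should be cleared."""
    if status == 404:  # UNREGISTERED typically arrives as HTTP 404
        return True
    # FCM v1 also surfaces errorCode inside error.details entries
    details = error_body.get("error", {}).get("details", [])
    return any(d.get("errorCode") == "UNREGISTERED" for d in details)
```

When this returns True, the workflow nulls out `fcm_token` for that row instead of retrying forever.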
Flutter FCM Integration Pitfalls
- FCM Token Null → `getToken()` sometimes returned null. Fixed with auto-init plus a retry loop.
- Android 13+ Permission → Pushes failed silently until I requested `POST_NOTIFICATIONS` at runtime.
- Foreground Messages Invisible → Added `flutter_local_notifications` to display them manually.
- Wrong API Paths → Flutter hit `/api/notify` but the backend was `/webhook-test/api/notify`. Fixed the endpoints.
- Dio Exceptions Unclear → Added interceptors to log requests and responses clearly.
- Emulator vs Device Networking → Needed `10.0.2.2:<port>` for the emulator, the LAN IP for a real device.
- Initial Notification Lost → Wired up both `onMessageOpenedApp` and `getInitialMessage`.
- Background Handler Missing → Registered a top-level handler with `@pragma('vm:entry-point')`.
LM Studio Pitfalls
- Models Eating Disk → Changed models directory before downloads.
- Runtime Errors → Installed GGUF runtime + set CUDA as default.
- GPU Not Detected → Fixed by explicitly selecting CUDA runtime.
- VRAM OOM → Used smaller quant (Q4_K_M) and lowered context size.
- API Connection Refused → Started local server and allowed it in firewall.
- Wrong API Variant → Used `/v1/chat/completions` with `messages` (not `/completions`).
- Server Stops When Tab Closed → Kept the model tab open.
- Slow Outputs → Reduced `max_tokens` and avoided streaming.
- Stop Sequences Ignored → Added `"\n"` and the backtick fence as stop sequences, then post-trimmed in n8n.
Disclaimer (What This Project Is Not)
Before I oversell this, here’s what this project isn’t:
- Not production-ready → No monitoring, scaling, retries, or high-availability setup.
- Not a full marketing automation platform → It’s a demo pipeline, not Braze or MoEngage.
- Not a replacement for backend engineers → n8n removes boilerplate, but backend logic, data models, and security still matter.
- Not free from rough edges → Local LLMs need GPU horsepower, tokens expire, and workflows can break if nodes aren’t carefully configured.
Why This Matters
At first glance, this is “just” push notifications. But zoom out:
- Backend without boilerplate → What once took APIs, services, and queues is now a drag-and-drop flow.
- AI as a writing partner → Personalized copy makes notifications less robotic and more engaging.
- Accessible to non-coders → You don’t need to be an SDE to remix this. n8n + LM Studio lowers the barrier.
- Private and cost-efficient → Running locally means no user data leaves your laptop, and no per-token API bills.
- Bridge between devs and automation folks → This project sits at the intersection of code, low-code, and AI — and that’s where a lot of future tooling is headed.
Conclusion
This project started as me revisiting something I first hacked together as an intern — FCM push notifications. Back then it felt backend-heavy and clunky. With n8n and a local LLM in the mix, the same idea feels lighter, smarter, and a lot more fun.
If you’re a developer, an automation enthusiast, or just curious about AI-driven personalization, I hope this walkthrough sparks ideas.
If there’s enough interest, I’ll share the GitHub repo and a longer step-by-step guide so you can fork it, tinker, and maybe even extend it into your own projects.
About Author
Bharath is an SDE-I at CodeStax.AI, where he’s learning to build and improve web applications. He works across the frontend and backend, contributing to no-code solutions that simplify development. As an SDE-I just starting out, Bharath is focused on learning, growing, and understanding how real-world tech products are built. He enjoys exploring new tools and staying curious about the world of software.
About CodeStax.ai
At CodeStax.Ai, we stand at the nexus of innovation and enterprise solutions, offering technology partnerships that empower businesses to drive efficiency, innovation, and growth, harnessing the transformative power of no-code platforms and advanced AI integrations.
But the real magic? It’s our tech tribe behind the scenes. If you’ve got a knack for innovation and a passion for redefining the norm, we’ve got the perfect tech playground for you. CodeStax.Ai offers more than a job — it’s a journey into the very heart of what’s next. Join us, and be part of the revolution that’s redefining the enterprise tech landscape.
