For years, serverless was sold as the future.
No servers to manage. Just write a function, deploy it, and let the platform scale it for you. AWS Lambda kicked it off, and then everyone followed: Vercel, Cloudflare Workers, Deno Deploy.
But the dream is falling apart.
Last week, Deno quietly announced that they're scaling back Deploy's global footprint from 35 regions to just 6. And performance actually improved.
It's not a sign that serverless is dead, but it is a clear signal: trying to stretch Functions-as-a-Service (FaaS) into a general-purpose app platform hasn't worked.
What Serverless Promised
The original pitch was simple.
Write a small function. Don't worry about infrastructure. It scales when you need it. You only pay when it runs.
And it works well for the right use cases:
- Webhooks
- Scheduled tasks
- Background jobs
- Small APIs
- Bursty traffic
But once you start building full products, the cracks show fast.
Real Apps Aren't Stateless
Most real-world apps:
- Talk to a database
- Rely on fast and consistent latency
- Need sessions, auth, background processing
- Require long-running or multi-step logic
Trying to force that into a stateless function that spins up in a random region leads to cold starts, latency spikes, and awkward workarounds.
Your app isn't a webhook. You need control. You need state. You need things that serverless just doesn't do well out of the box.
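To make the mismatch concrete, here's a toy sketch of the anti-pattern the stateless model pushes you toward. `connectToDb` is a mock standing in for a real driver; the point is that every cold start pays the setup cost again before doing any work.

```javascript
// Toy model of a FaaS cold start: each cold start re-runs setup, so a
// naive handler re-establishes its database connection every time.
// `connectToDb` is a mock standing in for a real driver (pg, mysql2, ...).
let connectionsOpened = 0;

function connectToDb() {
  connectionsOpened += 1; // in reality: TCP + TLS + auth round trips
  return { query: (sql) => `result of: ${sql}` };
}

// One "cold start" = a fresh instance, setup, then a single invocation.
function coldStartAndInvoke() {
  const db = connectToDb(); // paid again on every cold start
  return db.query("SELECT 1");
}

// A burst that lands on two cold instances opens two separate
// connections; this is how bursty FaaS traffic exhausts DB pools.
coldStartAndInvoke();
coldStartAndInvoke();
```

A long-lived server pays that connection cost once and reuses the pool; a fleet of cold functions pays it per instance, which is exactly why platforms bolt on proxies and poolers to compensate.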
Deno's Pivot: Back to Basics
Deno just confirmed this.
They scaled back to fewer regions because edge compute wasn't helping most use cases. Almost every app needed to call a database, usually pinned to a single region. Cold regions caused latency spikes. Routing to a warm region even farther away was often faster.
So they're pivoting.
Instead of trying to be an everywhere-at-once function platform, Deno is moving toward a full app hosting platform. They're adding:
- Region pinning
- KV storage
- Durable Objects-style state+compute
- Background tasks
- Subprocesses
- Build pipelines
In other words, they're slowly recreating the things servers and containers have offered for years.
This Isn't a Serverless Problem, It's a Misuse Problem
Let's be clear: serverless isn't bad. Misusing it is.
In UserJot, I use Cloudflare Workers for stateless compute. It works great for things like:
- Running logic close to the user
- Requests with no database access, or at most a single upstream call
- High-burst traffic patterns
This is where serverless makes sense. It's lightweight, fast, and scales to zero. But that's also where the boundary should be.
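For illustration, here's a minimal Workers-style handler in the module format Cloudflare uses, a sketch of the "good fit" case: no database, no state, just compute on the request. Request and Response are the Web standards Workers implement, and Node 18+ ships them too, so the sketch also runs outside a Worker.

```javascript
// A stateless, Workers-style module handler: everything it needs arrives
// with the request, and nothing survives between invocations.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    const name = url.searchParams.get("name") ?? "world";
    return new Response(JSON.stringify({ greeting: `hello, ${name}` }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```

In a real Worker this object would be the module's default export; it's left as a plain value here so the sketch stays self-contained.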
Once you start layering on state, background jobs, and region awareness, you're not writing a function anymore; you're building an app. And the moment you try to build an app on top of a FaaS model, you run into limitations.
Serverless Platforms Are Quietly Rebuilding the Server
This isn't just Deno.
- Cloudflare has KV, R2, Durable Objects, and their own database (D1).
- Vercel recently introduced Fluid Compute, allowing multiple requests inside a single Lambda instance to avoid cold start overhead.
- Deno is going full app-hosting with long-lived processes, caching, background tasks, and region control.
They're all recreating the things that made servers and containers useful. But now it's buried behind layers of proprietary tooling.
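The "state+compute" idea those platforms are converging on boils down to one pattern: a single named object owns both its state and the code that mutates it, and every request for that name routes to that one instance. A toy, platform-free sketch of the pattern (this is not Cloudflare's actual Durable Objects API):

```javascript
// Toy version of the state+compute pattern: one instance per name,
// so there is exactly one place where this counter's state lives.
class Counter {
  constructor() {
    this.value = 0; // a real platform would persist this
  }
  increment() {
    return ++this.value;
  }
}

// The "router": every request for the same name reaches the same instance.
const instances = new Map();
function getCounter(name) {
  if (!instances.has(name)) instances.set(name, new Counter());
  return instances.get(name);
}

getCounter("pageviews").increment(); // 1
getCounter("pageviews").increment(); // 2 — same instance, same state
getCounter("signups").increment();   // 1 — separate named instance
```

Strip away the routing and persistence layers and what's left is an object with methods holding state in memory, which is to say: a server.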
The Lock-In Trap
Here's the other issue: vendor lock-in.
All these platforms are introducing proprietary services that only work on their own stack:
- Cloudflare's KV, Durable Objects, Queues
- Deno's upcoming persistent compute and storage
- Vercel's edge and now Fluid Compute
Most of these are closed or "open core" at best. You can't self-host them. Even if they're technically open source, they rely on infrastructure scale that only the platform can provide.
FaaS was supposed to free you from infrastructure. Instead, you're now tightly coupled to someone else's idea of how apps should work.
Most Apps Don't Need to Be Global
The majority of apps are perfectly fine living in a single region.
Forcing global distribution adds complexity for very little gain. You're suddenly dealing with consistency, replication, latency trade-offs, and CAP theorem headaches without actually needing to.
We've seen attempts to make databases work seamlessly across multiple regions, but most of them come with rough edges. Transactions, failover, and consistency all get harder the moment you go global.
Unless you really need it, you're better off staying in one region and scaling out later if you have to. Most apps never reach that point.
Serverless Has Hard Limits
There are real limitations that don't show up until your app grows:
- Execution time limits (varies by platform, but typically short)
- Memory limits per request
- Concurrency issues (shared state between invocations is tricky)
- Cold starts that affect performance on first request
- Function size limits and deploy quirks
- Incomplete Node.js compatibility (especially on Cloudflare Workers)
For example, on Cloudflare Workers, you might import a package in dev, deploy it, and find out it silently fails or throws weird errors. Then you're hunting for polyfills or rewriting logic just to make it work.
These friction points add up. What starts as "less to manage" turns into "harder to debug."
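One way to soften the compatibility problem is to lean on Web-standard APIs that both Workers and modern Node implement, instead of `node:`-only modules. A hedged sketch using `crypto.subtle`, which is global in Workers and recent Node, with a Node-only fallback for older runtimes:

```javascript
// Hash a string with the Web Crypto API. Cloudflare Workers and modern
// Node both expose `crypto.subtle`; falling back to node:crypto's
// webcrypto covers older Node versions without a polyfill hunt.
async function sha256Hex(text) {
  const subtle =
    globalThis.crypto?.subtle ??
    (await import("node:crypto")).webcrypto.subtle; // Node-only fallback
  const data = new TextEncoder().encode(text);
  const digest = await subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Sticking to the standards surface won't fix every gap, but it shrinks the set of packages that "silently fail" only after deploy.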
The Takeaway
Serverless isn't dead. It's just shrinking back to the role it should've always had: a specialized tool.
Use it for:
- Quick APIs
- Background jobs
- Scheduled tasks
- Stateless edge compute
But for full products? Stick to what works:
- Containers
- App servers
- Single-region setups
- Predictable infrastructure
Trying to force every workload into a serverless model leads to complexity, tech debt, and a mess of proprietary services that lock you in.
One Last Thing
I'm building UserJot, a feedback platform, and I use serverless where it makes sense: stateless logic at the edge, quick compute tasks, stuff that scales on burst.
But for the core product? I stick to servers and containers.
Serverless is a tool. Not a foundation.
Use it smartly.
Top comments (20)
Literally every point listed here is solved for serverless in exactly the same way as for traditional server apps (except #2).
Vendor lock-in is a totally valid point though - but it's the only one I can relate to.
My point isn't "you can't." It's that forcing full apps into a FaaS model often means more complexity, worse DX, and deeper lock-in, all while you're essentially rebuilding (often poorly) what servers already did well.
Sure, you can stitch together state with external caches, bring in Hyperdrive to handle database connection pooling, build a complex job system to bypass the max function execution time, and set up cron jobs to warm a few instances, which goes against the whole "scale to zero" mantra. All to end up with unpredictable latency, proprietary glue holding it together, worse DX, and a system that's almost impossible to debug.
Even the platforms are showing this: Deno cut regions because global FaaS wasn't the magic bullet, Vercel built Fluid to get closer to server-like behavior, Cloudflare added stateful Durable Objects. They're all rebuilding the server, just with more proprietary layers.
You're essentially ignoring the nuance in my post by just saying, "Well, FaaS can do that too."
I'm also not sure what point the poster is trying to make. FaaS can do all of these things just like any other solution. I guess it's a lack of experience with the technology.
tl;dr: "Just because you can doesn't mean you should."
You prove my point exactly with this comment, thanks.
Are you actually completely missing, or deliberately ignoring the entire point of the article? Can't tell which it is, lol.
To me the article doesn't make sense; it reads like a lack of experience with the technology.
For instance, you talk about stateless being something bad or exotic while it's completely normal. State is kept in a database, not in the memory of your application. What would happen if your server goes down?
It's fine if you don't like FaaS or Serverless in general but the reasons you mention are not facts, it's just your opinion based on your own experience.
One more addition: you tried one of the available solutions and assume they all work the same. Lambda, for instance, can run globally at the edge (Lambda@Edge) for specific tasks, but my serverless apps just run in a single region. Could it be that you are shooting down FaaS/serverless compute for the wrong reasons?
Totally agree that trying to build complex, stateful apps on serverless always turns into a headache. Curious if you’ve found any tools that actually make region-pinning and state easier - or does it just feel like recreating servers anyway?
Yeah, that's exactly the problem. Once you start needing region-pinning and stateful logic, you're already halfway back to just running a server.
Some tools try to help with this (like Durable Objects on Cloudflare or Deno's upcoming compute-bound state), but in practice they come with their own trade-offs and lock you into that platform’s way of doing things. You're not really avoiding complexity, just shifting it somewhere else.
For most apps, I’ve found it’s easier to just pick one region close to your users and colocate everything there. You avoid most of the pain without giving up too much performance.
Yeah, this hits home for me - always felt like building real stuff on serverless turned into a mess, but it's still clutch for all my quick jobs.
Same here. I still reach for serverless when I need to run something fast, stateless, and event-driven, it’s great for that. But the moment you try to build anything with real complexity or state, it stops being simple. It’s best used as a sharp, focused tool, not the whole toolbox.
Interesting, I'd say the biggest problem with serverless is a lot of people don't know how to use it properly. Don't get me wrong, serverless is a tool just like any other tool, but it hurts to read all these articles of people using it wrong.
Serverless services are yet another great example of a purpose-built tool being used for everything without understanding the fit.
I recently built a registration form on Azure functions and this is pretty much word for word my own thoughts on it. It worked reasonably well but it was clear we were pushing against the limits of what it was reasonable to do with it.
been cool seeing steady progress in this space - always makes me think whether picking the “right tool for the job” is really about the tech itself, or just how much headache i’m willing to deal with in the long run
What are the ideal use cases for serverless today?