Vercel Edge — what it is and how it works
The edge runtime is one of the main features of Vercel, the company that develops and maintains Next.js. However, its influence goes far beyond Vercel's own frameworks and utilities: the edge runtime works in Svelte (whose creator recently joined Vercel), in Nuxt, and in more than 30 other frontend frameworks. This article focuses on the edge runtime: what it is, how Vercel uses it, what features it adds to Next.js, what changes to expect, and what solutions I built to extend these features.
Vercel Edge Network
Simply put, the Edge Network is a content delivery network (CDN): distributed infrastructure with many points of presence around the world. The user thus interacts not with a single server (which may sit in the company's office on the other side of the world) but with the nearest network point.
At the same time, these points are not full copies of the application but separate pieces of functionality that run between the client and the server. In other words, they are mini-servers with their own features (described later).
This setup lets users reach not your distant server right away but a nearby point. At that point, A/B-test decisions are made, authorization checks are performed, requests are cached, errors are returned, and much more. After that, if necessary, the request travels on to the server for the required information. Otherwise, the user receives an error or, say, a redirect in the shortest possible time.
Of course, this concept itself is not Vercel's invention. CloudFlare, Google Cloud CDN, and many other solutions can do this as well. However, Vercel, with its influence on frameworks, has taken it to a new level, deploying not just an intermediate router at the CDN level but mini-applications capable even of rendering pages at the point nearest to the user. Most importantly, this can be done simply by adding familiar JS files to the project.
Edge runtime in next.js
In Next.js, perhaps the main use of this environment is the middleware file. Any segment (an API route or a page) can also be executed in the edge runtime. But before describing them, a few words about the Next.js server.
Next.js is a full-stack framework: it contains both the client application and the server. When you run Next.js (`next start`), it is this server that starts, and it is responsible for serving pages, handling the API, caching, rewrites, and so on.
It all works in the following order:

1. `headers` from `next.config.js`;
2. `redirects` from `next.config.js`;
3. Middleware;
4. `beforeFiles` rewrites from `next.config.js`;
5. Files and static segments (`public/`, `_next/static/`, `pages/`, `app/`, etc.);
6. `afterFiles` rewrites from `next.config.js`;
7. Dynamic segments (`/blog/[slug]`);
8. `fallback` rewrites from `next.config.js`.
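To make the phases above concrete, here is a minimal sketch of how redirects and the three rewrite phases are declared in `next.config.js` (written as a TS module for illustration; all routes below are hypothetical examples, not from the original article):

```typescript
// A minimal next.config.js sketch. All routes are hypothetical examples.
const nextConfig = {
  async redirects() {
    // Step 2: checked before middleware and any filesystem routes
    return [
      { source: '/old-blog/:slug', destination: '/blog/:slug', permanent: true },
    ];
  },
  async rewrites() {
    return {
      // Step 4: checked after headers/redirects/middleware, before files and static segments
      beforeFiles: [{ source: '/home', destination: '/' }],
      // Step 6: checked after files and static segments
      afterFiles: [{ source: '/docs/:path*', destination: '/documentation/:path*' }],
      // Step 8: checked last, when dynamic segments also fail to match
      fallback: [{ source: '/:path*', destination: 'https://old.example.com/:path*' }],
    };
  },
};

export default nextConfig;
```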
Once it is determined that the current request targets a segment (and not, for example, a redirect), its processing begins: either a statically built segment is returned, a cached result is read, or the segment is executed and its result returned.
On Vercel, this entire cycle can likely run in the edge runtime. Points 3, 5, and 7 are particularly interesting here.
The middleware in its basic implementation looks like this:
```typescript
import { NextResponse, type NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  return NextResponse.redirect(new URL('/home', request.url));
}
```
In it, for example, you can:
- Make requests (e.g., to get data from third-party services);
- Perform a rewrite or redirect (e.g., to run an A/B test or check authorization);
- Return a response body (e.g., to display a basic stub in certain situations);
- Read and/or modify headers and cookies (e.g., save or read access information).
You can read more about the areas of application in the next.js middleware documentation.
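Since the edge runtime is built on Web-standard APIs, this kind of middleware logic can be sketched with plain `Request`/`Response` objects. The paths and cookie name below are hypothetical, and a real project would use `NextRequest`/`NextResponse` from `next/server` instead:

```typescript
// A sketch of typical middleware checks using only Web-standard APIs.
// Hypothetical example: redirect unauthenticated users away from /account.
function handleEdgeRequest(request: Request): Response {
  const url = new URL(request.url);
  const cookies = request.headers.get('cookie') ?? '';

  // Authorization check: no session cookie -> redirect to /login
  if (url.pathname.startsWith('/account') && !cookies.includes('session=')) {
    return new Response(null, {
      status: 307,
      headers: { Location: new URL('/login', url).toString() },
    });
  }

  // Otherwise pass through, marking the response with a custom header
  return new Response('ok', { headers: { 'x-edge-checked': '1' } });
}
```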
The same can be done in segments (i.e., API routes and pages). To make a segment run in the edge runtime, add the following export to the segment file:

```typescript
export const runtime = 'edge';
```
Thus, the segment will be executed in the edge runtime, not on the server itself.
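As an illustration, a minimal App Router API route running in the edge runtime might look like this (the file path and response shape are hypothetical). Note that the handler touches only Web-standard `Request`/`Response`:

```typescript
// app/api/hello/route.ts — a hypothetical edge API route
export const runtime = 'edge';

export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') ?? 'world';

  // Only Web APIs are used here, so this handler can run in the edge runtime
  return new Response(JSON.stringify({ hello: name }), {
    headers: { 'content-type': 'application/json' },
  });
}
```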
However, an important caveat: everything described above is not a full-fledged edge runtime by itself. It is distributed across the Edge Network only when the service is deployed on Vercel.
Also, beyond all these capabilities, the edge runtime has several limitations. For example, even though the edge runtime is part of the server when the application runs outside Vercel, it cannot interact with that server. This is because it was designed specifically for the Vercel Edge Network.
Edge runtime concept in Vercel
As mentioned, edge runtime instances can be called mini-applications. They are "mini" because they run directly on V8 (the JavaScript engine that powers, for example, Google Chrome, Electron, and Node.js itself) without the Node.js APIs. This is their key detail, on which both the features of the previous section and the restrictions depend.
Namely, in the edge runtime, you cannot:

- Perform actions with the file system;
- Interact with the server environment;
- Call `require`: only ES modules can be used, which imposes additional restrictions on third-party solutions.
The full list of supported APIs and restrictions can be found on the next.js documentation page.
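To illustrate what these restrictions mean in practice, here is a small sketch. The commented-out lines are examples of code that would fail inside the edge runtime, while Web-standard APIs remain available (the specific values are hypothetical):

```typescript
// Not available in the edge runtime (shown commented out):
// import { readFile } from 'fs/promises';  // no file system access
// const data = require('./data.json');     // no CommonJS require, ESM only

// Web-standard APIs are available: URL, fetch, TextEncoder, Web Crypto, etc.
const apiUrl = new URL('/api/data', 'https://example.com');
const payload = new TextEncoder().encode(JSON.stringify({ ok: true }));
```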
Thus, Vercel Edge Network can be responsible for, for example:
- Routing;
- Page rendering;
- Executing API routes;
- Caching.
The edge runtime acts as the first stage of segment processing and is most effective in situations where all processing can take place inside the edge container, for example redirects or returning cached data. The overall processing flow in Vercel usually works as follows.
After the build, Vercel sends the new edge runtime code (which now compiles to machine code) to the servers, and they immediately start working with the new code.
Vercel itself uses the edge runtime for all applications and all requests. That is, as soon as you connect a domain, Vercel makes it available on these points around the world. When a user next visits the page, their provider asks the network where the domain is located, receives the available locations in response, chooses the nearest point, and goes to it.
These edge points always have caching logic, and if the project contains rewrites, redirects, middleware, or segments in the edge runtime, the build sends all of this to the edge servers.
The edge runtime then processes the request: it checks rewrites and redirects, passes the request through middleware, checks whether the result is cached, and, if the segment runs in the edge runtime, executes it there; otherwise, it sends the request to the origin server. (Vercel does not document this order or the internals of the edge runtime anywhere; this is simply how I see it.)
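Since Vercel does not document this flow, purely as an illustration of the order just described, here is a toy model (all names and data structures are hypothetical, not Vercel's actual implementation):

```typescript
// A toy model of the edge processing order described above.
// Purely illustrative; not how Vercel is actually implemented.
type EdgeApp = {
  redirects: Map<string, string>;          // path -> destination
  cache: Map<string, string>;              // path -> cached response
  edgeSegments: Map<string, () => string>; // path -> segment handler
};

function handleAtEdge(app: EdgeApp, path: string): string {
  // 1. Redirects/rewrites can be answered entirely at the edge
  const redirect = app.redirects.get(path);
  if (redirect) return `redirect:${redirect}`;

  // 2. (Middleware would run here)

  // 3. Cached responses are served straight from the edge point
  const cached = app.cache.get(path);
  if (cached) return `cached:${cached}`;

  // 4. Segments marked for the edge runtime execute right here
  const segment = app.edgeSegments.get(path);
  if (segment) return `edge:${segment()}`;

  // 5. Everything else falls through to the origin server
  return 'origin';
}
```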
In summary, the edge runtime pays off when all processing can be done within the edge environment (the request is Client -> Edge). If you need to reach the main server anyway (for example, for a database connected within the project, or to read files for some reason), it is not advantageous: the request becomes Client -> Edge -> Server. And since you still have to reach the server, it is better to do all the processing there; the server has the full cache, the database nearby, the whole system at hand, and, overall, more capabilities.
Expected changes in the edge runtime
Despite the edge runtime being one of the key features of Vercel as a hosting platform, the team is actively revisiting it: not only its application but its necessity as a whole. Recently, Vercel VP Lee Robinson shared in a tweet that Vercel [as a company] has stopped using the edge runtime in all of its services and returned to the Node.js runtime. The team also expects the experimental Partial Prerendering (PPR) to be so effective that edge runtime generation will lose its value.
And it was PPR, along with advanced caching, that pushed the edge runtime into the background. Previously, the entire page was rendered either on the server or in the edge runtime, and the edge runtime won precisely because of its proximity to the user. Now pages are mostly pre-generated; on request, only individual dynamic parts are rendered and cached. The cache, in turn, is unique to each edge point, whereas on the server it is shared by all users.
And, of course, the server has access to the environment, database, and file system. Therefore, if the page needs this data, the nodejs runtime wins significantly (gathering everything in one environment is faster than making requests to the server from the edge environment each time).
Vercel is likely to introduce new priorities in its pricing, restructuring them around partial pre-render. Perhaps with these changes, tweets with bills of tens of thousands of dollars will become fewer (but this is not certain).
In addition, the Next.js team recently shared a tweet about reworking middleware. Very likely, like segments, it will be given a choice of execution environment. Again, considering that outside Vercel middleware works as part of the server, this is a very logical decision. It is also possible that these changes will bring a separate middleware for API routes.
Expanding the Edge runtime
I am the author of nimpl.tech, a family of packages for Next.js. I have already covered getters with information about the current page in "Next.js App Router. Experience of use. Path to the future or a wrong turn", a translation library in "More libraries to the god of libraries or how I rethought i18n [next.js v14]", and caching packages in "Caching in next.js. Gift or curse". But this family also includes packages built specifically for the edge runtime: router and middleware-chain.
@nimpl/router
As mentioned, the edge runtime works best if it can handle the entire request in a self-contained mini-application. In all other cases, this is an unnecessary step since the request will still go to the server but via a longer path.
One of these tasks is routing, which also includes rewrites, redirects, `basePath`, and i18n from `next.config.js`.
Their main problem is that they are set only once, in the configuration file, for the entire application; on top of that, i18n is full of bugs. This is why the App Router has no i18n option, and the documentation recommends using middleware for this case. But such a separation means that redirects from the config and i18n routing from middleware are processed separately. This can cause double redirects (first the redirect from the config, then the one from the middleware) and various unexpected artifacts.
To avoid this, all this functionality should be gathered in one place. And, as the documentation recommends for i18n, this place should be middleware.
```typescript
import { createMiddleware } from '@nimpl/router';

export const middleware = createMiddleware({
  redirects: [
    {
      source: '/old',
      destination: '/',
      permanent: false,
    },
  ],
  rewrites: [
    {
      source: '/home',
      destination: '/',
      locale: false,
    },
  ],
  basePath: '/doc',
  i18n: {
    defaultLocale: 'en',
    locales: ['en', 'de'],
  },
});
```
Familiar Next.js redirects, rewrites, basePath, and i18n settings but at the edge runtime level. Documentation for the @nimpl/router package.
@nimpl/middleware-chain
Working with ready-made solutions and creating my own, I ran into the problem of combining them in one middleware again and again: the situation where you need to connect two or more ready-made middleware to one project.
The problem is that middleware in Next.js is not like middleware in Express or Koa: it immediately returns the final result. Therefore, each package simply creates the final middleware. For example, in next-intl it looks like this:
```typescript
import createMiddleware from 'next-intl/middleware';

export default createMiddleware({
  locales: ['en', 'de'],
  defaultLocale: 'en',
});
```
I am not the first to face this problem, and ready-made solutions can be found on npm. They all work through their own APIs, either styled after Express or following their own vision. They are useful, well implemented, and convenient, but only when you can update every middleware you use.
However, there are many situations where you need to add existing solutions as-is. In the issue trackers of those solutions you can often find requests like "add support for chaining with package A" or "make it work with package B". It is for such situations that @nimpl/middleware-chain was created.
This package allows you to create a chain of native next.js middleware without any modifications (that is, you can add any ready-made middleware to the chain).
```typescript
import { default as authMiddleware } from "next-auth/middleware";
import createMiddleware from "next-intl/middleware";
import { chain } from "@nimpl/middleware-chain";

const intlMiddleware = createMiddleware({
  locales: ["en", "dk"],
  defaultLocale: "en",
});

export default chain([
  intlMiddleware,
  authMiddleware,
]);
```
The chain processes each middleware sequentially. During processing, all modifications are collected until the chain is completed or until any element in the chain returns FinalNextResponse.
```typescript
export default chain([
  intlMiddleware,
  (req) => {
    if (req.summary.type === "redirect") return FinalNextResponse.next();
  },
  authMiddleware,
]);
```
This is not Koa or Express; it is a package for Next.js, in its own unique style and API format. Documentation for the @nimpl/middleware-chain package.
And to end, let me leave a few links here.
My Medium with other useful articles | nimpl.tech with package documentation | GitHub with a star button | X with rare posts | LinkedIn just to have it
The dot map used as the background for images at the beginning of the article is made by mocrovector from freepik.