Cache
Cached handlers
To cache an event handler, use the defineCachedHandler method.
It works like defineHandler but takes a second parameter for the cache options.
import { defineCachedHandler } from "nitro/cache";
export default defineCachedHandler((event) => {
  return "I am cached for an hour";
}, { maxAge: 60 * 60 });
With this example, the response will be cached for 1 hour, and a stale value will be sent to the client while the cache is being updated in the background. If you want to immediately return the updated response, set swr: false.
See the options section for more details about the available options.
Use the varies option to consider specific headers when caching and serving responses.

Automatic HTTP headers
When using defineCachedHandler, Nitro automatically manages HTTP cache headers on cached responses:
- etag: A weak ETag (W/"...") is generated from the response body hash if not already set by the handler.
- last-modified: Set to the current time when the response is first cached, if not already set.
- cache-control: Automatically set based on the swr, maxAge, and staleMaxAge options:
  - With swr: true: s-maxage=<maxAge>, stale-while-revalidate=<staleMaxAge>
  - With swr: false: max-age=<maxAge>
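As a rough illustration (not Nitro's actual code), the mapping from these options to a cache-control value can be sketched as a small helper; the name buildCacheControl and the default of 0 for an unset staleMaxAge are assumptions based on the defaults described in this page:

```typescript
// Hypothetical helper mirroring the header rules above; the real logic
// lives inside Nitro and may differ in detail.
interface CacheHeaderOptions {
  swr: boolean;
  maxAge: number;
  staleMaxAge?: number;
}

function buildCacheControl(opts: CacheHeaderOptions): string {
  if (opts.swr) {
    // s-maxage targets shared caches; stale-while-revalidate allows serving
    // a stale copy while revalidation happens in the background.
    const swrWindow = opts.staleMaxAge ?? 0; // staleMaxAge defaults to 0
    return `s-maxage=${opts.maxAge}, stale-while-revalidate=${swrWindow}`;
  }
  // Without SWR, a plain max-age directive is used.
  return `max-age=${opts.maxAge}`;
}
```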
Conditional requests (304 Not Modified)
Cached handlers automatically support conditional requests. When a client sends if-none-match or if-modified-since headers matching the cached response, Nitro returns a 304 Not Modified response without a body.
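A minimal sketch of such a conditional-request check (a hypothetical helper, not Nitro's internal logic; per RFC 9110, if-none-match takes precedence over if-modified-since):

```typescript
// Returns true when the client's cached copy is still current and a
// 304 Not Modified response (without a body) can be sent.
function isNotModified(
  requestHeaders: Record<string, string | undefined>,
  etag?: string,
  lastModified?: string
): boolean {
  const ifNoneMatch = requestHeaders["if-none-match"];
  if (ifNoneMatch && etag) {
    // if-none-match takes precedence over if-modified-since
    return ifNoneMatch === etag;
  }
  const ifModifiedSince = requestHeaders["if-modified-since"];
  if (ifModifiedSince && lastModified) {
    // Unmodified if the cached response is not newer than the client's copy
    return Date.parse(lastModified) <= Date.parse(ifModifiedSince);
  }
  return false;
}
```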
Request method filtering
Only GET and HEAD requests are cached. All other HTTP methods (POST, PUT, DELETE, etc.) automatically bypass the cache and call the handler directly.
Request deduplication
When multiple concurrent requests hit the same cache key while the cache is being resolved, only one invocation of the handler runs. All concurrent requests wait for and share the same result.
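The deduplication pattern can be sketched with a map of in-flight promises (illustrative only; Nitro's internals may differ):

```typescript
// Concurrent callers for the same key share one in-flight promise
// instead of invoking the resolver again.
const pending = new Map<string, Promise<unknown>>();

async function dedupe<T>(key: string, resolver: () => Promise<T>): Promise<T> {
  const inFlight = pending.get(key);
  if (inFlight) {
    return inFlight as Promise<T>; // join the in-flight resolution
  }
  const promise = resolver().finally(() => pending.delete(key));
  pending.set(key, promise);
  return promise;
}
```

However many concurrent callers arrive, the resolver runs once per key per resolution cycle.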
Cached functions
You can also cache a function using defineCachedFunction. This is useful for caching the result of a function that is not itself an event handler but is used inside one, and for reusing that result across multiple handlers.
For example, you might want to cache the result of an API call for one hour:
import { defineCachedFunction } from "nitro/cache";
import { defineHandler } from "nitro/h3";

export default defineHandler(async (event) => {
  const { repo } = event.context.params;
  const stars = await cachedGHStars(repo).catch(() => 0);
  return { repo, stars };
});

const cachedGHStars = defineCachedFunction(async (repo: string) => {
  const data = await fetch(`https://api.github.com/repos/${repo}`).then((res) => res.json());
  return data.stargazers_count;
}, {
  maxAge: 60 * 60,
  name: "ghStars",
  getKey: (repo: string) => repo,
});
In development, the stars will be cached inside .nitro/cache/functions/ghStars/<owner>/<repo>.json, with the value being the number of stars.
{"expires":1677851092249,"value":43991,"mtime":1677847492540,"integrity":"ZUHcsxCWEH"}
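Assuming the entry shape shown above (field names taken from the JSON example, not a public Nitro API), a staleness check boils down to comparing expires against the current time:

```typescript
// Hypothetical reader for the on-disk entry shape shown above.
interface CacheEntry<T> {
  expires: number;   // epoch ms after which the value is stale
  value: T;          // the cached return value
  mtime: number;     // epoch ms when the entry was written
  integrity: string; // hash of the function code, for dev invalidation
}

function isStale<T>(entry: CacheEntry<T>, now: number = Date.now()): boolean {
  return now > entry.expires;
}
```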
In edge workers, the instance is destroyed after each request. Nitro automatically uses event.waitUntil to keep the instance alive while the cache is updated in the background after the response is sent to the client.
To ensure that your cached functions work as expected in edge workers, you should always pass the event as the first argument to the function using defineCachedFunction.
import { defineCachedFunction } from "nitro/cache";
import { defineHandler, type H3Event } from "nitro/h3";

export default defineHandler(async (event) => {
  const { repo } = event.context.params;
  const stars = await cachedGHStars(event, repo).catch(() => 0);
  return { repo, stars };
});

const cachedGHStars = defineCachedFunction(async (event: H3Event, repo: string) => {
  const data = await fetch(`https://api.github.com/repos/${repo}`).then((res) => res.json());
  return data.stargazers_count;
}, {
  maxAge: 60 * 60,
  name: "ghStars",
  getKey: (event: H3Event, repo: string) => repo,
});
This way, the function will be able to keep the instance alive while the cache is being updated without slowing down the response to the client.
Using route rules
This feature lets you add caching to routes matched by a glob pattern directly in the main configuration file. This is especially useful for applying a global cache strategy to a part of your application.
Cache all the blog routes for 1 hour with stale-while-revalidate behavior:
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60 } },
  },
});
If we want to use a custom cache storage mount point, we can use the base option.
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  storage: {
    redis: {
      driver: "redis",
      url: "redis://localhost:6379",
    },
  },
  routeRules: {
    "/blog/**": { cache: { maxAge: 60 * 60, base: "redis" } },
  },
});
Route rules shortcuts
You can use the swr shortcut for enabling stale-while-revalidate caching on route rules. When set to true, SWR is enabled with the default maxAge. When set to a number, it is used as the maxAge value in seconds.
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  routeRules: {
    "/blog/**": { swr: true },
    "/api/**": { swr: 3600 },
  },
});
To explicitly disable caching on a route, set cache: false:
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  routeRules: {
    "/api/realtime/**": { cache: false },
  },
});
Responses cached via route rules use the cache group 'nitro/route-rules' instead of the default 'nitro/handlers'.

Cache storage
Nitro stores the data in the cache storage mount point.
- In production, it will use the memory driver by default.
- In development, it will use the filesystem driver, writing to a temporary dir (.nitro/cache).
To overwrite the production storage, set the cache mount point using the storage option:
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  storage: {
    cache: {
      driver: "redis",
      /* redis connector options */
    },
  },
});
In development, you can also overwrite the cache mount point using the devStorage option:
import { defineNitroConfig } from "nitro/config";
export default defineNitroConfig({
  storage: {
    cache: {
      // production cache storage
    },
  },
  devStorage: {
    cache: {
      // development cache storage
    },
  },
});
Options
The defineCachedHandler and defineCachedFunction functions accept the following options:
Shared options
These options are available for both defineCachedHandler and defineCachedFunction:
- base: Storage mount point used for the cache. Defaults to cache.
- name: Name part of the cache key. Defaults to the function name if available, '_' otherwise.
- group: Group part of the cache key. Defaults to 'nitro/handlers' for handlers and 'nitro/functions' for functions.
- getKey: A custom function that receives the same arguments as the cached function and returns a key (String). If not provided, a built-in hash function will be used to generate a key based on the function arguments. For cached handlers, the key is derived from the request URL path and search params.
- integrity: A value that invalidates the cache when changed. By default, it is computed from the function code, so in development the cache is invalidated when the function code changes.
- maxAge: Maximum age (in seconds) that the cache is considered valid. Defaults to 1 (second).
- staleMaxAge: Maximum age (in seconds) that a stale cached value may be served. If set to -1, a stale value will still be sent to the client while the cache updates in the background. Defaults to 0 (disabled).
- swr: Enable stale-while-revalidate behavior to serve a stale cached response while asynchronously revalidating it. When enabled, stale cached values are returned immediately while revalidation happens in the background. When disabled, the caller waits for the fresh value before responding (the stale entry is cleared). Defaults to true.
- shouldInvalidateCache: A function that returns a boolean to invalidate the current cache and create a new one.
- shouldBypassCache: A function that returns a boolean to bypass the current cache without invalidating the existing entry.

By default, errors are logged to the console and captured by the Nitro error handler.
Handler-only options
These options are only available for defineCachedHandler:
- When set to true, skip full response caching and only handle conditional request headers (if-none-match, if-modified-since) for 304 Not Modified responses. The handler is called on every request but benefits from conditional caching.
- varies: Headers not listed in varies are stripped from the request before calling the handler to ensure consistent cache hits. For multi-tenant environments, you may want to pass ['host', 'x-forwarded-host'] to ensure these headers are not discarded and that the cache is unique per tenant.

Function-only options
These options are only available for defineCachedFunction:
- A validation hook that can return false to treat the cached entry as invalid and trigger re-resolution.

SWR behavior
The stale-while-revalidate (SWR) pattern is enabled by default (swr: true). Understanding how it interacts with other options:
| swr | maxAge | Behavior |
|---|---|---|
| true (default) | 1 (default) | Cache for 1 second, serve stale while revalidating |
| true | 3600 | Cache for 1 hour, serve stale while revalidating |
| false | 3600 | Cache for 1 hour, wait for fresh value when expired |
| true | 3600 with staleMaxAge: 600 | Cache for 1 hour, serve stale for up to 10 minutes while revalidating |
When swr is enabled and a cached value exists but has expired:

1. The stale cached value is returned immediately to the client.
2. The function/handler is called in the background to refresh the cache.
3. On edge workers, event.waitUntil is used to keep the background refresh alive.

When swr is disabled and a cached value has expired:

1. The stale entry is cleared.
2. The client waits for the function/handler to resolve with a fresh value.
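The two paths above can be condensed into a toy resolver around a plain Map (a sketch under stated assumptions, not Nitro's implementation):

```typescript
// Toy SWR cache: serve fresh hits, serve stale hits while refreshing in the
// background when swr is true, otherwise clear the entry and wait.
type Entry<T> = { value: T; expires: number };

async function resolveWithSwr<T>(
  cache: Map<string, Entry<T>>,
  key: string,
  swr: boolean,
  maxAgeMs: number,
  resolver: () => Promise<T>
): Promise<T> {
  const entry = cache.get(key);
  const refresh = async () => {
    const value = await resolver();
    cache.set(key, { value, expires: Date.now() + maxAgeMs });
    return value;
  };
  if (entry && Date.now() <= entry.expires) {
    return entry.value; // fresh hit
  }
  if (entry && swr) {
    // Stale hit with SWR: return the stale value now, refresh in the
    // background (an edge runtime would wrap this in event.waitUntil).
    void refresh();
    return entry.value;
  }
  // Miss, or stale without SWR: drop the stale entry and wait for fresh data.
  cache.delete(key);
  return refresh();
}
```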
Cache keys and invalidation
When using the defineCachedFunction or defineCachedHandler functions, the cache key is generated using the following pattern:
`${options.base}:${options.group}:${options.name}:${options.getKey(...args)}.json`
For example, the following function:
import { defineCachedFunction } from "nitro/cache";
const getAccessToken = defineCachedFunction(() => {
  return String(Date.now());
}, {
  maxAge: 10,
  name: "getAccessToken",
  getKey: () => "default",
});
Will generate the following cache key:
cache:nitro/functions:getAccessToken:default.json
You can invalidate the cached function entry with:
import { useStorage } from "nitro/storage";
await useStorage('cache').removeItem('nitro/functions:getAccessToken:default.json')
When using the varies option, hashes of the specified header values are appended to the cache key.

Responses with a status code >= 400 or with an undefined body are not cached. This prevents caching error responses.
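A simplified sketch of the varies key extension described above; the helper names and the toy djb2 hash are illustrative assumptions, since Nitro uses its own hashing internally:

```typescript
// Toy djb2 hash, stand-in for Nitro's internal hashing.
function hashValue(value: string): string {
  let hash = 5381;
  for (let i = 0; i < value.length; i++) {
    hash = ((hash << 5) + hash + value.charCodeAt(i)) >>> 0;
  }
  return hash.toString(36);
}

// Appends hashes of the varies header values to a base cache key, so
// requests differing in those headers get distinct cache entries.
function keyWithVaries(
  baseKey: string,
  varies: string[],
  headers: Record<string, string | undefined>
): string {
  const parts = varies
    .map((name) => headers[name.toLowerCase()])
    .filter((v): v is string => v !== undefined)
    .map(hashValue);
  return parts.length ? `${baseKey}:${parts.join(":")}` : baseKey;
}
```

For example, with varies: ['host'], two tenants on different hostnames produce different keys for the same route.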