# Demos and architectures URL: https://developers.cloudflare.com/workers/demos/ import { ExternalResources, GlossaryTooltip, ResourcesBySelector } from "~/components" Learn how you can use Workers within your existing application and architecture. ## Demos Explore the following demo applications for Workers. ## Reference architectures Explore the following reference architectures that use Workers: --- # Glossary URL: https://developers.cloudflare.com/workers/glossary/ import { Glossary } from "~/components"; Review the definitions for terms used across Cloudflare's Workers documentation. --- # Cloudflare Workers URL: https://developers.cloudflare.com/workers/ import { Description, RelatedProduct, LinkButton } from "~/components"; A serverless platform for building, deploying, and scaling apps across [Cloudflare's global network](https://www.cloudflare.com/network/) with a single command — no infrastructure to manage, no complex configuration. With Cloudflare Workers, you can expect to: - Deliver fast performance with high reliability anywhere in the world - Build full-stack apps with your framework of choice, including [React](/workers/frameworks/framework-guides/react-router/), [Vue](/workers/frameworks/framework-guides/vue/), [Svelte](/workers/frameworks/framework-guides/svelte/), [Next](/workers/frameworks/framework-guides/nextjs/), [Astro](/workers/frameworks/framework-guides/astro/), [React Router](/workers/frameworks/framework-guides/react-router/), [and more](/workers/frameworks/) - Use your preferred language, including [JavaScript](/workers/languages/javascript/), [TypeScript](/workers/languages/typescript/), [Python](/workers/languages/python/), [Rust](/workers/languages/rust/), [and more](/workers/runtime-apis/webassembly/) - Gain deep visibility and insight with built-in [observability](/workers/observability/logs/) - Get 
started for free and grow with flexible [pricing](/workers/platform/pricing/), affordable at any scale Get started with your first project: Deploy a template Deploy with Wrangler CLI --- ## Build with Workers
#### Front-end applications Deploy [static assets](/workers/static-assets/) to Cloudflare's [CDN & cache](/cache/) for fast rendering
#### Back-end applications Build APIs and connect to data stores with [Smart Placement](/workers/configuration/smart-placement/) to optimize latency
#### Serverless AI inference Run LLMs, generate images, and more with [Workers AI](/workers-ai/)
#### Background jobs Schedule [cron jobs](/workers/configuration/cron-triggers/), run durable [Workflows](/workflows/), and integrate with [Queues](/queues/)
--- ## Integrate with Workers Connect to external services like databases, APIs, and storage via [Bindings](/workers/runtime-apis/bindings/), enabling functionality with just a few lines of code: **Storage** Scalable stateful storage for real-time coordination. Serverless SQL database built for fast, global queries. Low-latency key-value storage for fast, edge-cached reads. Guaranteed delivery with no charges for egress bandwidth. Connect to your external database with accelerated queries, cached at the edge. **Compute** Machine learning models powered by serverless GPUs. Durable, long-running operations with automatic retries. Vector database for AI-powered semantic search. Zero-egress object storage for cost-efficient data access. Programmatic serverless browser instances. **Media** Global caching for high-performance, low-latency delivery. Streamlined image infrastructure from a single API. --- Want to connect with the Workers community? [Join our Discord](https://discord.cloudflare.com) --- # Playground URL: https://developers.cloudflare.com/workers/playground/ import { LinkButton } from "~/components"; :::note[Browser support] The Cloudflare Workers Playground is currently only supported in Firefox and Chrome desktop browsers. In Safari, it will show a `PreviewRequestFailed` error message. ::: The quickest way to experiment with Cloudflare Workers is in the [Playground](https://workers.cloudflare.com/playground). It does not require any setup or authentication. The Playground is a sandbox which gives you an instant way to preview and test a Worker directly in the browser. The Playground uses the same editor as the authenticated experience. The Playground provides the ability to [share](#share) the code you write as well as [deploy](#deploy) it instantly to Cloudflare's global network. This way, you can try new things out and deploy them when you are ready. 
Launch the Playground ## Hello Cloudflare Workers When you arrive in the Playground, you will see this default code: ```js import welcome from "welcome.html"; /** * @typedef {Object} Env */ export default { /** * @param {Request} request * @param {Env} env * @param {ExecutionContext} ctx * @returns {Response} */ fetch(request, env, ctx) { console.log("Hello Cloudflare Workers!"); return new Response(welcome, { headers: { "content-type": "text/html", }, }); }, }; ``` This is an example of a multi-module Worker that is receiving a [request](/workers/runtime-apis/request/), logging a message to the console, and then returning a [response](/workers/runtime-apis/response/) body containing the content from `welcome.html`. Refer to the [Fetch handler documentation](/workers/runtime-apis/handlers/fetch/) to learn more. ## Use the Playground As you edit the default code, the Worker will auto-update such that the preview on the right shows your Worker running just as it would in a browser. If your Worker uses URL paths, you can enter those in the input field on the right to navigate to them. The Playground provides type-checking via JSDoc comments and [`workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types). The Playground also provides pretty error pages in the event of application errors. To test a raw HTTP request (for example, to test a `POST` request), go to the **HTTP** tab and select **Send**. You can add and edit headers via this panel, as well as edit the body of a request. ## DevTools For debugging Workers inside the Playground, use the developer tools at the bottom of the Playground's preview panel to view `console.logs`, network requests, memory and CPU usage. The developer tools for the Workers Playground work similarly to the developer tools in Chrome or Firefox, and are the same developer tools users have access to in the [Wrangler CLI](/workers/wrangler/install-and-update/) and the authenticated dashboard. 
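As a quick way to exercise these panels, here is a sketch of a Worker that both logs and makes an outgoing subrequest (the upstream URL is purely illustrative):

```javascript
// Sketch: a Worker whose activity shows up in the Playground DevTools.
// console.log output appears in the Console panel, and the outgoing
// fetch() call appears in the Network panel.
const worker = {
  async fetch(request, env, ctx) {
    const { pathname } = new URL(request.url);
    console.log("Handling request for", pathname); // visible in Console

    // Any subrequest made with fetch() is listed in the Network panel.
    const upstream = await fetch("https://example.com/");
    const body = await upstream.text();

    return new Response(body, {
      headers: { "content-type": "text/html" },
    });
  },
};

export default worker;
```

Paste this into the editor and the preview panel re-runs it on each change, with the log line and subrequest visible in the panels described below.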
### Network tab **Network** shows the outgoing requests from your Worker — that is, any calls to `fetch` inside your Worker code. ### Console Logs The console displays the output of any calls to `console.log` that were called for the current preview run as well as any other preview runs in that session. ### Sources **Sources** displays the sources that make up your Worker. Note that KV, text, and secret bindings are only accessible when authenticated with an account. This means you must be logged in to the dashboard, or use [`wrangler dev`](/workers/wrangler/commands/#dev) with your account credentials. ## Share To share what you have created, select **Copy Link** in the top right of the screen. This will copy a unique URL to your clipboard that you can share with anyone. These links do not expire, so you can bookmark your creation and share it at any time. Users that open a shared link will see the Playground with the shared code and preview. ## Deploy You can deploy a Worker from the Playground. If you are already logged in, you can review the Worker before deploying. Otherwise, you will be taken through the first-time user onboarding flow before you can review and deploy. Once deployed, your Worker will get its own unique URL and be available almost instantly on Cloudflare's global network. From here, you can add [Custom Domains](/workers/configuration/routing/custom-domains/), [storage resources](/workers/platform/storage-options/), and more. --- # Agents URL: https://developers.cloudflare.com/workers-ai/agents/ import { LinkButton } from "~/components"

Build AI assistants that can perform complex tasks on behalf of your users using Cloudflare Workers AI and Agents.

Go to Agents documentation
--- # Changelog URL: https://developers.cloudflare.com/workers-ai/changelog/ import { ProductReleaseNotes } from "~/components"; {/* */} --- # Cloudflare Workers AI URL: https://developers.cloudflare.com/workers-ai/ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, Render, LinkButton, Flex } from "~/components" Run machine learning models, powered by serverless GPUs, on Cloudflare's global network. Workers AI allows you to run AI models in a serverless way, without having to worry about scaling, maintaining, or paying for unused infrastructure. You can invoke models running on GPUs on Cloudflare's network from your own code — from [Workers](/workers/), [Pages](/pages/), or anywhere via [the Cloudflare API](/api/resources/ai/methods/run/). Workers AI gives you access to: - **50+ [open-source models](/workers-ai/models/)**, available as a part of our model catalog - Serverless, **pay-for-what-you-use** [pricing model](/workers-ai/platform/pricing/) - All as part of a **fully-featured developer platform**, including [AI Gateway](/ai-gateway/), [Vectorize](/vectorize/), [Workers](/workers/) and more...
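As a minimal sketch, invoking a model from a Worker through the `AI` binding looks like the following (the binding name and model ID are illustrative assumptions; the binding must be configured in your Wrangler file):

```javascript
// Sketch: calling a Workers AI model through the AI binding.
// The binding name (AI) and the model ID below are assumptions
// for illustration; pick any model from the catalog.
const worker = {
  async fetch(request, env, ctx) {
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt: "Explain serverless GPUs in one sentence.",
    });
    // The shape of `result` depends on the model; text-generation
    // models return an object with a `response` string.
    return Response.json(result);
  },
};

export default worker;
```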
Get started Watch a Workers AI demo
*** ## Features Workers AI comes with a curated set of popular open-source models that enable you to perform tasks such as image classification, text generation, object detection and more. *** ## Related products Observe and control your AI applications with caching, rate limiting, request retries, model fallback, and more. Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM. Build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale. Create full-stack applications that are instantly deployed to the Cloudflare global network. Store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services. Create new serverless SQL databases to query from your Workers and Pages projects. A globally distributed coordination API with strongly consistent storage. Create global, low-latency, key-value data storage. *** ## More resources Build and deploy your first Workers AI application. Learn about Free and Paid plans. Learn about Workers AI limits. Learn how you can build and deploy ambitious AI applications to Cloudflare's global network. Learn which storage option is best for your project. Connect with the Workers community on Discord to ask questions, share what you are building, and discuss the platform with other developers. Follow @CloudflareDev on Twitter to learn about product announcements, and what is new in Cloudflare Workers. 
--- # CI/CD URL: https://developers.cloudflare.com/workers/ci-cd/ You can set up continuous integration and continuous deployment (CI/CD) for your Workers by using either the integrated build system, [Workers Builds](#workers-builds), or [external providers](#external-cicd) to optimize your development workflow. ## Why use CI/CD? Using a CI/CD pipeline to deploy your Workers is a best practice because it: - Automates the build and deployment process, removing the need for manual `wrangler deploy` commands. - Ensures consistent builds and deployments across your team by using the same source control management (SCM) system. - Reduces variability and errors by deploying in a uniform environment. - Simplifies managing access to production credentials. ## Which CI/CD should I use? Choose [Workers Builds](/workers/ci-cd/builds) if you want a fully integrated solution within Cloudflare's ecosystem that requires minimal setup and configuration for GitHub or GitLab users. We recommend using [external CI/CD providers](/workers/ci-cd/external-cicd) if: - You have a self-hosted instance of GitHub or GitLab, which is currently not supported in Workers Builds' [Git integration](/workers/ci-cd/builds/git-integration/) - You are using a Git provider that is not GitHub or GitLab ## Workers Builds [Workers Builds](/workers/ci-cd/builds) is Cloudflare's native CI/CD system that allows you to integrate with GitHub or GitLab to automatically deploy changes with each new push to a selected branch (e.g. `main`). ![Workers Builds Workflow Diagram](~/assets/images/workers/platform/ci-cd/workers-builds-workflow.png) Ready to streamline your Workers deployments? Get started with [Workers Builds](/workers/ci-cd/builds/#get-started). ## External CI/CD You can also choose to set up your CI/CD pipeline with an external provider. 
- [GitHub Actions](/workers/ci-cd/external-cicd/github-actions/) - [GitLab CI/CD](/workers/ci-cd/external-cicd/gitlab-cicd/) --- # Connect to databases URL: https://developers.cloudflare.com/workers/databases/connecting-to-databases/ Cloudflare Workers can connect to and query your data in both SQL and NoSQL databases, including: - Cloudflare's own [D1](/d1/), a serverless SQL-based database. - Traditional hosted relational databases, including Postgres and MySQL, using [Hyperdrive](/hyperdrive/) (recommended) to significantly speed up access. - Serverless databases, including Supabase, MongoDB Atlas, PlanetScale, and Prisma. ### D1 SQL database D1 is Cloudflare's own SQL-based, serverless database. It is optimized for global access from Workers, and can scale out with multiple, smaller (10GB) databases, such as per-user, per-tenant or per-entity databases. Similar to some serverless databases, D1 pricing is based on query and storage costs. | Database | Library or Driver | Connection Method | | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- | | [D1](/d1/) | [Workers binding](/d1/worker-api/), integrates with [Prisma](https://www.prisma.io/), [Drizzle](https://orm.drizzle.team/), and other ORMs | [Workers binding](/d1/worker-api/), [REST API](/api/resources/d1/subresources/database/methods/create/) | ### Traditional SQL databases Traditional databases use SQL drivers that use [TCP sockets](/workers/runtime-apis/tcp-sockets/) to connect to the database. TCP is the de-facto standard protocol that many databases, such as PostgreSQL and MySQL, use for client connectivity. These drivers are also widely compatible with your preferred ORM libraries and query builders. 
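For example, a Worker typically reaches such a database through a Hyperdrive binding declared in its Wrangler configuration. A sketch, in which the binding name and configuration ID are placeholders:

```toml
# Hypothetical Hyperdrive binding; create the configuration first with
# `wrangler hyperdrive create`, then paste its ID here.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
```

The database driver then connects using `env.HYPERDRIVE.connectionString` instead of a hard-coded database URL, so the Worker's queries are routed through Hyperdrive's connection pool.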
This also includes serverless databases that are PostgreSQL or MySQL-compatible, like [Supabase](/hyperdrive/examples/connect-to-postgres/supabase/), [Neon](/hyperdrive/examples/connect-to-postgres/neon/) or [PlanetScale](/hyperdrive/examples/connect-to-mysql/planetscale/), which can be connected to using either native [TCP sockets and Hyperdrive](/hyperdrive/) or [serverless HTTP-based drivers](/workers/databases/connecting-to-databases/#serverless-databases) (detailed below). | Database | Integration | Library or Driver | Connection Method | | --- | --- | --- | --- | | [Postgres](/workers/tutorials/postgres/) | Direct connection | [Postgres.js](https://github.com/porsager/postgres), [node-postgres](https://node-postgres.com/) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](/hyperdrive/) for optimal performance (optional, recommended) | | [MySQL](/workers/tutorials/mysql/) | Direct connection | [mysql2](https://github.com/sidorares/node-mysql2), [mysql](https://github.com/mysqljs/mysql) | [TCP Socket](/workers/runtime-apis/tcp-sockets/) via database driver, using [Hyperdrive](/hyperdrive/) for optimal performance (optional, recommended) | :::note[Speed up database connectivity with Hyperdrive] Connecting to SQL databases with TCP sockets requires multiple roundtrips to establish a secure connection before a query to the database is made. Since a connection must be re-established on every Worker invocation, this adds unnecessary latency. 
[Hyperdrive](/hyperdrive/) solves this by pooling database connections globally to eliminate unnecessary roundtrips and speed up your database access. Learn more about [how Hyperdrive works](/hyperdrive/configuration/how-hyperdrive-works/). ::: ### Serverless databases Serverless databases provide HTTP-based proxies and drivers, also known as serverless drivers. These address the lack of connection reuse between Worker invocations, similarly to [Hyperdrive](/hyperdrive/) for traditional SQL databases. By providing a way to query your database with HTTP, these serverless databases and drivers eliminate several roundtrips needed to establish a secure connection. | Database | Integration | Library or Driver | Connection Method | | --- | --- | --- | --- | | [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Yes](/workers/databases/native-integrations/planetscale/) | [@planetscale/database](https://github.com/planetscale/database-js) | API via client library | | [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Yes](/workers/databases/native-integrations/supabase/) | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | API via client library | | [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | No | [prisma](https://github.com/prisma/prisma) | API via client library | | [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | 
[Yes](/workers/databases/native-integrations/neon/) | [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | API via client library | | [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | No | API | GraphQL API via fetch() | | [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [Yes](/workers/databases/native-integrations/upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library | | [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | No | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library | :::note[Easier setup with database integrations] [Database Integrations](/workers/databases/native-integrations/) simplify the authentication for serverless database drivers by managing credentials on your behalf and includes support for PlanetScale, Neon and Supabase. If you do not see an integration listed or have an integration to add, complete and submit the [Cloudflare Developer Platform Integration form](https://forms.gle/iaUqLWE8aezSEhgd6). ::: Once you have installed the necessary packages, use the APIs provided by these packages to connect to your database and perform operations on it. Refer to detailed links for service-specific instructions. ## Authentication If your database requires authentication, use Wrangler secrets to securely store your credentials. 
To do this, create a secret in your Cloudflare Workers project using the following [`wrangler secret`](/workers/wrangler/commands/#secret) command: ```sh wrangler secret put SECRET_NAME ``` Then, retrieve the secret value in your code using the following code snippet: ```js const secretValue = env.SECRET_NAME; ``` Use the secret value to authenticate with the external service. For example, if the external service requires an API key or database username and password for authentication, include these using the relevant service's library or API. For services that require mTLS authentication, use [mTLS certificates](/workers/runtime-apis/bindings/mtls) to present a client certificate. ## Next steps - Learn how to connect to [an existing PostgreSQL database](/hyperdrive/) with Hyperdrive. - Discover [other storage options available](/workers/platform/storage-options/) for use with Workers. - [Create your first database](/d1/get-started/) with Cloudflare D1. --- # Databases URL: https://developers.cloudflare.com/workers/databases/ import { DirectoryListing } from "~/components"; Explore database integrations for your Worker projects. --- # Compatibility dates URL: https://developers.cloudflare.com/workers/configuration/compatibility-dates/ import { WranglerConfig } from "~/components"; Cloudflare regularly updates the Workers runtime. These updates apply to all Workers globally and should never cause a Worker that is already deployed to stop functioning. Sometimes, though, some changes may be backwards-incompatible. In particular, there might be bugs in the runtime API that existing Workers may inadvertently depend upon. Cloudflare implements bug fixes that new Workers can opt into while existing Workers will continue to see the buggy behavior to prevent breaking deployed Workers. The compatibility date and flags are how you, as a developer, opt into these runtime changes. 
[Compatibility flags](/workers/configuration/compatibility-flags) will often have a date in which they are enabled by default, and so, by specifying a `compatibility_date` for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. ## Setting compatibility date When you start your project, you should always set `compatibility_date` to the current date. You should occasionally update the `compatibility_date` field. When updating, you should refer to the [compatibility flags](/workers/configuration/compatibility-flags) page to find out what has changed, and you should be careful to test your Worker to see if the changes affect you, updating your code as necessary. The new compatibility date takes effect when you next run the [`npx wrangler deploy`](/workers/wrangler/commands/#deploy) command. There is no need to update your `compatibility_date` if you do not want to. The Workers runtime will support old compatibility dates forever. If, for some reason, Cloudflare finds it is necessary to make a change that will break live Workers, Cloudflare will actively contact affected developers. That said, Cloudflare aims to avoid this if at all possible. However, even though you do not need to update the `compatibility_date` field, it is a good practice to do so for two reasons: 1. Sometimes, new features can only be made available to Workers that have a current `compatibility_date`. To access the latest features, you need to stay up-to-date. 2. Generally, other than the [compatibility flags](/workers/configuration/compatibility-flags) page, the Workers documentation may only describe the current `compatibility_date`, omitting information about historical behavior. If your Worker uses an old `compatibility_date`, you will need to continuously refer to the compatibility flags page in order to check if any of the APIs you are using have changed. 
#### Via Wrangler The compatibility date can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/). ```toml # Opt into backwards-incompatible changes through April 5, 2022. compatibility_date = "2022-04-05" ``` #### Via the Cloudflare Dashboard When a Worker is created through the Cloudflare Dashboard, the compatibility date is automatically set to the current date. The compatibility date can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API The compatibility date can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. If a compatibility date is not specified on upload via the API, it defaults to the oldest compatibility date, before any flags took effect (2021-11-02). When creating new Workers, it is highly recommended to set the compatibility date to the current date when uploading via the API. --- # Compatibility flags URL: https://developers.cloudflare.com/workers/configuration/compatibility-flags/ import { CompatibilityFlags, WranglerConfig, Render } from "~/components"; Compatibility flags enable specific features. They can be useful if you want to help the Workers team test upcoming changes that are not yet enabled by default, or if you need to hold back a change that your code depends on but still want to apply other compatibility changes. Compatibility flags will often have a date in which they are enabled by default, and so, by specifying a [`compatibility_date`](/workers/configuration/compatibility-dates) for your Worker, you can quickly enable all of these various compatibility flags up to, and including, that date. 
## Setting compatibility flags You may provide a list of `compatibility_flags`, which enable or disable specific changes. #### Via Wrangler Compatibility flags can be set in a Worker's [Wrangler configuration file](/workers/wrangler/configuration/). This example enables the specific flag `formdata_parser_supports_files`, which is described [below](/workers/configuration/compatibility-flags/#formdata-parsing-supports-file). As of the specified date, `2021-09-14`, this particular flag was not yet enabled by default, but, by specifying it in `compatibility_flags`, we can enable it anyway. `compatibility_flags` can also be used to disable changes that became the default in the past. ```toml # Opt into backwards-incompatible changes through September 14, 2021. compatibility_date = "2021-09-14" # Also opt into an upcoming fix to the FormData API. compatibility_flags = [ "formdata_parser_supports_files" ] ``` #### Via the Cloudflare Dashboard Compatibility flags can be updated in the Workers settings on the [Cloudflare dashboard](https://dash.cloudflare.com/). #### Via the Cloudflare API Compatibility flags can be set when uploading a Worker using the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) in the request body's `metadata` field. ## Node.js compatibility flag :::note [The `nodejs_compat` flag](/workers/runtime-apis/nodejs/) also enables `nodejs_compat_v2` as long as your compatibility date is 2024-09-23 or later. The v2 flag improves runtime Node.js compatibility by bundling additional polyfills and globals into your Worker. However, this improvement increases bundle size. If your compatibility date is 2024-09-22 or before and you want to enable v2, add the `nodejs_compat_v2` in addition to the `nodejs_compat` flag. 
If your compatibility date is after 2024-09-23, but you want to disable v2 to avoid increasing your bundle size, add the `no_nodejs_compat_v2` flag in addition to the `nodejs_compat` flag. ::: A [growing subset](/workers/runtime-apis/nodejs/) of Node.js APIs are available directly as [Runtime APIs](/workers/runtime-apis/nodejs), with no need to add polyfills to your own code. To enable these APIs in your Worker, add the `nodejs_compat` compatibility flag to your [Wrangler configuration file](/workers/wrangler/configuration/): ```toml title="wrangler.toml" compatibility_flags = [ "nodejs_compat" ] ``` As additional Node.js APIs are added, they will be made available under the `nodejs_compat` compatibility flag. Unlike most other compatibility flags, we do not expect `nodejs_compat` to become active by default at a future date. The Node.js `AsyncLocalStorage` API is a particularly useful feature for Workers. To enable only the `AsyncLocalStorage` API, use the `nodejs_als` compatibility flag. ```toml title="wrangler.toml" compatibility_flags = [ "nodejs_als" ] ``` ## Flags history Newest flags are listed first. ## Experimental flags These flags can be enabled via `compatibility_flags`, but are not yet scheduled to become default on any particular date. --- # Cron Triggers URL: https://developers.cloudflare.com/workers/configuration/cron-triggers/ import { Render, WranglerConfig, TabItem, Tabs } from "~/components"; ## Background Cron Triggers allow users to map a cron expression to a Worker using a [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/) that enables Workers to be executed on a schedule. 
Cron Triggers are ideal for running periodic jobs, such as maintenance tasks or calling third-party APIs to collect up-to-date data. Workers scheduled by Cron Triggers will run on underutilized machines to make the best use of Cloudflare's capacity and route traffic efficiently. :::note Cron Triggers can also be combined with [Workflows](/workflows/) to trigger multi-step, long-running tasks. You can [bind to a Workflow](/workflows/build/workers-api/) directly from your Cron Trigger to execute a Workflow on a schedule. ::: Cron Triggers execute on UTC time. ## Add a Cron Trigger ### 1. Define a scheduled event listener To respond to a Cron Trigger, you must add a [`"scheduled"` handler](/workers/runtime-apis/handlers/scheduled/) to your Worker. ```js export default { async scheduled(controller, env, ctx) { console.log("cron processed"); }, }; ``` ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); }, }; ``` ```python from workers import handler @handler async def on_scheduled(controller, env, ctx): print("cron processed") ``` Refer to the following additional examples to write your code: - [Setting Cron Triggers](/workers/examples/cron-trigger/) - [Multiple Cron Triggers](/workers/examples/multiple-cron-triggers/) ### 2. Update configuration :::note[Cron Trigger changes take time to propagate.] Changes such as adding a new Cron Trigger, updating an old Cron Trigger, or deleting a Cron Trigger may take several minutes (up to 15 minutes) to propagate to the Cloudflare global network. ::: After you have updated your Worker code to include a `"scheduled"` event, you must update your Worker project configuration. #### Via the [Wrangler configuration file](/workers/wrangler/configuration/) If a Worker is managed with Wrangler, Cron Triggers should be exclusively managed through the [Wrangler configuration file](/workers/wrangler/configuration/). 
Refer to the example below for a Cron Triggers configuration:

```toml
[triggers]
# Schedule cron triggers:
# - At every 3rd minute
# - At 15:00 (UTC) on first day of the month
# - At 23:59 (UTC) on the last weekday of the month
crons = [ "*/3 * * * *", "0 15 1 * *", "59 23 LW * *" ]
```

You can also set a different Cron Trigger for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. For example:

```toml
[env.dev.triggers]
crons = ["0 * * * *"]
```

#### Via the dashboard

To add Cron Triggers in the Cloudflare dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings** > **Triggers** > **Cron Triggers**.

## Supported cron expressions

Cloudflare supports cron expressions with five fields, along with most [Quartz scheduler](http://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html#introduction)-like cron syntax extensions:

| Field         | Values                                                             | Characters   |
| ------------- | ------------------------------------------------------------------ | ------------ |
| Minute        | 0-59                                                               | \* , - /     |
| Hours         | 0-23                                                               | \* , - /     |
| Days of Month | 1-31                                                               | \* , - / L W |
| Months        | 1-12, case-insensitive 3-letter abbreviations ("JAN", "aug", etc.) | \* , - /     |
| Weekdays      | 1-7, case-insensitive 3-letter abbreviations ("MON", "fri", etc.)  | \* , - / L # |

:::note
Days of the week go from 1 = Sunday to 7 = Saturday, which is different on some other cron systems (where 0 = Sunday and 6 = Saturday). To avoid ambiguity, you may prefer to use the three-letter abbreviations (e.g. `SUN` rather than 1).
:::

### Examples

Some common time intervals that may be useful for setting up your Cron Trigger:

- `* * * * *` - At every minute
- `*/30 * * * *` - At every 30th minute
- `45 * * * *` - On the 45th minute of every hour
- `0 17 * * sun` or `0 17 * * 1` - 17:00 (UTC) on Sunday
- `10 7 * * mon-fri` or `10 7 * * 2-6` - 07:10 (UTC) on weekdays
- `0 15 1 * *` - 15:00 (UTC) on first day of the month
- `0 18 * * 6L` or `0 18 * * friL` - 18:00 (UTC) on the last Friday of the month
- `59 23 LW * *` - 23:59 (UTC) on the last weekday of the month

## Test Cron Triggers locally

Test Cron Triggers using Wrangler with [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/cdn-cgi/handler/scheduled` route which can be used to test with an HTTP request.

```sh
curl "http://localhost:8787/cdn-cgi/handler/scheduled"
```

To simulate different cron patterns, a `cron` query parameter can be passed in.

```sh
curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*"
```

Optionally, you can also pass a `time` query parameter to override `controller.scheduledTime` in your scheduled event listener.

```sh
curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*&time=1745856238"
```

## View past events

To view the execution history of Cron Triggers, view **Cron Events**:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. In **Overview**, select your **Worker**.
4. Select **Settings**.
5. Under **Trigger Events**, select **View events**.

Cron Events stores the 100 most recent invocations of the Cron scheduled event. [Workers Logs](/workers/observability/logs/workers-logs) also records invocation logs for the Cron Trigger with a longer retention period and a filter & query interface. If you are interested in an API to access Cron Events, use Cloudflare's [GraphQL Analytics API](/analytics/graphql-api).
:::note
It can take up to 30 minutes before events are displayed in **Past Cron Events** when creating a new Worker or changing a Worker's name.
:::

Refer to [Metrics and Analytics](/workers/observability/metrics-and-analytics/) for more information.

## Remove a Cron Trigger

### Via the dashboard

To delete a Cron Trigger on a deployed Worker via the dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages**, and select your Worker.
3. Go to **Triggers** > select the three dot icon next to the Cron Trigger you want to remove > **Delete**.

:::note
You can only delete Cron Triggers using the Cloudflare dashboard (and not through your Wrangler file).
:::

## Limits

Refer to [Limits](/workers/platform/limits/) to track the maximum number of Cron Triggers per Worker.

## Green Compute

With Green Compute enabled, your Cron Triggers will only run on Cloudflare points of presence that are located in data centers that are powered purely by renewable energy. Organizations may claim that they are powered by 100 percent renewable energy if they have procured sufficient renewable energy to account for their overall energy use.

Renewable energy can be purchased in a number of ways, including through on-site generation (wind turbines, solar panels), directly from renewable energy producers through contractual agreements called Power Purchase Agreements (PPA), or in the form of Renewable Energy Credits (REC, IRECs, GoOs) from an energy credit market.

Green Compute can be configured at the account level:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In the **Account details** section, find **Compute Setting**.
4. Select **Change**.
5. Select **Green Compute**.
6. Select **Confirm**.
## Related resources

- [Triggers](/workers/wrangler/configuration/#triggers) - Review Wrangler configuration file syntax for Cron Triggers.
- Learn how to access Cron Triggers in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# Environment variables

URL: https://developers.cloudflare.com/workers/configuration/environment-variables/

import { Render, TabItem, Tabs, WranglerConfig } from "~/components";

## Background

You can add environment variables, which are a type of binding, to attach text strings or JSON values to your Worker. Environment variables are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/).

Text strings and JSON values are not encrypted and are useful for storing application configuration.

## Add environment variables via Wrangler

To add environment variables using Wrangler, define text and JSON via the `[vars]` configuration in your Wrangler file. In the following example, `API_HOST` and `API_ACCOUNT_ID` are text values and `SERVICE_X_DATA` is a JSON value.

Refer to the following example on how to access the `API_HOST` environment variable in your Worker code:

```js
export default {
	async fetch(request, env, ctx) {
		return new Response(`API host: ${env.API_HOST}`);
	},
};
```

```ts
export interface Env {
	API_HOST: string;
}

export default {
	async fetch(request, env, ctx): Promise<Response> {
		return new Response(`API host: ${env.API_HOST}`);
	},
} satisfies ExportedHandler<Env>;
```

### Configuring different environments in Wrangler

[Environments in Wrangler](/workers/wrangler/environments) let you specify different configurations for the same Worker, including different values for `vars` in each environment.
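For reference, a `[vars]` table matching the example described above (text values `API_HOST` and `API_ACCOUNT_ID`, JSON value `SERVICE_X_DATA`) might look like the following sketch; all values here are illustrative placeholders:

```toml
[vars]
API_HOST = "example.com"
API_ACCOUNT_ID = "example_user"
# JSON values are written as TOML inline tables or arrays.
SERVICE_X_DATA = { URL = "service-x-api.dev.example", MY_ID = 123 }
```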
As `vars` is a [non-inheritable key](/workers/wrangler/configuration/#non-inheritable-keys), it is not inherited by environments and must be specified for each environment.

The example below sets up two environments, `staging` and `production`, with different values for `API_HOST`.

```toml
name = "my-worker-dev"

# top level environment
[vars]
API_HOST = "api.example.com"

[env.staging.vars]
API_HOST = "staging.example.com"

[env.production.vars]
API_HOST = "production.example.com"
```

To run Wrangler commands in specific environments, you can pass in the `--env` or `-e` flag. For example, you can develop the Worker in an environment called `staging` by running `npx wrangler dev --env staging`, and deploy it with `npx wrangler deploy --env staging`.

Learn about [environments in Wrangler](/workers/wrangler/environments).

## Add environment variables via the dashboard

To add environment variables via the dashboard:

1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Settings**.
5. Under **Variables and Secrets**, select **Add**.
6. Select a **Type**, input a **Variable name**, and input its **Value**. This variable will be made available to your Worker.
7. (Optional) To add multiple environment variables, select **Add variable**.
8. Select **Deploy** to implement your changes.

:::caution[Plaintext strings and secrets]
Select the **Secret** type if your environment variable is a [secret](/workers/configuration/secrets/). Alternatively, consider [Cloudflare Secrets Store](/secrets-store/), for account-level secrets.
:::

## Related resources

- Migrating environment variables from [Service Worker format to ES modules syntax](/workers/reference/migrate-to-module-workers/#environment-variables).
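One detail worth noting about the variables described above: a JSON value such as `SERVICE_X_DATA` arrives on `env` as an already-parsed object, not a string. A minimal sketch (the property name `URL` is hypothetical; in a real Worker the object would be the module's default export):

```js
// In a real Worker, this object would be `export default`.
const worker = {
	async fetch(request, env, ctx) {
		// JSON vars are exposed as parsed objects; no JSON.parse needed.
		const url = env.SERVICE_X_DATA.URL; // hypothetical property
		return new Response(`Service X is at: ${url}`);
	},
};
```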
---

# Configuration

URL: https://developers.cloudflare.com/workers/configuration/

import { DirectoryListing } from "~/components";

Configure your Worker project with various features and customizations.

---

# Multipart upload metadata

URL: https://developers.cloudflare.com/workers/configuration/multipart-upload-metadata/

import { Type, MetaInfo } from "~/components";

If you're using the [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/) or [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/) directly, `multipart/form-data` uploads require you to specify a `metadata` part. This metadata defines the Worker's configuration in JSON format, analogous to the [wrangler.toml file](/workers/wrangler/configuration/).

## Sample `metadata`

```json
{
	"main_module": "main.js",
	"bindings": [
		{
			"type": "plain_text",
			"name": "MESSAGE",
			"text": "Hello, world!"
		}
	],
	"compatibility_date": "2021-09-14"
}
```

## Attributes

The following attributes are configurable at the top level.

:::note
At a minimum, the `main_module` key is required to upload a Worker.
:::

* `main_module`
  * The part name that contains the module entry point of the Worker that will be executed. For example, `main.js`.
* `assets`
  * [Asset](/workers/static-assets/) configuration for a Worker.
  * `config`
    * [html_handling](/workers/static-assets/routing/advanced/html-handling/) determines the redirects and rewrites of requests for HTML content.
    * [not_found_handling](/workers/static-assets/routing/) determines the response when a request does not match a static asset.
  * `jwt` field provides a token authorizing assets to be attached to a Worker.
* `keep_assets`
  * Specifies whether assets should be retained from a previously uploaded Worker version; used in lieu of providing a completion token.
* `bindings` array\[object] optional
  * [Bindings](#bindings) to expose in the Worker.
* `placement`
  * [Smart placement](/workers/configuration/smart-placement/) object for the Worker.
  * `mode` field only supports `smart` for automatic placement.
* `compatibility_date`
  * [Compatibility Date](/workers/configuration/compatibility-dates/#setting-compatibility-date) indicating targeted support in the Workers runtime. Backwards incompatible fixes to the runtime following this date will not affect this Worker. Setting a `compatibility_date` is highly recommended; otherwise, on upload via the API, it defaults to the oldest compatibility date before any flags took effect (2021-11-02).
* `compatibility_flags` array\[string] optional
  * [Compatibility Flags](/workers/configuration/compatibility-flags/#setting-compatibility-flags) that enable or disable certain features in the Workers runtime. Used to enable upcoming features or opt in or out of specific changes not included in a `compatibility_date`.

## Additional attributes: [Workers Script Upload API](/api/resources/workers/subresources/scripts/methods/update/)

For [immediately deployed uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-and-deploy-it-immediately), the following **additional** attributes are configurable at the top level.

:::note
These attributes are **not available** for version uploads.
:::

* `migrations` array\[object] optional
  * [Durable Objects migrations](/durable-objects/reference/durable-objects-migrations/) to apply.
* `logpush`
  * Whether [Logpush](/cloudflare-for-platforms/cloudflare-for-saas/hostname-analytics/#logpush) is turned on for the Worker.
* `tail_consumers` array\[object] optional
  * [Tail Workers](/workers/observability/logs/tail-workers/) that will consume logs from the attached Worker.
* `tags` array\[string] optional
  * List of strings to use as tags for this Worker.
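As a concrete illustration of the sample `metadata` earlier on this page: the `plain_text` binding named `MESSAGE` surfaces as a plain string on the deployed Worker's `env`. A minimal sketch (in a real Worker, the object would be the module's default export):

```js
// Sketch: the sample metadata defines { "type": "plain_text", "name": "MESSAGE" },
// which appears as a string property on `env` at request time.
const worker = {
	async fetch(request, env, ctx) {
		return new Response(env.MESSAGE);
	},
};
```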
## Additional attributes: [Version Upload API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/)

For [version uploads](/workers/configuration/versions-and-deployments/#upload-a-new-version-to-be-gradually-deployed-or-deployed-at-a-later-time), the following **additional** attributes are configurable at the top level.

:::note
These attributes are **not available** for immediately deployed uploads.
:::

* `annotations`
  * Annotations object specific to the Worker version.
  * `workers/message` specifies a custom message for the version.
  * `workers/tag` specifies a custom identifier for the version.

## Bindings

Workers can interact with resources on the Cloudflare Developer Platform using [bindings](/workers/runtime-apis/bindings/). Refer to the JSON example below that shows how to add bindings in the `metadata` part (angle-bracket values are placeholders):

```json
{
	"bindings": [
		{ "type": "ai", "name": "<NAME>" },
		{ "type": "analytics_engine", "name": "<NAME>", "dataset": "<DATASET>" },
		{ "type": "assets", "name": "<NAME>" },
		{ "type": "browser_rendering", "name": "<NAME>" },
		{ "type": "d1", "name": "<NAME>", "id": "<ID>" },
		{ "type": "durable_object_namespace", "name": "<NAME>", "class_name": "<CLASS_NAME>" },
		{ "type": "hyperdrive", "name": "<NAME>", "id": "<ID>" },
		{ "type": "kv_namespace", "name": "<NAME>", "namespace_id": "<NAMESPACE_ID>" },
		{ "type": "mtls_certificate", "name": "<NAME>", "certificate_id": "<CERTIFICATE_ID>" },
		{ "type": "plain_text", "name": "<NAME>", "text": "<TEXT>" },
		{ "type": "queue", "name": "<NAME>", "queue_name": "<QUEUE_NAME>" },
		{ "type": "r2_bucket", "name": "<NAME>", "bucket_name": "<BUCKET_NAME>" },
		{ "type": "secret_text", "name": "<NAME>", "text": "<TEXT>" },
		{ "type": "service", "name": "<NAME>", "service": "<SERVICE>", "environment": "production" },
		{ "type": "tail_consumer", "service": "<SERVICE>" },
		{ "type": "vectorize", "name": "<NAME>", "index_name": "<INDEX_NAME>" },
		{ "type": "version_metadata", "name": "<NAME>" }
	]
}
```

---

# Preview URLs

URL: https://developers.cloudflare.com/workers/configuration/previews/

import { Render, WranglerConfig } from "~/components";

Preview URLs allow you to preview new versions of
your Worker without deploying it to production.

Every time you create a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker, a unique preview URL is generated. Preview URLs take the format: `<VERSION_PREFIX>-<WORKER_NAME>.<SUBDOMAIN>.workers.dev`.

New [versions](/workers/configuration/versions-and-deployments/#versions) of a Worker are created on [`wrangler deploy`](/workers/wrangler/commands/#deploy), [`wrangler versions upload`](/workers/wrangler/commands/#upload) or when you make edits on the Cloudflare dashboard. By default, preview URLs are enabled and available publicly.

Preview URLs can be:

- Integrated into CI/CD pipelines, allowing automatic generation of preview environments for every pull request.
- Used for collaboration between teams to test code changes in a live environment and verify updates.
- Used to test new API endpoints, validate data formats, and ensure backward compatibility with existing services.

When testing zone level performance or security features for a version, we recommend using [version overrides](/workers/configuration/versions-and-deployments/gradual-deployments/#version-overrides) so that your zone's performance and security settings apply.

:::note
Preview URLs are only available for Worker versions uploaded after 2024-09-25.

Minimum required Wrangler version: 3.74.0. Check your version by running `wrangler --version`. To update Wrangler, refer to [Install/Update Wrangler](/workers/wrangler/install-and-update/).
:::

## View preview URLs using wrangler

The [`wrangler versions upload`](/workers/wrangler/commands/#upload) command uploads a new [version](/workers/configuration/versions-and-deployments/#versions) of your Worker and returns a preview URL for each version uploaded.

## View preview URLs on the Workers dashboard

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers) and select your project.
2.
Go to the **Deployments** tab, and find the version you would like to view.

## Manage access to Preview URLs

By default, preview URLs are enabled and available publicly. You can use [Cloudflare Access](/cloudflare-one/policies/access/) to require visitors to authenticate before accessing preview URLs. You can limit access to yourself, your teammates, your organization, or anyone else you specify in your [access policy](/cloudflare-one/policies/access).

To limit your preview URLs to authorized emails only:

1. Log in to the [Cloudflare Access dashboard](https://one.dash.cloudflare.com/?to=/:account/access/apps).
2. Select your account.
3. Add an application.
4. Select **Self Hosted**.
5. Name your application (for example, "my-worker") and add your `workers.dev` subdomain as the **Application domain**. For example, to secure preview URLs for a Worker running on `my-worker.my-subdomain.workers.dev`, use:
   - Subdomain: `*-my-worker`
   - Domain: `my-subdomain.workers.dev`

   :::note
   You must press enter after you input your Application domain for it to save. You will see a "Zone is not associated with the current account" warning that you may ignore.
   :::

6. Go to the next page.
7. Add a name for your access policy (for example, "Allow employees access to preview URLs for my-worker").
8. In the **Configure rules** section, create a new rule with the **Emails** selector, or any other attributes which you wish to gate access to previews with.
9. Enter the emails you want to authorize. View [access policies](/cloudflare-one/policies/access/#selectors) to learn about configuring alternate rules.
10. Go to the next page.
11. Select **Add application**.

## Disabling Preview URLs

### Disabling Preview URLs in the dashboard

To disable Preview URLs for a Worker:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Go to **Workers & Pages** and in **Overview**, select your Worker.
3.
Go to **Settings** > **Domains & Routes**.
4. Under **Preview URLs**, select **Disable**.
5. Confirm you want to disable.

### Disabling Preview URLs in the [Wrangler configuration file](/workers/wrangler/configuration/)

:::note
Wrangler 3.91.0 or higher is required to use this feature.
:::

To disable Preview URLs for a Worker, include the following in your Worker's Wrangler file:

```toml
preview_urls = false
```

When you redeploy your Worker with this change, Preview URLs will be disabled.

:::caution
If you disable Preview URLs in the Cloudflare dashboard but do not update your Worker's Wrangler file with `preview_urls = false`, then Preview URLs will be re-enabled the next time you deploy your Worker with Wrangler.
:::

## Limitations

- Preview URLs are not generated for Workers that implement a [Durable Object](/durable-objects/).
- Preview URLs are not currently generated for [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) [user Workers](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#user-workers). This is a temporary limitation; we are working to remove it.
- You cannot currently configure Preview URLs to run on a subdomain other than [`workers.dev`](/workers/configuration/routing/workers-dev/).

---

# Secrets

URL: https://developers.cloudflare.com/workers/configuration/secrets/

import { Render } from "~/components";

## Background

Secrets are a type of binding that allow you to attach encrypted text values to your Worker. You cannot see secrets after you set them and can only access secrets via [Wrangler](/workers/wrangler/commands/#secret) or programmatically via the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters). Secrets are used for storing sensitive information like API keys and auth tokens.
Secrets are available on the [`env` parameter](/workers/runtime-apis/handlers/fetch/#parameters) passed to your Worker's [`fetch` event handler](/workers/runtime-apis/handlers/fetch/).

:::note[Secrets Store (beta)]
Secrets described on this page are defined and managed on a per-Worker level. If you want to use account-level secrets, refer to [Secrets Store](/secrets-store/).

Account-level secrets are configured on your Worker as a [Secrets Store binding](/secrets-store/integrations/workers/).
:::

## Local Development with Secrets

## Secrets on deployed Workers

### Adding secrets to your project

#### Via Wrangler

Secrets can be added through [`wrangler secret put`](/workers/wrangler/commands/#secret) or [`wrangler versions secret put`](/workers/wrangler/commands/#secret-put) commands.

`wrangler secret put` creates a new version of the Worker and deploys it immediately.

```sh
npx wrangler secret put <KEY>
```

If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret put` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).

:::note
Wrangler versions before 3.73.0 require you to specify a `--x-versions` flag.
:::

```sh
npx wrangler versions secret put <KEY>
```

#### Via the dashboard

To add a secret via the dashboard:

1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Add**.
5. Select the type **Secret**, input a **Variable name**, and input its **Value**. This secret will be made available to your Worker but the value will be hidden in Wrangler and the dashboard.
6. (Optional) To add more secrets, select **Add variable**.
7. Select **Deploy** to implement your changes.
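Whichever way a secret is added, reading it is identical to reading an environment variable: it appears on `env` at request time. A minimal sketch (the secret name `API_KEY` and upstream URL are hypothetical; in a real Worker the object would be the module's default export):

```js
// Build request headers that include the secret; kept separate for easy testing.
function authHeaders(env) {
	return { Authorization: `Bearer ${env.API_KEY}` }; // API_KEY is a hypothetical secret
}

// In a real Worker, this object would be `export default`.
const worker = {
	async fetch(request, env, ctx) {
		// Attach the secret when calling a hypothetical upstream API.
		return fetch("https://api.example.com/data", { headers: authHeaders(env) });
	},
};
```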
### Delete secrets from your project

#### Via Wrangler

Secrets can be deleted through [`wrangler secret delete`](/workers/wrangler/commands/#delete-1) or [`wrangler versions secret delete`](/workers/wrangler/commands/#secret-delete) commands.

`wrangler secret delete` creates a new version of the Worker and deploys it immediately.

```sh
npx wrangler secret delete <KEY>
```

If using [gradual deployments](/workers/configuration/versions-and-deployments/gradual-deployments/), instead use the `wrangler versions secret delete` command. This will only create a new version of the Worker, which can then be deployed using [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).

```sh
npx wrangler versions secret delete <KEY>
```

#### Via the dashboard

To delete a secret from your Worker project via the dashboard:

1. Log in to [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings**.
4. Under **Variables and Secrets**, select **Edit**.
5. In the **Edit** drawer, select **X** next to the secret you want to delete.
6. Select **Deploy** to implement your changes.
7. (Optional) Instead of using the edit drawer, you can click the delete icon next to the secret.

## Related resources

- [Wrangler secret commands](/workers/wrangler/commands/#secret) - Review the Wrangler commands to create, delete and list secrets.
- [Cloudflare Secrets Store](/secrets-store/) - Encrypt and store sensitive information as secrets that are securely reusable across your account.

---

# Smart Placement

URL: https://developers.cloudflare.com/workers/configuration/smart-placement/

import { WranglerConfig } from "~/components";

By default, [Workers](/workers/) and [Pages Functions](/pages/functions/) are invoked in a data center closest to where the request was received.
If you are running back-end logic in a Worker, it may be more performant to run that Worker closer to your back-end infrastructure rather than the end user. Smart Placement automatically places your workloads in an optimal location that minimizes latency and speeds up your applications.

## Background

The following example demonstrates how moving your Worker close to your back-end services could decrease application latency:

You have a user in Sydney, Australia who is accessing an application running on Workers. This application makes multiple round trips to a database located in Frankfurt, Germany in order to serve the user's request.

![A user located in Sydney, AU connecting to a Worker in the same region which then makes multiple round trips to a database located in Frankfurt, DE.](~/assets/images/workers/platform/workers-smart-placement-disabled.png)

The issue is the time that it takes the Worker to perform multiple round trips to the database. Instead of the request being processed close to the user, the Cloudflare network, with Smart Placement enabled, would process the request in a data center closest to the database.

![A user located in Sydney, AU connecting to a Worker in Frankfurt, DE which then makes multiple round trips to a database also located in Frankfurt, DE.](~/assets/images/workers/platform/workers-smart-placement-enabled.png)

## Understand how Smart Placement works

Smart Placement is enabled on a per-Worker basis. Once enabled, Smart Placement analyzes the [request duration](/workers/observability/metrics-and-analytics/#request-duration) of the Worker in different Cloudflare locations around the world on a regular basis.

Smart Placement decides where to run the Worker by comparing the estimated request duration in the location closest to where the request was received (the default location where the Worker would run) to a set of candidate locations around the world.
For each candidate location, Smart Placement considers the performance of the Worker in that location as well as the network latency added by forwarding the request to that location. If the estimated request duration in the best candidate location is significantly faster than the location where the request was received, the request will be forwarded to that candidate location. Otherwise, the Worker will run in the default location closest to where the request was received.

Smart Placement only considers candidate locations where the Worker has previously run, since the estimated request duration in each candidate location is based on historical data from the Worker running in that location. This means that Smart Placement cannot run the Worker in a location that it does not normally receive traffic from.

Smart Placement only affects the execution of [fetch event handlers](/workers/runtime-apis/handlers/fetch/). Smart Placement does not affect the execution of [RPC methods](/workers/runtime-apis/rpc/) or [named entrypoints](/workers/runtime-apis/bindings/service-bindings/rpc/#named-entrypoints). Workers without a fetch event handler will be ignored by Smart Placement. For Workers with both fetch and non-fetch event handlers, Smart Placement will only affect the execution of the fetch event handler.

Similarly, Smart Placement will not affect where [static assets](/workers/static-assets/) are served from. Static assets will continue to be served from the location nearest to the incoming request. If a Worker is invoked and your code retrieves assets via the [static assets binding](https://developers.cloudflare.com/workers/static-assets/binding/), then assets will be served from the location that your Worker runs in.

## Enable Smart Placement

Smart Placement is available to users on all Workers plans.

### Enable Smart Placement via Wrangler

To enable Smart Placement via Wrangler:

1.
Make sure that you have `wrangler@2.20.0` or later [installed](/workers/wrangler/install-and-update/).

2. Add the following to your Worker project's Wrangler file:

```toml
[placement]
mode = "smart"
```

3. Wait for Smart Placement to analyze your Worker. This process may take up to 15 minutes.

4. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration).

### Enable Smart Placement via the dashboard

To enable Smart Placement via the dashboard:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In **Account Home**, select **Workers & Pages**.
3. In **Overview**, select your Worker.
4. Select **Settings** > **General**.
5. Under **Placement**, choose **Smart**.
6. Wait for Smart Placement to analyze your Worker. Smart Placement requires consistent traffic to the Worker from multiple locations around the world to make a placement decision. The analysis process may take up to 15 minutes.
7. View your Worker's [request duration analytics](/workers/observability/metrics-and-analytics/#request-duration).

## Observability

### Placement Status

A Worker's metadata contains details about a Worker's placement status. Query your Worker's placement status through the following Workers API endpoint:

```bash
curl -X GET https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/workers/services/{WORKER_NAME} \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" | jq .
```

Possible placement states include:

- _(not present)_: The Worker has not been analyzed for Smart Placement yet. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `SUCCESS`: The Worker was successfully analyzed and will be optimized by Smart Placement.
The Worker will run in the Cloudflare location that minimizes expected request duration, which may be the default location closest to where the request was received or may be a faster location elsewhere in the world.
- `INSUFFICIENT_INVOCATIONS`: The Worker has not received enough requests to make a placement decision. Smart Placement requires consistent traffic to the Worker from multiple locations around the world. The Worker will always run in the default Cloudflare location closest to where the request was received.
- `UNSUPPORTED_APPLICATION`: Smart Placement began optimizing the Worker and measured the results, which showed that Smart Placement made the Worker slower. In response, Smart Placement reverted the placement decision. The Worker will always run in the default Cloudflare location closest to where the request was received, and Smart Placement will not analyze the Worker again until it's redeployed. This state is rare and accounts for less than 1% of Workers with Smart Placement enabled.

### Request Duration Analytics

Once Smart Placement is enabled, data about request duration gets collected. Request duration is measured at the data center closest to the end user.

By default, one percent (1%) of requests are not routed with Smart Placement. These requests serve as a baseline to compare to.

### `cf-placement` header

Once Smart Placement is enabled, Cloudflare adds a `cf-placement` header to all requests. This can be used to check whether a request has been routed with Smart Placement and where the Worker is processing the request (which is shown as the nearest airport code to the data center).

For example, the `cf-placement: remote-LHR` header's `remote` value indicates that the request was routed using Smart Placement to a Cloudflare data center near London.
The `cf-placement: local-EWR` header's `local` value indicates that the request was not routed using Smart Placement and the Worker was invoked in a data center closest to where the request was received, close to Newark Liberty International Airport (EWR). :::caution[Beta use only] We may remove the `cf-placement` header before Smart Placement enters general availability. ::: ## Best practices If you are building full-stack applications on Workers, we recommend splitting up the front-end and back-end logic into different Workers and using [Service Bindings](/workers/runtime-apis/bindings/service-bindings/) to connect your front-end and back-end Workers. ![Smart Placement and Service Bindings](~/assets/images/workers/platform/smart-placement-service-bindings.png) Enabling Smart Placement on your back-end Worker will invoke it close to your back-end service, while the front-end Worker serves requests close to the user. This architecture maintains fast, reactive front-ends while also improving latency when the back-end Worker is called. ## Give feedback on Smart Placement Smart Placement is in beta. To share your thoughts and experience with Smart Placement, join the [Cloudflare Developer Discord](https://discord.cloudflare.com). --- # Page Rules URL: https://developers.cloudflare.com/workers/configuration/workers-with-page-rules/ Page Rules trigger one or more actions whenever a request matches one of the URL patterns you define. Refer to [Page Rules](/rules/page-rules/) to learn more about configuring Page Rules. ## Page Rules with Workers Cloudflare acts as a [reverse proxy](https://www.cloudflare.com/learning/what-is-cloudflare/) to provide services, like Page Rules, to Internet properties. Your application's traffic will pass through a Cloudflare data center that is closest to the visitor. 
There are hundreds of these around the world, each of which is capable of running services like Workers and Page Rules. If your application is built on Workers and/or Pages, the [Cloudflare global network](https://www.cloudflare.com/learning/serverless/glossary/what-is-edge-computing/) acts as your origin server and responds to requests directly from the Cloudflare global network. When using Page Rules with Workers, the following workflow is applied. 1. Request arrives at Cloudflare data center. 2. Cloudflare decides if this request is a Worker route. Because this is a Worker route, Cloudflare evaluates and disables a number of features, including some that would be set by Page Rules. 3. Page Rules run as part of normal request processing with some features now disabled. 4. Worker executes. 5. Worker makes a same-zone or other-zone subrequest. Because this is a Worker route, Cloudflare disables a number of features, including some that would be set by Page Rules. Page Rules are evaluated both at the client-to-Worker request stage (step 2) and the Worker subrequest stage (step 5). If you are experiencing Page Rule errors when running Workers, contact your Cloudflare account team or [Cloudflare Support](/support/contacting-cloudflare-support/). 
## Affected Page Rules The following Page Rules may not work as expected when an incoming request is matched to a Worker route: * Always Online * [Always Use HTTPS](/workers/configuration/workers-with-page-rules/#always-use-https) * [Automatic HTTPS Rewrites](/workers/configuration/workers-with-page-rules/#automatic-https-rewrites) * [Browser Cache TTL](/workers/configuration/workers-with-page-rules/#browser-cache-ttl) * [Browser Integrity Check](/workers/configuration/workers-with-page-rules/#browser-integrity-check) * [Cache Deception Armor](/workers/configuration/workers-with-page-rules/#cache-deception-armor) * [Cache Level](/workers/configuration/workers-with-page-rules/#cache-level) * Disable Apps * [Disable Zaraz](/workers/configuration/workers-with-page-rules/#disable-zaraz) * [Edge Cache TTL](/workers/configuration/workers-with-page-rules/#edge-cache-ttl) * [Email Obfuscation](/workers/configuration/workers-with-page-rules/#email-obfuscation) * [Forwarding URL](/workers/configuration/workers-with-page-rules/#forwarding-url) * Host Header Override * [IP Geolocation Header](/workers/configuration/workers-with-page-rules/#ip-geolocation-header) * Mirage * [Origin Cache Control](/workers/configuration/workers-with-page-rules/#origin-cache-control) * [Rocket Loader](/workers/configuration/workers-with-page-rules/#rocket-loader) * [Security Level](/workers/configuration/workers-with-page-rules/#security-level) * [SSL](/workers/configuration/workers-with-page-rules/#ssl) This is because the default setting of these Page Rules will be disabled when Cloudflare recognizes that the request is headed to a Worker. :::caution[Testing] Due to ongoing changes to the Workers runtime, detailed documentation on how these rules will be affected is updated following testing. ::: To learn what these Page Rules do, refer to [Page Rules](/rules/page-rules/). 
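For example, if a Forwarding URL Page Rule does not fire on a Worker route, the equivalent redirect can be issued from the Worker itself. A minimal sketch, where the `/old-path` to `/new-path` mapping is a hypothetical stand-in for your own rule's pattern:

```javascript
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    // Hypothetical mapping; replace with the pattern your Page Rule used.
    if (url.pathname === "/old-path") {
      url.pathname = "/new-path";
      return Response.redirect(url.toString(), 301);
    }
    return new Response("OK");
  },
};

export default worker;
```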
:::note[Same zone versus other zone] A same zone subrequest is a request the Worker makes to an orange-clouded hostname in the same zone the Worker runs on. Depending on your DNS configuration, any request that falls outside that definition may be considered an other zone request by the Cloudflare network. :::

### Always Use HTTPS

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### Automatic HTTPS Rewrites

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Browser Cache TTL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Browser Integrity Check

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### Cache Deception Armor

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Cache Level

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Disable Zaraz

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Edge Cache TTL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Email Obfuscation

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Forwarding URL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Ignored   |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### IP Geolocation Header

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Origin Cache Control

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

### Rocket Loader

| Source | Target     | Behavior     |
| ------ | ---------- | ------------ |
| Client | Worker     | Rule Ignored |
| Worker | Same Zone  | Rule Ignored |
| Worker | Other Zone | Rule Ignored |

### Security Level

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Ignored   |
| Worker | Other Zone | Rule Ignored   |

### SSL

| Source | Target     | Behavior       |
| ------ | ---------- | -------------- |
| Client | Worker     | Rule Respected |
| Worker | Same Zone  | Rule Respected |
| Worker | Other Zone | Rule Ignored   |

--- # Frameworks URL: https://developers.cloudflare.com/workers/frameworks/ import { Badge, Description, DirectoryListing, InlineBadge, Render, TabItem, Tabs, PackageManagers, Feature, } from "~/components"; Run front-end websites — static or dynamic — directly on Cloudflare's global network. The following frameworks have support for Cloudflare Workers and the new [Workers Assets](/workers/static-assets/). Refer to the individual guides below for instructions on how to get started. 
:::note **Static Assets for Workers is currently in open beta.** If you are looking for a framework not on this list: - It may be supported in [Cloudflare Pages](/pages/). Refer to [Pages Frameworks guides](/pages/framework-guides/) for a full list. - Tell us which framework you would like to see supported on Workers in the [Cloudflare Developers Discord](https://discord.gg/dqgZUwcD). ::: --- # Dashboard URL: https://developers.cloudflare.com/workers/get-started/dashboard/ import { Render } from "~/components"; Follow this guide to create a Workers application using [the Cloudflare dashboard](https://dash.cloudflare.com). ## Prerequisites [Create a Cloudflare account](/fundamentals/setup/account/create-account/), if you have not already. ## Setup To get started with a new Workers application: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. Go to the **Workers & Pages** section of the dashboard. 3. Select [Create](https://dash.cloudflare.com/?to=/:account/workers-and-pages/create). From here, you can: * Select from the gallery of production-ready templates * Import an existing Git repository on your own account * Let Cloudflare clone and bootstrap a public repository containing a Workers application. 4. Once you've connected to your chosen [Git provider](/workers/ci-cd/builds/git-integration/github-integration/), configure your project and click `Deploy`. 5. Cloudflare will kick off a new build and deployment. Once deployed, preview your Worker at its provided `workers.dev` subdomain. ## Continue development Applications started in the dashboard are set up with Git to help kickstart your development workflow. 
To continue developing on your repository, you can run: ```bash # clone your repository locally git clone # be sure you are in the root directory cd ``` Now, you can preview and test your changes by [running Wrangler in your local development environment](/workers/local-development/). Once you are ready to deploy, you can run: ```bash # adds the files to git tracking git add . # commits the changes git commit -m "your message" # push the changes to your Git provider git push origin main ``` To do more: - Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration. - Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality. - Learn how to [test and debug](/workers/testing/) your Workers. - Read about [Workers limits and pricing](/workers/platform/). --- # CLI URL: https://developers.cloudflare.com/workers/get-started/guide/ import { Details, Render, PackageManagers } from "~/components"; Set up and deploy your first Worker with Wrangler, the Cloudflare Developer Platform CLI. This guide will instruct you through setting up and deploying your first Worker. ## Prerequisites ## 1. Create a new Worker project Open a terminal window and run C3 to create your Worker project. [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Now, you have a new project set up. Move into that project folder. ```sh cd my-first-worker ```
In your project directory, C3 will have generated the following: * `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file. * `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax. * `package.json`: A minimal Node dependencies configuration file. * `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json). * `node_modules`: Refer to [`npm` documentation `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
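A newly generated `wrangler.jsonc` looks roughly like the following. The exact values are placeholders: C3 fills them in from your answers at project creation.

```jsonc
{
  // Placeholder values; C3 generates these from your project setup answers.
  "name": "my-first-worker",
  "main": "src/index.js",
  "compatibility_date": "2024-01-01"
}
```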
In addition to creating new projects from C3 templates, C3 also supports creating new projects from existing Git repositories. To create a new project from an existing Git repository, open your terminal and run: ```sh npm create cloudflare@latest -- --template ``` `` may be any of the following: - `user/repo` (GitHub) - `git@github.com:user/repo` - `https://github.com/user/repo` - `user/repo/some-template` (subdirectories) - `user/repo#canary` (branches) - `user/repo#1234abcd` (commit hash) - `bitbucket:user/repo` (Bitbucket) - `gitlab:user/repo` (GitLab) Your existing template folder must contain the following files, at a minimum, to meet the requirements for Cloudflare Workers: - `package.json` - `wrangler.jsonc` [See sample Wrangler configuration](/workers/wrangler/configuration/#sample-wrangler-configuration) - `src/` containing a worker script referenced from `wrangler.jsonc`
## 2. Develop with Wrangler CLI C3 installs [Wrangler](/workers/wrangler/install-and-update/), the Workers command-line interface, in Workers projects by default. Wrangler lets you [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects. After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to preview your Worker locally during development. ```sh npx wrangler dev ``` If you have never used Wrangler before, it will open your web browser so you can log in to your Cloudflare account. Go to [http://localhost:8787](http://localhost:8787) to view your Worker.
If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation.
## 3. Write code With your new project generated and running, you can begin to write and edit your code. Find the `src/index.js` file. `index.js` will be populated with the code below: ```js title="Original index.js" export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ```
This code block consists of a few different parts. ```js title="Updated index.js" {1} export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` `export default` is JavaScript syntax required for defining [JavaScript modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#default_exports_versus_named_exports). Your Worker has to have a default export of an object, with properties corresponding to the events your Worker should handle. ```js title="index.js" {2} export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` This [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) will be called when your Worker receives an HTTP request. You can define additional event handlers in the exported object to respond to different types of events. For example, add a [`scheduled()` handler](/workers/runtime-apis/handlers/scheduled/) to respond to Worker invocations via a [Cron Trigger](/workers/configuration/cron-triggers/). Additionally, the `fetch` handler will always be passed three parameters: [`request`, `env` and `context`](/workers/runtime-apis/handlers/fetch/). ```js title="index.js" {3} export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` The Workers runtime expects `fetch` handlers to return a `Response` object or a Promise which resolves with a `Response` object. In this example, you will return a new `Response` with the string `"Hello World!"`.
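For instance, a `scheduled()` handler can sit alongside `fetch()` in the same exported object. This is a sketch: the cron expression that triggers it is configured separately in your Wrangler configuration, not in the code.

```javascript
const worker = {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
  // Sketch of a scheduled handler: invoked by a Cron Trigger, not by HTTP.
  async scheduled(controller, env, ctx) {
    console.log("cron fired at", new Date(controller.scheduledTime).toISOString());
  },
};

export default worker;
```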
Replace the content in your current `index.js` file with the content below, which changes the text output. ```js title="index.js" {3} export default { async fetch(request, env, ctx) { return new Response("Hello Worker!"); }, }; ``` Then, save the file and reload the page. Your Worker's output will have changed to the new text.
If the output for your Worker does not change, make sure that: 1. You saved the changes to `index.js`. 2. You have `wrangler dev` running. 3. You reloaded your browser.
## 4. Deploy your project Deploy your Worker via Wrangler to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/). ```sh npx wrangler deploy ``` If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up. Preview your Worker at `..workers.dev`.
If you see [`523` errors](/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) when pushing your `*.workers.dev` subdomain for the first time, wait a minute or so and the errors will resolve themselves.
## Next steps To do more: - Push your project to a GitHub or GitLab repository, then [connect to builds](/workers/ci-cd/builds/#get-started) to enable automatic builds and deployments. - Visit the [Cloudflare dashboard](https://dash.cloudflare.com/) for simpler editing. - Review our [Examples](/workers/examples/) and [Tutorials](/workers/tutorials/) for inspiration. - Set up [bindings](/workers/runtime-apis/bindings/) to allow your Worker to interact with other resources and unlock new functionality. - Learn how to [test and debug](/workers/testing/) your Workers. - Read about [Workers limits and pricing](/workers/platform/). --- # Getting started URL: https://developers.cloudflare.com/workers/get-started/ import { DirectoryListing, Render } from "~/components"; Build your first Worker. --- # Prompting URL: https://developers.cloudflare.com/workers/get-started/prompting/ import { Tabs, TabItem, GlossaryTooltip, Type, Badge, TypeScriptExample } from "~/components"; import { Code } from "@astrojs/starlight/components"; import BasePrompt from '~/content/partials/prompts/base-prompt.txt?raw'; One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines & examples that can dramatically improve output. Below is an extensive example prompt that can help you build applications using Cloudflare Workers and your preferred AI model. ### Build Workers using a prompt To use the prompt: 1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard 2. Paste into your AI tool of choice (for example OpenAI's ChatGPT or Anthropic's Claude) 3. 
Make sure to enter your part of the prompt at the end between the `` and `` tags. Base prompt: The prompt above adopts several best practices, including: * Using `` tags to structure the prompt * API and usage examples for products and use-cases * Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the model's response. * Recommendations on Cloudflare products to use for specific storage or state needs ### Additional uses You can use the prompt in several ways: * Within the user context window, with your own user prompt inserted between the `` tags (**easiest**) * As the `system` prompt for models that support system prompts * Adding it to the prompt library and/or file context within your preferred IDE: * Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai) * Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context. * Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat. * GitHub Copilot: create the [`.github/copilot-instructions.md`](https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot) file at the root of your project and add the prompt. :::note The prompt(s) here are examples and should be adapted to your specific use case. We'll continue to build out the prompts available here, including additional prompts for specific products. Depending on the model and user prompt, it may generate invalid code, configuration or other errors, and we recommend reviewing and testing the generated code before deploying it. 
::: ### Passing a system prompt If you are building an AI application that will itself generate code, you can additionally use the prompt above as a "system prompt", which will give the LLM additional information on how to structure the output code. For example: ```ts import OpenAI from "openai"; import workersPrompt from "./workersPrompt.md" // Llama 3.3 from Workers AI const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast" export default { async fetch(req: Request, env: Env, ctx: ExecutionContext) { const openai = new OpenAI({ apiKey: env.WORKERS_AI_API_KEY }); const stream = await openai.chat.completions.create({ messages: [ { role: "system", content: workersPrompt, }, { role: "user", // Imagine something big! content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ..." } ], model: PREFERRED_MODEL, stream: true, }); // Stream the response so we're not buffering the entire response in memory, // since it could be very large. const transformStream = new TransformStream(); const writer = transformStream.writable.getWriter(); const encoder = new TextEncoder(); (async () => { try { for await (const chunk of stream) { const content = chunk.choices[0]?.delta?.content || ''; await writer.write(encoder.encode(content)); } } finally { await writer.close(); } })(); return new Response(transformStream.readable, { headers: { 'Content-Type': 'text/plain; charset=utf-8', 'Transfer-Encoding': 'chunked' } }); } } ``` ## Use docs in your editor AI-enabled editors, including Cursor and Windsurf, can index documentation. Cursor includes the Cloudflare Developer Docs by default: you can use the [`@Docs`](https://docs.cursor.com/context/@-symbols/@-docs) command. In other editors, such as Zed or Windsurf, you can paste in URLs to add to your context. 
Use the _Copy Page_ button to paste in Cloudflare docs directly, or fetch docs for each product by appending `llms-full.txt` to the root URL - for example, `https://developers.cloudflare.com/agents/llms-full.txt` or `https://developers.cloudflare.com/workflows/llms-full.txt`. You can combine these with the Workers system prompt on this page to improve your editor or agent's understanding of the Workers APIs. ## Additional resources To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure: * OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models. * The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic * Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts * Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family. * GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat. --- # Quickstarts URL: https://developers.cloudflare.com/workers/get-started/quickstarts/ import { LinkButton, WorkerStarter } from "~/components"; Quickstarts are GitHub repositories that are designed to be a starting point for building a new Cloudflare Workers project. 
To start any of the projects below, run: ```sh npm create cloudflare@latest -- --template ``` - `new-project-name` - A folder with this name will be created with your new project inside, pre-configured to [your Workers account](/workers/wrangler/configuration/). - `template` - This is the URL of the GitHub repo starter, as below. Refer to the [create-cloudflare documentation](/pages/get-started/c3/) for a full list of possible values. ## Example Projects --- ## Frameworks --- ## Built with Workers Get inspiration from other sites and projects out there that were built with Cloudflare Workers. Built with Workers --- # 103 Early Hints URL: https://developers.cloudflare.com/workers/examples/103-early-hints/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/103-early-hints) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; `103` Early Hints is an HTTP status code designed to speed up content delivery. When enabled, Cloudflare can cache the `Link` headers marked with preload and/or preconnect from HTML pages and serve them in a `103` Early Hints response before reaching the origin server. Browsers can use these hints to fetch linked assets while waiting for the origin’s final response, dramatically improving page load speeds. To ensure Early Hints are enabled on your zone: 1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account and website. 2. Go to **Speed** > **Optimization** > **Content Optimization**. 3. Enable the **Early Hints** toggle to on. 
You can return `Link` headers from a Worker running on your zone to speed up your page load times.

```js
const CSS = "body { color: red; }";
const HTML = `
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req) {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
};
```
```ts
const CSS = "body { color: red; }";
const HTML = `
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

export default {
  async fetch(req): Promise<Response> {
    // If request is for test.css, serve the raw CSS
    if (/test\.css$/.test(req.url)) {
      return new Response(CSS, {
        headers: {
          "content-type": "text/css",
        },
      });
    } else {
      // Serve raw HTML using Early Hints for the CSS file
      return new Response(HTML, {
        headers: {
          "content-type": "text/html",
          link: "</test.css>; rel=preload; as=style",
        },
      });
    }
  },
} satisfies ExportedHandler;
```
```py
import re
from workers import Response

CSS = "body { color: red; }"
HTML = """
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
"""

def on_fetch(request):
    if re.search("test.css", request.url):
        headers = {"content-type": "text/css"}
        return Response(CSS, headers=headers)
    else:
        headers = {"content-type": "text/html", "link": "</test.css>; rel=preload; as=style"}
        return Response(HTML, headers=headers)
```
```ts
import { Hono } from "hono";

const app = new Hono();

const CSS = "body { color: red; }";
const HTML = `
<!doctype html>
<html>
  <head>
    <title>Early Hints test</title>
    <link rel="stylesheet" href="/test.css" />
  </head>
  <body>
    <h1>Early Hints test page</h1>
  </body>
</html>
`;

// Serve CSS file
app.get("/test.css", (c) => {
  return c.body(CSS, {
    headers: {
      "content-type": "text/css",
    },
  });
});

// Serve HTML with early hints
app.get("*", (c) => {
  return c.html(HTML, {
    headers: {
      link: "</test.css>; rel=preload; as=style",
    },
  });
});

export default app;
```
--- # Languages URL: https://developers.cloudflare.com/workers/languages/ import { DirectoryListing } from "~/components"; Workers is a polyglot platform, and provides first-class support for the following programming languages: Workers also supports [WebAssembly](/workers/runtime-apis/webassembly/) (abbreviated as "Wasm") — a binary format that many languages can be compiled to. This allows you to write Workers using programming languages beyond those listed above, including C, C++, Kotlin, Go and more. --- # A/B testing with same-URL direct access URL: https://developers.cloudflare.com/workers/examples/ab-testing/ import { TabItem, Tabs } from "~/components"; ```js const NAME = "myExampleWorkersABTest"; export default { async fetch(req) { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. 
res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, }; ``` ```ts const NAME = "myExampleWorkersABTest"; export default { async fetch(req): Promise<Response> { const url = new URL(req.url); // Enable Passthrough to allow direct access to control and test routes. if (url.pathname.startsWith("/control") || url.pathname.startsWith("/test")) return fetch(req); // Determine which group this requester is in. const cookie = req.headers.get("cookie"); if (cookie && cookie.includes(`${NAME}=control`)) { url.pathname = "/control" + url.pathname; } else if (cookie && cookie.includes(`${NAME}=test`)) { url.pathname = "/test" + url.pathname; } else { // If there is no cookie, this is a new client. Choose a group and set the cookie. const group = Math.random() < 0.5 ? "test" : "control"; // 50/50 split if (group === "control") { url.pathname = "/control" + url.pathname; } else { url.pathname = "/test" + url.pathname; } // Reconstruct response to avoid immutability let res = await fetch(url); res = new Response(res.body, res); // Set cookie to enable persistent A/B sessions. res.headers.append("Set-Cookie", `${NAME}=${group}; path=/`); return res; } return fetch(url); }, } satisfies ExportedHandler; ``` ```py import random from urllib.parse import urlparse, urlunparse from workers import Response, fetch NAME = "myExampleWorkersABTest" async def on_fetch(request): url = urlparse(request.url) # Uncomment below when testing locally # url = url._replace(netloc="example.com") if "localhost" in url.netloc else url # Enable Passthrough to allow direct access to control and test routes. if url.path.startswith("/control") or url.path.startswith("/test"): return fetch(urlunparse(url)) # Determine which group this requester is in. 
cookie = request.headers.get("cookie") if cookie and f'{NAME}=control' in cookie: url = url._replace(path="/control" + url.path) elif cookie and f'{NAME}=test' in cookie: url = url._replace(path="/test" + url.path) else: # If there is no cookie, this is a new client. Choose a group and set the cookie. group = "test" if random.random() < 0.5 else "control" if group == "control": url = url._replace(path="/control" + url.path) else: url = url._replace(path="/test" + url.path) # Reconstruct response to avoid immutability res = await fetch(urlunparse(url)) headers = dict(res.headers) headers["Set-Cookie"] = f'{NAME}={group}; path=/' return Response(res.body, headers=headers) return fetch(urlunparse(url)) ``` ```ts import { Hono } from "hono"; import { getCookie, setCookie } from "hono/cookie"; const app = new Hono(); const NAME = "myExampleWorkersABTest"; // Enable passthrough to allow direct access to control and test routes app.all("/control/*", (c) => fetch(c.req.raw)); app.all("/test/*", (c) => fetch(c.req.raw)); // Middleware to handle A/B testing logic app.use("*", async (c) => { const url = new URL(c.req.url); // Determine which group this requester is in const abTestCookie = getCookie(c, NAME); if (abTestCookie === "control") { // User is in control group url.pathname = "/control" + c.req.path; } else if (abTestCookie === "test") { // User is in test group url.pathname = "/test" + c.req.path; } else { // If there is no cookie, this is a new client // Choose a group and set the cookie (50/50 split) const group = Math.random() < 0.5 ? 
"test" : "control"; // Update URL path based on assigned group if (group === "control") { url.pathname = "/control" + c.req.path; } else { url.pathname = "/test" + c.req.path; } // Set cookie to enable persistent A/B sessions setCookie(c, NAME, group, { path: "/", }); } const res = await fetch(url); return c.body(res.body, res); }); export default app; ``` --- # Accessing the Cloudflare Object URL: https://developers.cloudflare.com/workers/examples/accessing-the-cloudflare-object/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/accessing-the-cloudflare-object) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(req) { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, }; ``` ```ts export default { async fetch(req): Promise<Response> { const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview." }; return new Response(JSON.stringify(data, null, 2), { headers: { "content-type": "application/json;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ``` ```ts import { Hono } from "hono"; const app = new Hono(); app.get("*", async (c) => { // Access the raw request to get the cf object const req = c.req.raw; // Check if the cf object is available const data = req.cf !== undefined ? req.cf : { error: "The `cf` object is not available inside the preview."
}; // Return the data formatted with 2-space indentation return c.json(data); }); export default app; ``` ```py import json from workers import Response from js import JSON def on_fetch(request): error = json.dumps({ "error": "The `cf` object is not available inside the preview." }) data = request.cf if request.cf is not None else error headers = {"content-type":"application/json"} return Response(JSON.stringify(data, None, 2), headers=headers) ``` --- # Aggregate requests URL: https://developers.cloudflare.com/workers/examples/aggregate-requests/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/aggregate-requests) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
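Before the snippets below, the fan-out/aggregate pattern they implement can be sketched in isolation. This is a minimal sketch using only the standard Fetch API globals (Node 18+, browsers, or the Workers runtime); `fakeFetch` is a hypothetical stand-in for the real `fetch(url)` calls so it runs without network access:

```javascript
// Minimal sketch of the fan-out/aggregate pattern: start all requests
// concurrently, then parse every JSON body concurrently.
async function aggregate(pendingResponses) {
  // Wait for every Response to arrive...
  const responses = await Promise.all(pendingResponses);
  // ...then wait for every body to be parsed as JSON.
  return Promise.all(responses.map((r) => r.json()));
}

// Hypothetical stand-in for fetch(url): resolves to a JSON Response.
const fakeFetch = (id, title) =>
  Promise.resolve(new Response(JSON.stringify({ id, title })));

aggregate([fakeFetch(1, "first"), fakeFetch(2, "second")]).then((results) => {
  console.log(JSON.stringify(results));
});
```

Because `Promise.all` starts both operations before awaiting either, total latency is roughly that of the slowest request rather than the sum of both.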
import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, }; ``` ```ts export default { async fetch(request) { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; const responses = await Promise.all([fetch(url1), fetch(url2)]); const results = await Promise.all(responses.map((r) => r.json())); const options = { headers: { "content-type": "application/json;charset=UTF-8" }, }; return new Response(JSON.stringify(results), options); }, } satisfies ExportedHandler; ``` ```ts import { Hono } from "hono"; const app = new Hono(); app.get("*", async (c) => { // someHost is set up to return JSON responses const someHost = "https://jsonplaceholder.typicode.com"; const url1 = someHost + "/todos/1"; const url2 = someHost + "/todos/2"; // Fetch both URLs concurrently const responses = await Promise.all([fetch(url1), fetch(url2)]); // Parse JSON responses concurrently const results = await Promise.all(responses.map((r) => r.json())); // Return aggregated results return c.json(results); }); export default app; ``` ```py from workers import Response, fetch import asyncio import json async def on_fetch(request): # some_host is set up to return JSON responses some_host = "https://jsonplaceholder.typicode.com" url1 = some_host + 
"/todos/1" url2 = some_host + "/todos/2" responses = await asyncio.gather(fetch(url1), fetch(url2)) results = await asyncio.gather(*(r.json() for r in responses)) headers = {"content-type": "application/json;charset=UTF-8"} return Response.json(results, headers=headers) ``` --- # Alter headers URL: https://developers.cloudflare.com/workers/examples/alter-headers/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/alter-headers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const response = await fetch("https://example.com"); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, }; ``` ```ts export default { async fetch(request): Promise<Response> { const response = await fetch(request); // Clone the response so that it's no longer immutable const newResponse = new Response(response.body, response); // Add a custom header with a value newResponse.headers.append( "x-workers-hello", "Hello from Cloudflare Workers", ); // Delete headers newResponse.headers.delete("x-header-to-delete"); newResponse.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header
newResponse.headers.set("x-header-to-change", "NewValue"); return newResponse; }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch async def on_fetch(request): response = await fetch("https://example.com") # Grab the response headers so they can be modified new_headers = response.headers # Add a custom header with a value new_headers["x-workers-hello"] = "Hello from Cloudflare Workers" # Delete headers if "x-header-to-delete" in new_headers: del new_headers["x-header-to-delete"] if "x-header2-to-delete" in new_headers: del new_headers["x-header2-to-delete"] # Adjust the value for an existing header new_headers["x-header-to-change"] = "NewValue" return Response(response.body, headers=new_headers) ``` ```ts import { Hono } from 'hono'; const app = new Hono(); app.use('*', async (c, next) => { // Process the request with the next middleware/handler await next(); // After the response is generated, we can modify its headers // Add a custom header with a value c.res.headers.append( "x-workers-hello", "Hello from Cloudflare Workers with Hono" ); // Delete headers c.res.headers.delete("x-header-to-delete"); c.res.headers.delete("x-header2-to-delete"); // Adjust the value for an existing header c.res.headers.set("x-header-to-change", "NewValue"); }); app.get('*', async (c) => { // Fetch content from example.com const response = await fetch("https://example.com"); // Return the response body with original headers // (our middleware will modify the headers before sending) return new Response(response.body, { headers: response.headers }); }); export default app; ``` You can also use the [`custom-headers-example` template](https://github.com/kristianfreeman/custom-headers-example) to deploy this code to your custom domain. 
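The key step in all of these variants is reconstructing the response before touching headers, because a `Response` returned by `fetch` has immutable headers. That clone-and-mutate step can be exercised on its own. This is a minimal sketch using only the standard Fetch API globals (Node 18+, browsers, or the Workers runtime); the header names are the same placeholders used above:

```javascript
// A response as it might come back from an origin. (A hand-constructed
// Response already has mutable headers; the reconstruction below is the
// step that matters for responses returned by fetch(), whose headers
// are immutable.)
const original = new Response("hello", {
  headers: {
    "x-header-to-change": "OldValue",
    "x-header-to-delete": "goodbye",
  },
});

// Rebuild the response so its headers can be modified. The second
// argument copies status, statusText, and headers from the original.
const newResponse = new Response(original.body, original);

// Add, delete, and overwrite headers on the mutable copy.
newResponse.headers.append("x-workers-hello", "Hello from Cloudflare Workers");
newResponse.headers.delete("x-header-to-delete");
newResponse.headers.set("x-header-to-change", "NewValue");
```

`Headers.get` is case-insensitive, so the modified values are visible regardless of how the header name is capitalized when read back.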
--- # Auth with headers URL: https://developers.cloudflare.com/workers/examples/auth-with-headers/ import { TabItem, Tabs } from "~/components"; :::caution[Caution when using in production] The example code contains a generic header key and value of `X-Custom-PSK` and `mypresharedkey`. To best protect your resources, change the header key and value in the Workers editor before saving your code. ::: ```js export default { async fetch(request) { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request. return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, }; ``` ```ts export default { async fetch(request): Promise<Response> { /** * @param {string} PRESHARED_AUTH_HEADER_KEY Custom header to check for key * @param {string} PRESHARED_AUTH_HEADER_VALUE Hard coded key value */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; const psk = request.headers.get(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Fetch request from origin. return fetch(request); } // Incorrect key supplied. Reject the request.
return new Response("Sorry, you have supplied an invalid key.", { status: 403, }); }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch async def on_fetch(request): PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK" PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey" psk = request.headers[PRESHARED_AUTH_HEADER_KEY] if psk == PRESHARED_AUTH_HEADER_VALUE: # Correct preshared header key supplied. Fetch request from origin. return fetch(request) # Incorrect key supplied. Reject the request. return Response("Sorry, you have supplied an invalid key.", status=403) ``` ```ts import { Hono } from 'hono'; const app = new Hono(); // Add authentication middleware app.use('*', async (c, next) => { /** * Define authentication constants */ const PRESHARED_AUTH_HEADER_KEY = "X-Custom-PSK"; const PRESHARED_AUTH_HEADER_VALUE = "mypresharedkey"; // Get the pre-shared key from the request header const psk = c.req.header(PRESHARED_AUTH_HEADER_KEY); if (psk === PRESHARED_AUTH_HEADER_VALUE) { // Correct preshared header key supplied. Continue to the next handler. await next(); } else { // Incorrect key supplied. Reject the request. return c.text("Sorry, you have supplied an invalid key.", 403); } }); // Handle all authenticated requests by passing through to origin app.all('*', async (c) => { return fetch(c.req.raw); }); export default app; ``` --- # HTTP Basic Authentication URL: https://developers.cloudflare.com/workers/examples/basic-auth/ import { TabItem, Tabs } from "~/components"; :::note This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/configuration/compatibility-flags/#nodejs-compatibility-flag). 
::: :::caution[Caution when using in production] This code is provided as a sample, and is not suitable for production use. Basic Authentication sends credentials unencrypted, and must be used with an HTTPS connection to be considered secure. For a production-ready authentication system, consider using [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/configure-apps/self-hosted-public-app/). ::: ```js /** * Shows how to restrict access using the HTTP Basic schema. * @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details * @param {string} a * @param {string} b * @returns {boolean} */ function timingSafeEqual(a, b) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } export default { /** * * @param {Request} request * @param {{PASSWORD: string}} env * @returns */ async fetch(request, env) { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? 
"password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username & password are split by the first colon. //=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, }; ``` ```ts /** * Shows how to restrict access using the HTTP Basic schema. 
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 * */ import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); /** * Protect against timing attacks by safely comparing values using `timingSafeEqual`. * Refer to https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#timingsafeequal for more details */ function timingSafeEqual(a: string, b: string) { const aBytes = encoder.encode(a); const bBytes = encoder.encode(b); if (aBytes.byteLength !== bBytes.byteLength) { // Strings must be the same length in order to compare // with crypto.subtle.timingSafeEqual return false; } return crypto.subtle.timingSafeEqual(aBytes, bBytes); } interface Env { PASSWORD: string; } export default { async fetch(request, env): Promise { const BASIC_USER = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const BASIC_PASS = env.PASSWORD ?? "password"; const url = new URL(request.url); switch (url.pathname) { case "/": return new Response("Anyone can access the homepage."); case "/logout": // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. return new Response("Logged out.", { status: 401 }); case "/admin": { // The "Authorization" header is sent when authenticated. const authorization = request.headers.get("Authorization"); if (!authorization) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. 
"WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } const [scheme, encoded] = authorization.split(" "); // The Authorization header must start with Basic, followed by a space. if (!encoded || scheme !== "Basic") { return new Response("Malformed authorization header.", { status: 400, }); } const credentials = Buffer.from(encoded, "base64").toString(); // The username and password are split by the first colon. //=> example: "username:password" const index = credentials.indexOf(":"); const user = credentials.substring(0, index); const pass = credentials.substring(index + 1); if ( !timingSafeEqual(BASIC_USER, user) || !timingSafeEqual(BASIC_PASS, pass) ) { return new Response("You need to login.", { status: 401, headers: { // Prompts the user for credentials. "WWW-Authenticate": 'Basic realm="my scope", charset="UTF-8"', }, }); } return new Response("🎉 You have private access!", { status: 200, headers: { "Cache-Control": "no-store", }, }); } } return new Response("Not Found.", { status: 404 }); }, } satisfies ExportedHandler<Env>; ``` ```rs use base64::prelude::*; use worker::*; #[event(fetch)] async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> { let basic_user = "admin"; // You will need an admin password. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ let basic_pass = match env.secret("PASSWORD") { Ok(s) => s.to_string(), Err(_) => "password".to_string(), }; let url = req.url()?; match url.path() { "/" => Response::ok("Anyone can access the homepage."), // Invalidate the "Authorization" header by returning a HTTP 401. // We do not send a "WWW-Authenticate" header, as this would trigger // a popup in the browser, immediately asking for credentials again. "/logout" => Response::error("Logged out.", 401), "/admin" => { // The "Authorization" header is sent when authenticated.
let authorization = req.headers().get("Authorization")?; if authorization == None { let mut headers = Headers::new(); // Prompts the user for credentials. headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let authorization = authorization.unwrap(); let auth: Vec<&str> = authorization.split(" ").collect(); let scheme = auth[0]; let encoded = auth[1]; // The Authorization header must start with Basic, followed by a space. if encoded == "" || scheme != "Basic" { return Response::error("Malformed authorization header.", 400); } let buff = BASE64_STANDARD.decode(encoded).unwrap(); let credentials = String::from_utf8_lossy(&buff); // The username & password are split by the first colon. //=> example: "username:password" let credentials: Vec<&str> = credentials.split(':').collect(); let user = credentials[0]; let pass = credentials[1]; if user != basic_user || pass != basic_pass { let mut headers = Headers::new(); // Prompts the user for credentials. headers.set( "WWW-Authenticate", "Basic realm='my scope', charset='UTF-8'", )?; return Ok(Response::error("You need to login.", 401)?.with_headers(headers)); } let mut headers = Headers::new(); headers.set("Cache-Control", "no-store")?; Ok(Response::ok("🎉 You have private access!")?.with_headers(headers)) } _ => Response::error("Not Found.", 404), } } ```` ```ts /** * Shows how to restrict access using the HTTP Basic schema with Hono. 
* @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication * @see https://tools.ietf.org/html/rfc7617 */ import { Hono } from "hono"; import { basicAuth } from "hono/basic-auth"; // Define environment interface interface Env { Bindings: { USERNAME: string; PASSWORD: string; }; } const app = new Hono(); // Public homepage - accessible to everyone app.get("/", (c) => { return c.text("Anyone can access the homepage."); }); // Admin route - protected with Basic Auth app.get( "/admin", async (c, next) => { const auth = basicAuth({ username: c.env.USERNAME, password: c.env.PASSWORD }) return await auth(c, next); }, (c) => { return c.text("🎉 You have private access!", 200, { "Cache-Control": "no-store", }); } ); export default app; ```` --- # Block on TLS URL: https://developers.cloudflare.com/workers/examples/block-on-tls/ import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, }; ``` ```ts export default { async fetch(request): Promise { try { const tlsVersion = request.cf.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("Please use TLS version 1.2 or higher.", { status: 403, }); } return fetch(request); } catch (err) { console.error( "request.cf does not exist in the previewer, only in production", ); return new Response(`Error in workers script ${err.message}`, { status: 500, }); } }, } satisfies ExportedHandler; ``` 
```ts import { Hono } from "hono"; const app = new Hono(); // Middleware to check TLS version app.use("*", async (c, next) => { // Access the raw request to get the cf object with TLS info const request = c.req.raw; const tlsVersion = request.cf?.tlsVersion; // Allow only TLS versions 1.2 and 1.3 if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return c.text("Please use TLS version 1.2 or higher.", 403); } await next(); }); app.onError((err, c) => { console.error( "request.cf does not exist in the previewer, only in production", ); return c.text(`Error in workers script: ${err.message}`, 500); }); app.get("/", async (c) => { return c.text(`TLS Version: ${c.req.raw.cf.tlsVersion}`); }); export default app; ``` ```py from workers import Response, fetch async def on_fetch(request): tls_version = request.cf.tlsVersion if tls_version not in ("TLSv1.2", "TLSv1.3"): return Response("Please use TLS version 1.2 or higher.", status=403) return fetch(request) ``` --- # Bulk origin override URL: https://developers.cloudflare.com/workers/examples/bulk-origin-proxy/ import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, }; ``` ```ts export default { async fetch(request): Promise<Response> { /** * An object with different URLs to fetch * @param {Object} ORIGINS */ const ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const url = new 
URL(request.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return fetch(url.toString(), request); } // Otherwise, process request as normal return fetch(request); }, } satisfies ExportedHandler; ``` ```ts import { Hono } from "hono"; import { proxy } from "hono/proxy"; // An object with different URLs to fetch const ORIGINS: Record<string, string> = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", }; const app = new Hono(); app.all("*", async (c) => { const url = new URL(c.req.url); // Check if incoming hostname is a key in the ORIGINS object if (url.hostname in ORIGINS) { const target = ORIGINS[url.hostname]; url.hostname = target; // If it is, proxy request to that third party origin return proxy(url, c.req.raw); } // Otherwise, process request as normal return proxy(c.req.raw); }); export default app; ``` ```py from js import fetch, URL async def on_fetch(request): # A dict with different URLs to fetch ORIGINS = { "starwarsapi.yourdomain.com": "swapi.dev", "google.yourdomain.com": "www.google.com", } url = URL.new(request.url) # Check if incoming hostname is a key in the ORIGINS object if url.hostname in ORIGINS: url.hostname = ORIGINS[url.hostname] # If it is, proxy request to that third party origin return fetch(url.toString(), request) # Otherwise, process request as normal return fetch(request) ``` --- # Bulk redirects URL: https://developers.cloudflare.com/workers/examples/bulk-redirects/ import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"],
["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, }; ``` ```ts export default { async fetch(request): Promise { const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", "https://" + externalHostname + "/redirect2"], ["/bulk2", "https://" + externalHostname + "/redirect3"], ["/bulk3", "https://" + externalHostname + "/redirect4"], ["/bulk4", "https://google.com"], ]); const requestURL = new URL(request.url); const path = requestURL.pathname; const location = redirectMap.get(path); if (location) { return Response.redirect(location, 301); } // If request not in map, return the original request return fetch(request); }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch from urllib.parse import urlparse async def on_fetch(request): external_hostname = "examples.cloudflareworkers.com" redirect_map = { "/bulk1": "https://" + external_hostname + "/redirect2", "/bulk2": "https://" + external_hostname + "/redirect3", "/bulk3": "https://" + external_hostname + "/redirect4", "/bulk4": "https://google.com", } url = urlparse(request.url) location = redirect_map.get(url.path, None) if location: return Response.redirect(location, 301) # If request not in map, return the original request return fetch(request) ``` ```ts import { Hono } from "hono"; const app = new Hono(); // Configure your redirects const externalHostname = "examples.cloudflareworkers.com"; const redirectMap = new Map([ ["/bulk1", `https://${externalHostname}/redirect2`], ["/bulk2", `https://${externalHostname}/redirect3`], ["/bulk3", `https://${externalHostname}/redirect4`], ["/bulk4", "https://google.com"], ]); // Middleware to 
handle redirects app.use("*", async (c, next) => { const path = c.req.path; const location = redirectMap.get(path); if (location) { // If path is in our redirect map, perform the redirect return c.redirect(location, 301); } // Otherwise, continue to the next handler await next(); }); // Default handler for requests that don't match any redirects app.all("*", async (c) => { // Pass through to origin return fetch(c.req.raw); }); export default app; ``` --- # Using the Cache API URL: https://developers.cloudflare.com/workers/examples/cache-api/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-api) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request, env, ctx) { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers. 
Setting s-maxage to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, }; ``` ```ts interface Env {} export default { async fetch(request, env, ctx): Promise<Response> { const cacheUrl = new URL(request.url); // Construct the cache key from the cache URL const cacheKey = new Request(cacheUrl.toString(), request); const cache = caches.default; // Check whether the value is already available in the cache // if not, you will need to fetch it from origin, and store it in the cache let response = await cache.match(cacheKey); if (!response) { console.log( `Response for request url: ${request.url} not present in cache. Fetching and caching request.`, ); // If not in cache, get it from origin response = await fetch(request); // Must use Response constructor to inherit all of response's fields response = new Response(response.body, response); // Cache API respects Cache-Control headers.
Setting s-maxage to 10 // will limit the response to be in cache for 10 seconds max // Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10"); ctx.waitUntil(cache.put(cacheKey, response.clone())); } else { console.log(`Cache hit for: ${request.url}.`); } return response; }, } satisfies ExportedHandler; ``` ```py from pyodide.ffi import create_proxy from js import Response, Request, URL, caches, fetch async def on_fetch(request, _env, ctx): cache_url = request.url # Construct the cache key from the cache URL cache_key = Request.new(cache_url, request) cache = caches.default # Check whether the value is already available in the cache # if not, you will need to fetch it from origin, and store it in the cache response = await cache.match(cache_key) if response is None: print(f"Response for request url: {request.url} not present in cache. Fetching and caching request.") # If not in cache, get it from origin response = await fetch(request) # Must use Response constructor to inherit all of response's fields response = Response.new(response.body, response) # Cache API respects Cache-Control headers.
Setting s-maxage to 10 # will limit the response to be in cache for 10 seconds max # Any changes made to the response here will be reflected in the cached value response.headers.append("Cache-Control", "s-maxage=10") ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) else: print(f"Cache hit for: {request.url}.") return response ``` ```ts import { Hono } from "hono"; import { cache } from "hono/cache"; const app = new Hono(); // We leverage Hono's built-in cache helper here app.get( "*", cache({ cacheName: "my-cache", cacheControl: "max-age=3600", // 1 hour }), ); // Add a route to handle the request if it's not in cache app.get("*", (c) => { return c.text("Hello from Hono!"); }); export default app; ``` --- # Cache POST requests URL: https://developers.cloudflare.com/workers/examples/cache-post-request/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-post-request) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
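Each variant below follows the same pattern: hash the POST body, append the digest to the path under a namespace, and use the result as a synthetic GET cache key so that POSTs with different payloads cache separately. A standalone sketch of just that key derivation, in Python for clarity (the `post_cache_key` helper and the `/posts` namespace mirror the examples but are illustrative, not a required API):

```python
import hashlib
from urllib.parse import urlsplit

def post_cache_key(url: str, body: str) -> str:
    """Build a synthetic GET cache key for a POST request.

    The SHA-256 hex digest of the body is appended to the path so that
    POSTs with different payloads map to different cache entries. The
    "/posts" prefix is an arbitrary namespace echoing the examples below.
    """
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}/posts{parts.path}{digest}"

# Identical bodies map to the same key; different bodies do not.
key_a = post_cache_key("https://example.com/api", '{"q": 1}')
key_b = post_cache_key("https://example.com/api", '{"q": 2}')
```

Because the digest is deterministic, replaying the same POST body always finds the same cache entry.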
import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request, env, ctx) { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, }; ``` ```ts interface Env {} export default { async fetch(request, env, ctx): Promise { async function sha256(message) { // encode as UTF-8 const msgBuffer = await new TextEncoder().encode(message); // hash the message const hashBuffer = await crypto.subtle.digest("SHA-256", msgBuffer); // convert bytes to hex string return [...new Uint8Array(hashBuffer)] .map((b) => b.toString(16).padStart(2, "0")) .join(""); } try { if (request.method.toUpperCase() === "POST") { const body = await request.clone().text(); // Hash the request body to use it as a part of the cache key const hash = await sha256(body); const cacheUrl = new URL(request.url); // Store the URL in cache by prepending the body's hash 
cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: request.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // Otherwise, fetch response to POST request from origin if (!response) { response = await fetch(request); ctx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } return fetch(request); } catch (e) { return new Response("Error thrown " + e.message); } }, } satisfies ExportedHandler; ``` ```py import hashlib from pyodide.ffi import create_proxy from js import fetch, URL, Headers, Request, caches async def on_fetch(request, _, ctx): if 'POST' in request.method: # Hash the request body to use it as a part of the cache key body = await request.clone().text() body_hash = hashlib.sha256(body.encode('UTF-8')).hexdigest() # Store the URL in cache by prepending the body's hash cache_url = URL.new(request.url) cache_url.pathname = "/posts" + cache_url.pathname + body_hash # Convert to a GET to be able to cache headers = Headers.new(dict(request.headers).items()) cache_key = Request.new(cache_url.toString(), method='GET', headers=headers) # Find the cache key in the cache cache = caches.default response = await cache.match(cache_key) # Otherwise, fetch response to POST request from origin if response is None: response = await fetch(request) ctx.waitUntil(create_proxy(cache.put(cache_key, response.clone()))) return response return fetch(request) ``` ```ts import { Hono } from "hono"; import { sha256 } from "hono/utils/crypto"; const app = new Hono(); // Middleware for caching POST requests app.post("*", async (c) => { try { // Get the request body const body = await c.req.raw.clone().text(); // Hash the request body to use it as part of the cache key const hash = await sha256(body); // Create the cache URL const cacheUrl = new URL(c.req.url); // Store the URL 
in cache by prepending the body's hash cacheUrl.pathname = "/posts" + cacheUrl.pathname + hash; // Convert to a GET to be able to cache const cacheKey = new Request(cacheUrl.toString(), { headers: c.req.raw.headers, method: "GET", }); const cache = caches.default; // Find the cache key in the cache let response = await cache.match(cacheKey); // If not in cache, fetch response to POST request from origin if (!response) { response = await fetch(c.req.raw); c.executionCtx.waitUntil(cache.put(cacheKey, response.clone())); } return response; } catch (e) { return c.text("Error thrown " + e.message, 500); } }); // Handle all other HTTP methods app.all("*", (c) => { return fetch(c.req.raw); }); export default app; ``` --- # Cache Tags using Workers URL: https://developers.cloudflare.com/workers/examples/cache-tags/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-tags) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? params.get("tags").split(",") : []; const url = params && params.has("uri") ? 
params.get("uri") : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, }; ``` ```ts export default { async fetch(request): Promise { const requestUrl = new URL(request.url); const params = requestUrl.searchParams; const tags = params && params.has("tags") ? params.get("tags").split(",") : []; const url = params && params.has("uri") ? params.get("uri") : ""; if (!url) { const errorObject = { error: "URL cannot be empty", }; return new Response(JSON.stringify(errorObject), { status: 400 }); } const init = { cf: { cacheTags: tags, }, }; return fetch(url, init) .then((result) => { const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return new Response(JSON.stringify(response), { status: result.status, }); }) .catch((err) => { const errorObject = { error: err.message, }; return new Response(JSON.stringify(errorObject), { status: 500 }); }); }, } satisfies ExportedHandler; ``` ```ts import { Hono } from "hono"; const app = new Hono(); app.all("*", async (c) => { const tags = c.req.query("tags") ? c.req.query("tags").split(",") : []; const uri = c.req.query("uri") ? 
c.req.query("uri") : ""; if (!uri) { return c.json({ error: "URL cannot be empty" }, 400); } const init = { cf: { cacheTags: tags, }, }; const result = await fetch(uri, init); const cacheStatus = result.headers.get("cf-cache-status"); const lastModified = result.headers.get("last-modified"); const response = { cache: cacheStatus, lastModified: lastModified, }; return c.json(response, result.status); }); app.onError((err, c) => { return c.json({ error: err.message }, 500); }); export default app; ``` ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): request_url = URL.new(request.url) params = request_url.searchParams tags = params["tags"].split(",") if "tags" in params else [] url = params["uri"] or None if url is None: error = {"error": "URL cannot be empty"} return Response.json(to_js(error), status=400) options = {"cf": {"cacheTags": tags}} result = await fetch(url, to_js(options)) cache_status = result.headers["cf-cache-status"] last_modified = result.headers["last-modified"] response = {"cache": cache_status, "lastModified": last_modified} return Response.json(to_js(response), status=result.status) ``` --- # Cache using fetch URL: https://developers.cloudflare.com/workers/examples/cache-using-fetch/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cache-using-fetch) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
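The examples that follow all build a custom cache key by stripping the query string and pinning the scheme to HTTPS, so that URL variants share one cache entry. That normalization step on its own can be sketched in Python (the `normalized_cache_key` helper name is illustrative):

```python
from urllib.parse import urlsplit

def normalized_cache_key(url: str) -> str:
    """Drop the query string and force https, so that, for example,
    /file?a=1 and /file?a=2 share a single cache entry, as in the
    examples below."""
    parts = urlsplit(url)
    return f"https://{parts.hostname}{parts.path}"
```

This is only safe when the query string does not change the response; otherwise distinct responses would collide under one key.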
import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, }; ``` ```ts export default { async fetch(request): Promise { const url = new URL(request.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; let response = await fetch(request, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, //Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable. 
response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }, } satisfies ExportedHandler; ``` ```ts import { Hono } from 'hono'; type Bindings = {}; const app = new Hono<{ Bindings: Bindings }>(); app.all('*', async (c) => { const url = new URL(c.req.url); // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here const someCustomKey = `https://${url.hostname}${url.pathname}`; // Fetch the request with custom cache settings let response = await fetch(c.req.raw, { cf: { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cacheTtl: 5, cacheEverything: true, // Enterprise only feature, see Cache API for other plans cacheKey: someCustomKey, }, }); // Reconstruct the Response object to make its headers mutable response = new Response(response.body, response); // Set cache control headers to cache on browser for 25 minutes response.headers.set("Cache-Control", "max-age=1500"); return response; }); export default app; ``` ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, Object, fetch def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): url = URL.new(request.url) # Only use the path for the cache key, removing query strings # and always store using HTTPS, for example, https://www.example.com/file-uri-here some_custom_key = f"https://{url.hostname}{url.pathname}" response = await fetch( request, cf=to_js({ # Always cache this fetch regardless of content type # for a max of 5 seconds before revalidating the resource "cacheTtl": 5, "cacheEverything": True, # Enterprise only feature, see Cache API for other plans "cacheKey": some_custom_key, }), ) # Reconstruct the Response object to make 
its headers mutable response = Response.new(response.body, response) # Set cache control headers to cache on browser for 25 minutes response.headers["Cache-Control"] = "max-age=1500" return response ``` ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let url = req.url()?; // Only use the path for the cache key, removing query strings // and always store using HTTPS, for example, https://www.example.com/file-uri-here let custom_key = format!( "https://{host}{path}", host = url.host_str().unwrap(), path = url.path() ); let request = Request::new_with_init( url.as_str(), &RequestInit { headers: req.headers().clone(), method: req.method(), cf: CfProperties { // Always cache this fetch regardless of content type // for a max of 5 seconds before revalidating the resource cache_ttl: Some(5), cache_everything: Some(true), // Enterprise only feature, see Cache API for other plans cache_key: Some(custom_key), ..CfProperties::default() }, ..RequestInit::default() }, )?; let mut response = Fetch::Request(request).send().await?; // Set cache control headers to cache on browser for 25 minutes let _ = response.headers_mut().set("Cache-Control", "max-age=1500"); Ok(response) } ``` ## Caching HTML resources ```js // Force Cloudflare to cache an asset fetch(event.request, { cf: { cacheEverything: true } }); ``` Setting the cache level to **Cache Everything** will override the default cacheability of the asset. For time-to-live (TTL), Cloudflare will still rely on headers set by the origin. ## Custom cache keys :::note This feature is available only to Enterprise customers. ::: A request's cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both.
For more about cache keys, refer to the [Create custom cache keys](/cache/how-to/cache-keys/#create-custom-cache-keys) documentation. ```js // Set cache key for this request to "some-string". fetch(event.request, { cf: { cacheKey: "some-string" } }); ``` Normally, Cloudflare computes the cache key for a request based on the request's URL. Sometimes, though, you may like different URLs to be treated as if they were the same for caching purposes. For example, if your website content is hosted from both Amazon S3 and Google Cloud Storage - you have the same content in both places, and you can use a Worker to randomly balance between the two. However, you do not want to end up caching two copies of your content. You could utilize custom cache keys to cache based on the original request URL rather than the subrequest URL: ```js export default { async fetch(request) { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, }; ``` ```ts export default { async fetch(request): Promise { let url = new URL(request.url); if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } let newRequest = new Request(url, request); return fetch(newRequest, { cf: { cacheKey: request.url }, }); }, } satisfies ExportedHandler; ``` ```ts import { Hono } from 'hono'; type Bindings = {}; const app = new Hono<{ Bindings: Bindings }>(); app.all('*', async (c) => { const originalUrl = c.req.url; const url = new URL(originalUrl); // Randomly select a storage backend if (Math.random() < 0.5) { url.hostname = "example.s3.amazonaws.com"; } else { url.hostname = "example.storage.googleapis.com"; } // Create a new request to the selected backend const newRequest = new Request(url, c.req.raw); // Fetch using the original 
URL as the cache key return fetch(newRequest, { cf: { cacheKey: originalUrl }, }); }); export default app; ``` Workers operating on behalf of different zones cannot affect each other's cache. You can only override cache keys when making requests within your own zone (in the above example `request.url` was the key stored), or requests to hosts that are not on Cloudflare. When making a request to another Cloudflare zone (for example, belonging to a different Cloudflare customer), that zone fully controls how its own content is cached within Cloudflare; you cannot override it. ## Override based on origin response code ```js // Force response to be cached for 86400 seconds for 200 status // codes, 1 second for 404, and do not cache 500 errors. fetch(request, { cf: { cacheTtlByStatus: { "200-299": 86400, 404: 1, "500-599": 0 } }, }); ``` This option is a version of the `cacheTtl` feature which chooses a TTL based on the response's status code and does not automatically set `cacheEverything: true`. If the response to this request has a status code that matches, Cloudflare will cache for the instructed time, and override cache directives sent by the origin. You can review [details on the `cacheTtl` feature on the Request page](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties). ## Customize cache behavior based on request file type Using custom cache keys and overrides based on response code, you can write a Worker that sets the TTL based on the response status code from origin, and request file type.
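The core of that approach is a first-match lookup table mapping path patterns to TTLs. A minimal Python sketch of just the lookup (the rule names, patterns, and TTL values are illustrative, chosen to echo the example that follows):

```python
import re

# (name, pattern, TTL in seconds); first match wins, values are illustrative.
CACHE_RULES = [
    ("manifest", re.compile(r"\.(m3u8|mpd)$"), 3),             # streaming manifests: very short TTL
    ("video", re.compile(r"\.(mp4|m4s|webm|mkv)$"), 31556952), # static media: roughly one year
    ("frontEnd", re.compile(r"\.(css|js)$"), 3600),            # front-end assets: one hour
]

def ttl_for(path: str, default: int = 0) -> int:
    """Return the TTL of the first rule whose pattern matches `path`."""
    for _name, pattern, ttl in CACHE_RULES:
        if pattern.search(path):
            return ttl
    return default
```

Ordering matters: more specific rules (like manifests) must come before broad media rules so they are not shadowed.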
The following example demonstrates how you might use this to cache requests for streaming media assets: ```js title="index.js" export default { async fetch(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Different asset types usually have different caching strategies. Most of the time, media content such as audio, video, and images that is not user-generated does not need to be updated often, so a long TTL is best. However, with HLS streaming, manifest files are usually set with short TTLs so that playback will not be affected, as these files contain the data that the player needs. By setting a caching strategy for each category of asset type in an object within an array, you can solve complex caching needs for your application's media content. const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3,
redirects: 2, clientError: 1, serverError: 0, }, ]; const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ?? {}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; }, }; ``` ```js title="index.js" addEventListener("fetch", (event) => { return event.respondWith(handleRequest(event.request)); }); async function handleRequest(request) { // Instantiate new URL to make it mutable const newRequest = new URL(request.url); // Set `const` to be used in the array later on const customCacheKey = `${newRequest.hostname}${newRequest.pathname}`; const queryCacheKey = `${newRequest.hostname}${newRequest.pathname}${newRequest.search}`; // Set all variables needed to manipulate Cloudflare's cache using the fetch API in the `cf` object. You will be passing these variables in the objects down below. 
const cacheAssets = [ { asset: "video", key: customCacheKey, regex: /(.*\/Video)|(.*\.(m4s|mp4|ts|avi|mpeg|mpg|mkv|bin|webm|vob|flv|m2ts|mts|3gp|m4v|wmv|qt))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "image", key: queryCacheKey, regex: /(.*\/Images)|(.*\.(jpg|jpeg|png|bmp|pict|tif|tiff|webp|gif|heif|exif|bat|bpg|ppm|pgn|pbm|pnm))/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "frontEnd", key: queryCacheKey, regex: /^.*\.(css|js)/, info: 0, ok: 3600, redirects: 30, clientError: 10, serverError: 0, }, { asset: "audio", key: customCacheKey, regex: /(.*\/Audio)|(.*\.(flac|aac|mp3|alac|aiff|wav|ogg|aiff|opus|ape|wma|3gp))/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "directPlay", key: customCacheKey, regex: /.*(\/Download)/, info: 0, ok: 31556952, redirects: 30, clientError: 10, serverError: 0, }, { asset: "manifest", key: customCacheKey, regex: /^.*\.(m3u8|mpd)/, info: 0, ok: 3, redirects: 2, clientError: 1, serverError: 0, }, ]; // The `.find` method is used to find elements in an array (`cacheAssets`), in this case by `regex`, which can be passed to the `.match` method to match on file extensions to cache, since there are many media types in the array. If you want to add more types, update the array. Refer to https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find for more information. const { asset, regex, ...cache } = cacheAssets.find(({ regex }) => newRequest.pathname.match(regex)) ??
{}; const newResponse = await fetch(request, { cf: { cacheKey: cache.key, polish: false, cacheEverything: true, cacheTtlByStatus: { "100-199": cache.info, "200-299": cache.ok, "300-399": cache.redirects, "400-499": cache.clientError, "500-599": cache.serverError, }, cacheTags: ["static"], }, }); const response = new Response(newResponse.body, newResponse); // For debugging purposes response.headers.set("debug", JSON.stringify(cache)); return response; } ``` ## Using the HTTP Cache API The `cache` mode can be set in `fetch` options. Currently Workers only support the `no-store` mode for controlling the cache. When `no-store` is supplied the cache is bypassed on the way to the origin and the request is not cacheable. ```js fetch(request, { cache: 'no-store'}); ``` --- # Conditional response URL: https://developers.cloudflare.com/workers/examples/conditional-response/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/conditional-response) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if (BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); return fetch(request); }, }; ``` ```ts export default { async fetch(request): Promise { const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; // Return a new Response based on a URL's hostname const url = new URL(request.url); if (BLOCKED_HOSTNAMES.includes(url.hostname)) { return new Response("Blocked Host", { status: 403 }); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(url.pathname)) { return new Response("Blocked Extension", { status: 403 }); } // On HTTP method if (request.method === "POST") { return new Response("Response for POST"); } // On User Agent const userAgent = request.headers.get("User-Agent") || ""; if (userAgent.includes("bot")) { return new Response("Block User Agent containing bot", { status: 403 }); } // On Client's IP address const clientIP = request.headers.get("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return new Response("Block the IP 1.2.3.4", { status: 403 }); } // On ASN if (request.cf && request.cf.asn == 64512) { return new Response("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = request.headers.get("CF-Device-Type"); if (device === "mobile") { return Response.redirect("https://mobile.example.com"); } console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); return fetch(request); }, } satisfies ExportedHandler; ``` ```py import re from workers import Response from urllib.parse import urlparse async def on_fetch(request): blocked_hostnames = ["nope.mywebsite.com", "bye.website.com"] url = urlparse(request.url) # Block on hostname if url.hostname in blocked_hostnames: return Response("Blocked Host", status=403) # On paths ending in .doc or .xml if re.search(r'\.(doc|xml)$', url.path): return Response("Blocked Extension", status=403) # On HTTP method if "POST" in request.method: return Response("Response for POST") # On User Agent user_agent = request.headers["User-Agent"] or "" if "bot" in user_agent: return Response("Block User Agent containing bot", status=403) # On Client's IP address client_ip = request.headers["CF-Connecting-IP"] if client_ip == "1.2.3.4": return Response("Block the IP 1.2.3.4", status=403) # On ASN if request.cf and request.cf.asn == 64512: return Response("Block the ASN 64512 response") # On Device Type # Requires Enterprise "CF-Device-Type Header" zone setting or # Page Rule with "Cache By Device Type" setting applied. 
device = request.headers["CF-Device-Type"] if device == "mobile": return Response.redirect("https://mobile.example.com") return fetch(request) ``` ```ts import { Hono } from "hono"; import { HTTPException } from "hono/http-exception"; const app = new Hono(); // Middleware to handle all conditions before reaching the main handler app.use("*", async (c, next) => { const request = c.req.raw; const BLOCKED_HOSTNAMES = ["nope.mywebsite.com", "bye.website.com"]; const hostname = new URL(c.req.url)?.hostname; // Return a new Response based on a URL's hostname if (BLOCKED_HOSTNAMES.includes(hostname)) { return c.text("Blocked Host", 403); } // Block paths ending in .doc or .xml based on the URL's file extension const forbiddenExtRegExp = new RegExp(/\.(doc|xml)$/); if (forbiddenExtRegExp.test(c.req.pathname)) { return c.text("Blocked Extension", 403); } // On User Agent const userAgent = c.req.header("User-Agent") || ""; if (userAgent.includes("bot")) { return c.text("Block User Agent containing bot", 403); } // On Client's IP address const clientIP = c.req.header("CF-Connecting-IP"); if (clientIP === "1.2.3.4") { return c.text("Block the IP 1.2.3.4", 403); } // On ASN if (request.cf && request.cf.asn === 64512) { return c.text("Block the ASN 64512 response"); } // On Device Type // Requires Enterprise "CF-Device-Type Header" zone setting or // Page Rule with "Cache By Device Type" setting applied. const device = c.req.header("CF-Device-Type"); if (device === "mobile") { return c.redirect("https://mobile.example.com"); } // Continue to the next handler await next(); }); // Handle POST requests differently app.post("*", (c) => { return c.text("Response for POST"); }); // Default handler for other methods app.get("*", async (c) => { console.error( "Getting Client's IP address, device type, and ASN are not supported in playground. 
Must test on a live worker", ); // Fetch the original request return fetch(c.req.raw); }); export default app; ``` --- # CORS header proxy URL: https://developers.cloudflare.com/workers/examples/cors-header-proxy/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cors-header-proxy) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = `

<h1>API GET without CORS Proxy</h1>
<p>Shows <code>TypeError: Failed to fetch</code> since CORS is misconfigured</p>
<p>Waiting</p>
<h1>API GET with CORS Proxy</h1>
<p>Waiting</p>
<h1>API POST with CORS Proxy + Preflight</h1>
<p>Waiting</p>
<!-- demo request script omitted -->
`; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, }; ```
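The `handleOptions` helper above treats an `OPTIONS` request as a CORS preflight only when all three preflight headers are present; a plain `OPTIONS` request (for example, from an HTTP client probing the API) answers with a simple `Allow` header instead. That check can be factored into a small predicate — a sketch; the `isPreflight` name is ours, not part of the example:

```javascript
// A CORS preflight is an OPTIONS request carrying all three of these
// headers, which browsers send automatically before a cross-origin
// request that is not "simple" (per the Fetch standard).
function isPreflight(headers) {
	return (
		headers.get("Origin") !== null &&
		headers.get("Access-Control-Request-Method") !== null &&
		headers.get("Access-Control-Request-Headers") !== null
	);
}

// Headers a browser would send on a preflight (values illustrative)
const preflight = new Headers({
	Origin: "https://app.example.com",
	"Access-Control-Request-Method": "POST",
	"Access-Control-Request-Headers": "content-type",
});
console.log(isPreflight(preflight)); // true
console.log(isPreflight(new Headers())); // false
```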
```ts export default { async fetch(request): Promise { const corsHeaders = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", }; // The URL for the remote third party API you want to fetch from // but does not implement CORS const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi"; // The endpoint you want the CORS reverse proxy to be on const PROXY_ENDPOINT = "/corsproxy/"; // The rest of this snippet for the demo page function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } const DEMO_PAGE = `

<h1>API GET without CORS Proxy</h1>
<p>Shows <code>TypeError: Failed to fetch</code> since CORS is misconfigured</p>
<p>Waiting</p>
<h1>API GET with CORS Proxy</h1>
<p>Waiting</p>
<h1>API POST with CORS Proxy + Preflight</h1>
<p>Waiting</p>
<!-- demo request script omitted -->
`; async function handleRequest(request) { const url = new URL(request.url); let apiUrl = url.searchParams.get("apiurl"); if (apiUrl == null) { apiUrl = API_URL; } // Rewrite request to point to API URL. This also makes the request mutable // so you can add the correct Origin header to make the API server think // that this request is not cross-site. request = new Request(apiUrl, request); request.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(request); // Recreate the response so you can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; } async function handleOptions(request) { if ( request.headers.get("Origin") !== null && request.headers.get("Access-Control-Request-Method") !== null && request.headers.get("Access-Control-Request-Headers") !== null ) { // Handle CORS preflight requests. return new Response(null, { headers: { ...corsHeaders, "Access-Control-Allow-Headers": request.headers.get( "Access-Control-Request-Headers", ), }, }); } else { // Handle standard OPTIONS request. return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } const url = new URL(request.url); if (url.pathname.startsWith(PROXY_ENDPOINT)) { if (request.method === "OPTIONS") { // Handle CORS preflight requests return handleOptions(request); } else if ( request.method === "GET" || request.method === "HEAD" || request.method === "POST" ) { // Handle requests to the API server return handleRequest(request); } else { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); } } else { return rawHtmlResponse(DEMO_PAGE); } }, } satisfies ExportedHandler; ```
```ts
import { Hono } from "hono";

// The URL for the remote third party API you want to fetch from
// but does not implement CORS
const API_URL = "https://examples.cloudflareworkers.com/demos/demoapi";
// The endpoint you want the CORS reverse proxy to be on
const PROXY_ENDPOINT = "/corsproxy/";

const app = new Hono();

// Demo page handler
app.get("*", async (c, next) => {
	// Only handle non-proxy requests with this handler
	if (c.req.path.startsWith(PROXY_ENDPOINT)) {
		return next();
	}
	// Create the demo page HTML
	const DEMO_PAGE = `

<h1>API GET without CORS Proxy</h1>
<p>Shows <code>TypeError: Failed to fetch</code> since CORS is misconfigured</p>
<p>Waiting</p>
<h1>API GET with CORS Proxy</h1>
<p>Waiting</p>
<h1>API POST with CORS Proxy + Preflight</h1>
<p>Waiting</p>
<!-- demo request script omitted -->
`; return c.html(DEMO_PAGE); }); // CORS proxy routes app.on(["GET", "HEAD", "POST", "OPTIONS"], PROXY_ENDPOINT + "*", async (c) => { const url = new URL(c.req.url); // Handle OPTIONS preflight requests if (c.req.method === "OPTIONS") { const origin = c.req.header("Origin"); const requestMethod = c.req.header("Access-Control-Request-Method"); const requestHeaders = c.req.header("Access-Control-Request-Headers"); if (origin && requestMethod && requestHeaders) { // Handle CORS preflight requests return new Response(null, { headers: { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", "Access-Control-Allow-Headers": requestHeaders, }, }); } else { // Handle standard OPTIONS request return new Response(null, { headers: { Allow: "GET, HEAD, POST, OPTIONS", }, }); } } // Handle actual requests let apiUrl = url.searchParams.get("apiurl") || API_URL; // Rewrite request to point to API URL const modifiedRequest = new Request(apiUrl, c.req.raw); modifiedRequest.headers.set("Origin", new URL(apiUrl).origin); let response = await fetch(modifiedRequest); // Recreate the response so we can modify the headers response = new Response(response.body, response); // Set CORS headers response.headers.set("Access-Control-Allow-Origin", url.origin); // Append to/Add Vary header so browser will cache response correctly response.headers.append("Vary", "Origin"); return response; }); // Handle method not allowed for proxy endpoint app.all(PROXY_ENDPOINT + "*", (c) => { return new Response(null, { status: 405, statusText: "Method Not Allowed", }); }); export default app; ```
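Each variant rewrites the `Origin` header to the target API's own origin before forwarding, so the upstream server sees what looks like a same-site request. The rewritten value is simply the origin component of the `apiurl` parameter — a minimal sketch, with a hypothetical target URL:

```javascript
// The proxy forwards requests with Origin set to the target's own origin,
// derived from the apiurl query parameter. (api.example.com is illustrative.)
const apiUrl = "https://api.example.com/v1/items?limit=10";
const upstreamOrigin = new URL(apiUrl).origin;
console.log(upstreamOrigin); // "https://api.example.com"
```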
```py from pyodide.ffi import to_js as _to_js from js import Response, URL, fetch, Object, Request def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) async def on_fetch(request): cors_headers = { "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET,HEAD,POST,OPTIONS", "Access-Control-Max-Age": "86400", } api_url = "https://examples.cloudflareworkers.com/demos/demoapi" proxy_endpoint = "/corsproxy/" def raw_html_response(html): return Response.new(html, headers=to_js({"content-type": "text/html;charset=UTF-8"})) demo_page = f'''

<h1>API GET without CORS Proxy</h1>
<p>Shows <code>TypeError: Failed to fetch</code> since CORS is misconfigured</p>
<p>Waiting</p>
<h1>API GET with CORS Proxy</h1>
<p>Waiting</p>
<h1>API POST with CORS Proxy + Preflight</h1>
<p>Waiting</p>
<!-- demo request script omitted -->
'''

    async def handle_request(request):
        url = URL.new(request.url)
        api_url2 = url.searchParams["apiurl"]
        if not api_url2:
            api_url2 = api_url
        request = Request.new(api_url2, request)
        request.headers["Origin"] = (URL.new(api_url2)).origin
        print(request.headers)
        response = await fetch(request)
        response = Response.new(response.body, response)
        response.headers["Access-Control-Allow-Origin"] = url.origin
        response.headers["Vary"] = "Origin"
        return response

    async def handle_options(request):
        if "Origin" in request.headers \
                and "Access-Control-Request-Method" in request.headers \
                and "Access-Control-Request-Headers" in request.headers:
            return Response.new(None, headers=to_js({
                **cors_headers,
                "Access-Control-Allow-Headers": request.headers["Access-Control-Request-Headers"]
            }))
        return Response.new(None, headers=to_js({"Allow": "GET, HEAD, POST, OPTIONS"}))

    url = URL.new(request.url)
    if url.pathname.startswith(proxy_endpoint):
        if request.method == "OPTIONS":
            return await handle_options(request)
        if request.method in ("GET", "HEAD", "POST"):
            return await handle_request(request)
        return Response.new(None, status=405, statusText="Method Not Allowed")
    return raw_html_response(demo_page)
```
```rs
use std::{borrow::Cow, collections::HashMap};
use worker::*;

fn raw_html_response(html: &str) -> Result<Response> {
    Response::from_html(html)
}

async fn handle_request(req: Request, api_url: &str) -> Result<Response> {
    let url = req.url().unwrap();
    let mut api_url2 = url
        .query_pairs()
        .find(|x| x.0 == Cow::Borrowed("apiurl"))
        .unwrap()
        .1
        .to_string();
    if api_url2 == String::from("") {
        api_url2 = api_url.to_string();
    }
    let mut request = req.clone_mut()?;
    *request.path_mut()? = api_url2.clone();
    if let url::Origin::Tuple(origin, _, _) = Url::parse(&api_url2)?.origin() {
        (*request.headers_mut()?).set("Origin", &origin)?;
    }
    let mut response = Fetch::Request(request).send().await?.cloned()?;
    let headers = response.headers_mut();
    if let url::Origin::Tuple(origin, _, _) = url.origin() {
        headers.set("Access-Control-Allow-Origin", &origin)?;
        headers.set("Vary", "Origin")?;
    }
    Ok(response)
}

fn handle_options(req: Request, cors_headers: &HashMap<&str, &str>) -> Result<Response> {
    let headers: Vec<_> = req.headers().keys().collect();
    if [
        "access-control-request-method",
        "access-control-request-headers",
        "origin",
    ]
    .iter()
    .all(|i| headers.contains(&i.to_string()))
    {
        let mut headers = Headers::new();
        for (k, v) in cors_headers.iter() {
            headers.set(k, v)?;
        }
        return Ok(Response::empty()?.with_headers(headers));
    }
    Response::empty()
}

#[event(fetch)]
async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> {
    let cors_headers = HashMap::from([
        ("Access-Control-Allow-Origin", "*"),
        ("Access-Control-Allow-Methods", "GET,HEAD,POST,OPTIONS"),
        ("Access-Control-Max-Age", "86400"),
    ]);
    let api_url = "https://examples.cloudflareworkers.com/demos/demoapi";
    let proxy_endpoint = "/corsproxy/";
    let demo_page = format!(
        r#"

<h1>API GET without CORS Proxy</h1>
<p>Shows <code>TypeError: Failed to fetch</code> since CORS is misconfigured</p>
<p>Waiting</p>
<h1>API GET with CORS Proxy</h1>
<p>Waiting</p>
<h1>API POST with CORS Proxy + Preflight</h1>
<p>Waiting</p>
<!-- demo request script omitted -->
"# ); if req.url()?.path().starts_with(proxy_endpoint) { match req.method() { Method::Options => return handle_options(req, &cors_headers), Method::Get | Method::Head | Method::Post => return handle_request(req, api_url).await, _ => return Response::error("Method Not Allowed", 405), } } raw_html_response(&demo_page) } ```
--- # Country code redirect URL: https://developers.cloudflare.com/workers/examples/country-code-redirect/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/country-code-redirect) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; // Remove this logging statement from your final output.
console.log( `Based on ${country}-based request, your user would go to ${url}.`, ); return Response.redirect(url); } else { return fetch("https://example.com", request); } }, }; ``` ```ts export default { async fetch(request): Promise { /** * A map of the URLs to redirect to * @param {Object} countryMap */ const countryMap = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; return Response.redirect(url); } else { return fetch(request); } }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch async def on_fetch(request): countries = { "US": "https://example.com/us", "EU": "https://example.com/eu", } # Use the cf object to obtain the country of the request # more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties country = request.cf.country if country and country in countries: url = countries[country] return Response.redirect(url) return fetch("https://example.com", request) ``` ```ts import { Hono } from 'hono'; // Define the RequestWithCf interface to add Cloudflare-specific properties interface RequestWithCf extends Request { cf: { country: string; // Other CF properties can be added as needed }; } const app = new Hono(); app.get('*', async (c) => { /** * A map of the URLs to redirect to */ const countryMap: Record = { US: "https://example.com/us", EU: "https://example.com/eu", }; // Cast the raw request to include Cloudflare-specific 
properties const request = c.req.raw as RequestWithCf; // Use the cf object to obtain the country of the request // more on the cf object: https://developers.cloudflare.com/workers/runtime-apis/request#incomingrequestcfproperties const country = request.cf.country; if (country != null && country in countryMap) { const url = countryMap[country]; // Redirect using Hono's redirect helper return c.redirect(url); } else { // Default fallback return fetch("https://example.com", request); } }); export default app; ``` --- # Setting Cron Triggers URL: https://developers.cloudflare.com/workers/examples/cron-trigger/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/cron-trigger) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
import { Render, TabItem, Tabs, WranglerConfig } from "~/components"; ```js export default { async scheduled(controller, env, ctx) { console.log("cron processed"); }, }; ``` ```ts interface Env {} export default { async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); }, }; ``` ```python from workers import handler @handler async def on_scheduled(controller, env, ctx): print("cron processed") ``` ```ts import { Hono } from 'hono'; interface Env {} // Create Hono app const app = new Hono<{ Bindings: Env }>(); // Regular routes for normal HTTP requests app.get('/', (c) => c.text('Hello World!')); // Export both the app and a scheduled function export default { // The Hono app handles regular HTTP requests fetch: app.fetch, // The scheduled function handles Cron triggers async scheduled( controller: ScheduledController, env: Env, ctx: ExecutionContext, ) { console.log("cron processed"); // You could also perform actions like: // - Fetching data from external APIs // - Updating KV or Durable Object storage // - Running maintenance tasks // - Sending notifications }, }; ``` ## Set Cron Triggers in Wrangler Refer to [Cron Triggers](/workers/configuration/cron-triggers/) for more information on how to add a Cron Trigger. If you are deploying with Wrangler, set the cron syntax (once per hour as shown below) by adding this to your Wrangler file: ```toml name = "worker" # ... [triggers] crons = ["0 * * * *"] ``` You also can set a different Cron Trigger for each [environment](/workers/wrangler/environments/) in your [Wrangler configuration file](/workers/wrangler/configuration/). You need to put the `[triggers]` table under your chosen environment. For example: ```toml [env.dev.triggers] crons = ["0 * * * *"] ``` ## Test Cron Triggers using Wrangler The recommended way of testing Cron Triggers is using Wrangler. 
Cron Triggers can be tested using Wrangler by passing in the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using a HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in. ```sh npx wrangler dev --test-scheduled curl "http://localhost:8787/__scheduled?cron=0+*+*+*+*" curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers ``` --- # Data loss prevention URL: https://developers.cloudflare.com/workers/examples/data-loss-prevention/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/data-loss-prevention) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions. * Respond with block if credit card data, and strip * emails and phone numbers from the response. * Execution will be limited to MIME type "text/*". 
*/ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, }; ``` ```ts export default { async fetch(request): Promise { const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } /** * Define personal data with regular expressions. * Respond with block if credit card data, and strip * emails and phone numbers from the response. 
* Execution will be limited to MIME type "text/*". */ const response = await fetch(request); // Return origin response, if response wasn’t text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } let text = await response.text(); // When debugging replace the response // from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = await sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(request); // Respond with a block if credit card, // otherwise replace sensitive text with `*`s return kind === "creditCard" ? 
new Response(kind + " found\nForbidden\n", { status: 403, statusText: "Forbidden", }) : new Response(text.replace(sensitiveRegex, "**********"), response); } } return new Response(text, response); }, } satisfies ExportedHandler; ``` ```py import re from datetime import datetime from js import Response, fetch, JSON, Headers # Alert a data breach by posting to a webhook server async def post_data_breach(request): some_hook_server = "https://webhook.flow-wolf.io/hook" headers = Headers.new({"content-type": "application/json"}.items()) body = JSON.stringify({ "ip": request.headers["cf-connecting-ip"], "time": datetime.now(), "request": request, }) return await fetch(some_hook_server, method="POST", headers=headers, body=body) async def on_fetch(request): debug = True # Define personal data with regular expressions. # Respond with block if credit card data, and strip # emails and phone numbers from the response. # Execution will be limited to MIME type "text/*". 
response = await fetch(request) # Return origin response, if response wasn’t text content_type = response.headers["content-type"] or "" if "text" not in content_type: return response text = await response.text() # When debugging replace the response from the origin with an email text = text.replace("You may use this", "me@example.com may use this") if debug else text sensitive_regex = [ ("credit_card", r'\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b'), ("email", r'\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b'), ("phone", r'\b07\d{9}\b'), ] for (kind, regex) in sensitive_regex: match = re.search(regex, text, flags=re.IGNORECASE) if match: # Alert a data breach await post_data_breach(request) # Respond with a block if credit card, otherwise replace sensitive text with `*`s card_resp = Response.new(kind + " found\nForbidden\n", status=403,statusText="Forbidden") sensitive_resp = Response.new(re.sub(regex, "*"*10, text, flags=re.IGNORECASE), response) return card_resp if kind == "credit_card" else sensitive_resp return Response.new(text, response) ``` ```ts import { Hono } from 'hono'; const app = new Hono(); // Configuration const DEBUG = true; const SOME_HOOK_SERVER = "https://webhook.flow-wolf.io/hook"; // Define sensitive data patterns const sensitiveRegexsMap = { creditCard: String.raw`\b(?:4[0-9]{12}(?:[0-9]{3})?|(?:5[1-5][0-9]{2}|222[1-9]|22[3-9][0-9]|2[3-6][0-9]{2}|27[01][0-9]|2720)[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|6(?:011|5[0-9]{2})[0-9]{12}|(?:2131|1800|35\d{3})\d{11})\b`, email: String.raw`\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`, phone: String.raw`\b07\d{9}\b`, }; /** * Alert a data breach by posting to a webhook server */ async function postDataBreach(request: Request) { return await fetch(SOME_HOOK_SERVER, { method: "POST", headers: { 
"content-type": "application/json;charset=UTF-8", }, body: JSON.stringify({ ip: request.headers.get("cf-connecting-ip"), time: Date.now(), request: request, }), }); } // Main middleware to handle data loss prevention app.use('*', async (c) => { // Fetch the origin response const response = await fetch(c.req.raw); // Return origin response if response wasn't text const contentType = response.headers.get("content-type") || ""; if (!contentType.toLowerCase().includes("text/")) { return response; } // Get the response text let text = await response.text(); // When debugging, replace the response from the origin with an email text = DEBUG ? text.replace("You may use this", "me@example.com may use this") : text; // Check for sensitive data for (const kind in sensitiveRegexsMap) { const sensitiveRegex = new RegExp(sensitiveRegexsMap[kind], "ig"); const match = sensitiveRegex.test(text); if (match) { // Alert a data breach await postDataBreach(c.req.raw); // Respond with a block if credit card, otherwise replace sensitive text with `*`s if (kind === "creditCard") { return c.text(`${kind} found\nForbidden\n`, 403); } else { return new Response(text.replace(sensitiveRegex, "**********"), { status: response.status, statusText: response.statusText, headers: response.headers, }); } } } // Return the modified response return new Response(text, { status: response.status, statusText: response.statusText, headers: response.headers, }); }); export default app; ``` --- # Debugging logs URL: https://developers.cloudflare.com/workers/examples/debugging-logs/ If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/debugging-logs)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
  async fetch(request, env, ctx) {
    // Service configured to receive logs
    const LOG_URL = "https://log-service.example.com/";

    async function postLog(data) {
      return await fetch(LOG_URL, {
        method: "POST",
        body: data,
      });
    }

    let response;

    try {
      response = await fetch(request);
      if (!response.ok && !response.redirected) {
        const body = await response.text();
        throw new Error(
          "Bad response at origin. Status: " +
            response.status +
            " Body: " +
            // Ensure the string is small enough to be a header
            body.trim().substring(0, 10),
        );
      }
    } catch (err) {
      // Without ctx.waitUntil(), your fetch() to Cloudflare's
      // logging service may or may not complete
      ctx.waitUntil(postLog(err.toString()));
      const stack = JSON.stringify(err.stack) || err;
      // Copy the response and initialize body to the stack trace
      response = new Response(stack, response);
      // Add the error stack into a header to find out what happened
      response.headers.set("X-Debug-stack", stack);
      response.headers.set("X-Debug-err", err);
    }
    return response;
  },
};
```

```ts
interface Env {}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    // Service configured to receive logs
    const LOG_URL = "https://log-service.example.com/";

    async function postLog(data) {
      return await fetch(LOG_URL, {
        method: "POST",
        body: data,
      });
    }

    let response;

    try {
      response = await fetch(request);
      if (!response.ok && !response.redirected) {
        const body = await response.text();
        throw new Error(
          "Bad response at origin. Status: " +
            response.status +
            " Body: " +
            // Ensure the string is small enough to be a header
            body.trim().substring(0, 10),
        );
      }
    } catch (err) {
      // Without ctx.waitUntil(), your fetch() to Cloudflare's
      // logging service may or may not complete
      ctx.waitUntil(postLog(err.toString()));
      const stack = JSON.stringify(err.stack) || err;
      // Copy the response and initialize body to the stack trace
      response = new Response(stack, response);
      // Add the error stack into a header to find out what happened
      response.headers.set("X-Debug-stack", stack);
      response.headers.set("X-Debug-err", err);
    }
    return response;
  },
} satisfies ExportedHandler<Env>;
```

```py
import json
import traceback
from pyodide.ffi import create_once_callable
from js import Response, fetch, Headers

async def on_fetch(request, _env, ctx):
    # Service configured to receive logs
    log_url = "https://log-service.example.com/"

    async def post_log(data):
        return await fetch(log_url, method="POST", body=data)

    response = await fetch(request)
    try:
        if not response.ok and not response.redirected:
            body = await response.text()
            # Simulating an error. Ensure the string is small enough to be a header
            raise Exception(f"Bad response at origin. Status:{response.status} Body:{body.strip()[:10]}")
    except Exception as e:
        # Without ctx.waitUntil(), your fetch() to Cloudflare's
        # logging service may or may not complete
        ctx.waitUntil(create_once_callable(post_log(e)))
        stack = json.dumps(traceback.format_exc()) or e
        # Copy the response and add to header
        response = Response.new(stack, response)
        response.headers["X-Debug-stack"] = stack
        response.headers["X-Debug-err"] = e
    return response
```

```ts
import { Hono } from 'hono';

// Define the environment with appropriate types
interface Env {}

const app = new Hono<{ Bindings: Env }>();

// Service configured to receive logs
const LOG_URL = "https://log-service.example.com/";

// Function to post logs to an external service
async function postLog(data: string) {
  return await fetch(LOG_URL, {
    method: "POST",
    body: data,
  });
}

// Middleware to handle error logging
app.use('*', async (c, next) => {
  try {
    // Process the request with the next handler
    await next();

    // After processing, check if the response indicates an error
    if (c.res && (!c.res.ok && !c.res.redirected)) {
      const body = await c.res.clone().text();
      throw new Error(
        "Bad response at origin. Status: " +
          c.res.status +
          " Body: " +
          // Ensure the string is small enough to be a header
          body.trim().substring(0, 10)
      );
    }
  } catch (err) {
    // Without waitUntil, the fetch to the logging service may not complete
    c.executionCtx.waitUntil(
      postLog(err.toString())
    );

    // Get the error stack or error itself
    const stack = JSON.stringify(err.stack) || err.toString();

    // Create a new response with the error information
    const response = c.res
      ? new Response(stack, { status: c.res.status, headers: c.res.headers })
      : new Response(stack, { status: 500 });

    // Add debug headers
    response.headers.set("X-Debug-stack", stack);
    response.headers.set("X-Debug-err", err.toString());

    // Set the modified response
    c.res = response;
  }
});

// Default route handler that passes requests through
app.all('*', async (c) => {
  return fetch(c.req.raw);
});

export default app;
```

---

# Cookie parsing

URL: https://developers.cloudflare.com/workers/examples/extract-cookie-value/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/extract-cookie-value)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
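The examples below rely on a parsing library or framework helper, but the `Cookie` header itself is just a `;`-separated list of `name=value` pairs. As a rough, dependency-free sketch of what those helpers do (the `getCookieValue` function is hypothetical, not part of any API used here):

```javascript
// Minimal sketch of cookie-header parsing: split the header on ";",
// then split each pair on the first "=" to find the named cookie.
// `getCookieValue` is a hypothetical helper for illustration only.
function getCookieValue(cookieHeader, name) {
  for (const pair of (cookieHeader || "").split(";")) {
    const [key, ...rest] = pair.trim().split("=");
    if (key === name) {
      // Values may themselves contain "=", so rejoin the remainder
      return decodeURIComponent(rest.join("="));
    }
  }
  return null;
}
```

Real-world cookie parsing has more edge cases (quoted values, encoding rules), which is why the examples below lean on the `cookie` package or Hono's built-in helper instead.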
import { TabItem, Tabs } from "~/components";

```js
import { parse } from "cookie";

export default {
  async fetch(request) {
    // The name of the cookie
    const COOKIE_NAME = "__uid";
    const cookie = parse(request.headers.get("Cookie") || "");

    if (cookie[COOKIE_NAME] != null) {
      // Respond with the cookie value
      return new Response(cookie[COOKIE_NAME]);
    }
    return new Response("No cookie with name: " + COOKIE_NAME);
  },
};
```

```ts
import { parse } from "cookie";

export default {
  async fetch(request): Promise<Response> {
    // The name of the cookie
    const COOKIE_NAME = "__uid";
    const cookie = parse(request.headers.get("Cookie") || "");

    if (cookie[COOKIE_NAME] != null) {
      // Respond with the cookie value
      return new Response(cookie[COOKIE_NAME]);
    }
    return new Response("No cookie with name: " + COOKIE_NAME);
  },
} satisfies ExportedHandler;
```

```py
from http.cookies import SimpleCookie
from workers import Response

async def on_fetch(request):
    # Name of the cookie
    cookie_name = "__uid"
    cookies = SimpleCookie(request.headers["Cookie"] or "")

    if cookie_name in cookies:
        # Respond with cookie value
        return Response(cookies[cookie_name].value)

    return Response("No cookie with name: " + cookie_name)
```

```ts
import { Hono } from 'hono';
import { getCookie } from 'hono/cookie';

const app = new Hono();

app.get('*', (c) => {
  // The name of the cookie
  const COOKIE_NAME = "__uid";

  // Get the specific cookie value using Hono's cookie helper
  const cookieValue = getCookie(c, COOKIE_NAME);

  if (cookieValue) {
    // Respond with the cookie value
    return c.text(cookieValue);
  }

  return c.text("No cookie with name: " + COOKIE_NAME);
});

export default app;
```

:::note[External dependencies]
This example requires the npm package [`cookie`](https://www.npmjs.com/package/cookie) to be installed in your JavaScript project. The Hono example uses the built-in cookie utilities provided by Hono, so no external dependencies are needed for that implementation.
:::

---

# Fetch JSON

URL: https://developers.cloudflare.com/workers/examples/fetch-json/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-json)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
  async fetch(request, env, ctx) {
    const url = "https://jsonplaceholder.typicode.com/todos/1";

    // gatherResponse returns both content-type & response body as a string
    async function gatherResponse(response) {
      const { headers } = response;
      const contentType = headers.get("content-type") || "";
      if (contentType.includes("application/json")) {
        return { contentType, result: JSON.stringify(await response.json()) };
      }
      return { contentType, result: await response.text() };
    }

    const response = await fetch(url);
    const { contentType, result } = await gatherResponse(response);

    const options = { headers: { "content-type": contentType } };
    return new Response(result, options);
  },
};
```

```ts
interface Env {}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const url = "https://jsonplaceholder.typicode.com/todos/1";

    // gatherResponse returns both content-type & response body as a string
    async function gatherResponse(response) {
      const { headers } = response;
      const contentType = headers.get("content-type") || "";
      if (contentType.includes("application/json")) {
        return { contentType, result: JSON.stringify(await response.json()) };
      }
      return { contentType, result: await response.text() };
    }

    const response = await fetch(url);
    const { contentType, result } = await gatherResponse(response);

    const options = { headers: { "content-type": contentType } };
    return new Response(result, options);
  },
} satisfies ExportedHandler<Env>;
```

```py
from workers import Response, fetch
import json

async def on_fetch(request):
    url = "https://jsonplaceholder.typicode.com/todos/1"

    # gather_response returns both content-type & response body as a string
    async def gather_response(response):
        headers = response.headers
        content_type = headers["content-type"] or ""
        if "application/json" in content_type:
            return (content_type, json.dumps(await response.json()))
        return (content_type, await response.text())

    response = await fetch(url)
    content_type, result = await gather_response(response)

    headers = {"content-type": content_type}
    return Response(result, headers=headers)
```

```ts
import { Hono } from 'hono';

type Env = {};

const app = new Hono<{ Bindings: Env }>();

app.get('*', async (c) => {
  const url = "https://jsonplaceholder.typicode.com/todos/1";

  // gatherResponse returns both content-type & response body as a string
  async function gatherResponse(response: Response) {
    const { headers } = response;
    const contentType = headers.get("content-type") || "";
    if (contentType.includes("application/json")) {
      return { contentType, result: JSON.stringify(await response.json()) };
    }
    return { contentType, result: await response.text() };
  }

  const response = await fetch(url);
  const { contentType, result } = await gatherResponse(response);

  return new Response(result, {
    headers: { "content-type": contentType },
  });
});

export default app;
```

---

# Fetch HTML

URL: https://developers.cloudflare.com/workers/examples/fetch-html/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/fetch-html)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { Render, TabItem, Tabs } from "~/components";

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    /**
     * Replace `remote` with the host you wish to send requests to
     */
    const remote = "https://example.com";
    return await fetch(remote, request);
  },
};
```

```py
from js import fetch

async def on_fetch(request):
    # Replace `remote` with the host you wish to send requests to
    remote = "https://example.com"
    return await fetch(remote, request)
```

```ts
import { Hono } from 'hono';

const app = new Hono();

app.all('*', async (c) => {
  /**
   * Replace `remote` with the host you wish to send requests to
   */
  const remote = "https://example.com";
  // Forward the request to the remote server
  return await fetch(remote, c.req.raw);
});

export default app;
```

---

# Geolocation: Weather application

URL: https://developers.cloudflare.com/workers/examples/geolocation-app-weather/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-app-weather)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
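Each implementation below builds the WAQI request URL the same way: it appends `geo:<latitude>;<longitude>/?token=<token>` to the feed base URL. As a standalone sketch of that construction (the `buildWeatherEndpoint` name and the sample coordinates are illustrative only):

```javascript
// Sketch of the endpoint construction used in the examples below:
// the WAQI "geo" feed takes latitude;longitude in the path and an
// API token as a query parameter.
function buildWeatherEndpoint(latitude, longitude, token) {
  return `https://api.waqi.info/feed/geo:${latitude};${longitude}/?token=${token}`;
}
```

In the Workers examples, the coordinates come from `request.cf.latitude` and `request.cf.longitude`, which Cloudflare attaches to every incoming request.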
import { TabItem, Tabs } from "~/components";

```js
export default {
  async fetch(request) {
    let endpoint = "https://api.waqi.info/feed/geo:";
    const token = ""; // Use a token from https://aqicn.org/api/

    let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;
    let html_content = "<h1>Weather 🌦</h1>";

    const latitude = request.cf.latitude;
    const longitude = request.cf.longitude;
    endpoint += `${latitude};${longitude}/?token=${token}`;
    const init = {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    };

    const response = await fetch(endpoint, init);
    const content = await response.json();

    html_content += `<p>This is a demo using Workers geolocation data.</p>`;
    html_content += `<p>You are located at: ${latitude},${longitude}.</p>`;
    html_content += `<p>Based off sensor data from ${content.data.city.name}:</p>`;
    html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`;
    html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`;
    html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`;
    html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`;

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Weather</title>
        <style>${html_style}</style>
      </head>
      <body>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
};
```
```ts
export default {
  async fetch(request): Promise<Response> {
    let endpoint = "https://api.waqi.info/feed/geo:";
    const token = ""; // Use a token from https://aqicn.org/api/

    let html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;
    let html_content = "<h1>Weather 🌦</h1>";

    const latitude = request.cf.latitude;
    const longitude = request.cf.longitude;
    endpoint += `${latitude};${longitude}/?token=${token}`;
    const init = {
      headers: {
        "content-type": "application/json;charset=UTF-8",
      },
    };

    const response = await fetch(endpoint, init);
    const content = await response.json();

    html_content += `<p>This is a demo using Workers geolocation data.</p>`;
    html_content += `<p>You are located at: ${latitude},${longitude}.</p>`;
    html_content += `<p>Based off sensor data from ${content.data.city.name}:</p>`;
    html_content += `<p>The AQI level is: ${content.data.aqi}.</p>`;
    html_content += `<p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>`;
    html_content += `<p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>`;
    html_content += `<p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>`;

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Weather</title>
        <style>${html_style}</style>
      </head>
      <body>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
} satisfies ExportedHandler;
```
```ts
import { Hono } from 'hono';
import { html } from 'hono/html';

type Bindings = {};

interface WeatherApiResponse {
  data: {
    aqi: number;
    city: {
      name: string;
      url: string;
    };
    iaqi: {
      no2?: { v: number };
      o3?: { v: number };
      t?: { v: number };
    };
  };
}

const app = new Hono<{ Bindings: Bindings }>();

app.get('*', async (c) => {
  // Get API endpoint
  let endpoint = "https://api.waqi.info/feed/geo:";
  const token = ""; // Use a token from https://aqicn.org/api/

  // Define styles
  const html_style = `body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}`;

  // Get geolocation from Cloudflare request
  const req = c.req.raw;
  const latitude = req.cf?.latitude;
  const longitude = req.cf?.longitude;

  // Create complete API endpoint with coordinates
  endpoint += `${latitude};${longitude}/?token=${token}`;

  // Fetch weather data
  const init = {
    headers: {
      "content-type": "application/json;charset=UTF-8",
    },
  };
  const response = await fetch(endpoint, init);
  const content = await response.json() as WeatherApiResponse;

  // Build HTML content
  const weatherContent = html`
    <h1>Weather 🌦</h1>
    <p>This is a demo using Workers geolocation data.</p>
    <p>You are located at: ${latitude},${longitude}.</p>
    <p>Based off sensor data from ${content.data.city.name}:</p>
    <p>The AQI level is: ${content.data.aqi}.</p>
    <p>The NO2 level is: ${content.data.iaqi.no2?.v}.</p>
    <p>The O3 level is: ${content.data.iaqi.o3?.v}.</p>
    <p>The temperature is: ${content.data.iaqi.t?.v}°C.</p>
  `;

  // Complete HTML document
  const htmlDocument = html`<!DOCTYPE html>
    <head>
      <title>Geolocation: Weather</title>
      <style>${html_style}</style>
    </head>
    <body>
      ${weatherContent}
    </body>`;

  // Return HTML response
  return c.html(htmlDocument);
});

export default app;
```
```py
from workers import Response, fetch

async def on_fetch(request):
    endpoint = "https://api.waqi.info/feed/geo:"
    token = ""  # Use a token from https://aqicn.org/api/
    html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f}"
    html_content = "<h1>Weather 🌦</h1>"

    latitude = request.cf.latitude
    longitude = request.cf.longitude
    endpoint += f"{latitude};{longitude}/?token={token}"

    response = await fetch(endpoint)
    content = await response.json()

    html_content += "<p>This is a demo using Workers geolocation data.</p>"
    html_content += f"<p>You are located at: {latitude},{longitude}.</p>"
    html_content += f"<p>Based off sensor data from {content['data']['city']['name']}:</p>"
    html_content += f"<p>The AQI level is: {content['data']['aqi']}.</p>"
    html_content += f"<p>The NO2 level is: {content['data']['iaqi']['no2']['v']}.</p>"
    html_content += f"<p>The O3 level is: {content['data']['iaqi']['o3']['v']}.</p>"
    html_content += f"<p>The temperature is: {content['data']['iaqi']['t']['v']}°C.</p>"

    html = f"""<!DOCTYPE html>
      <head>
        <title>Geolocation: Weather</title>
        <style>{html_style}</style>
      </head>
      <body>
        {html_content}
      </body>"""

    headers = {"content-type": "text/html;charset=UTF-8"}
    return Response(html, headers=headers)
```
---

# Geolocation: Custom Styling

URL: https://developers.cloudflare.com/workers/examples/geolocation-custom-styling/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-custom-styling)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
  async fetch(request) {
    let grads = [
      [{ color: "00000c", position: 0 }, { color: "00000c", position: 0 }],
      [{ color: "020111", position: 85 }, { color: "191621", position: 100 }],
      [{ color: "020111", position: 60 }, { color: "20202c", position: 100 }],
      [{ color: "020111", position: 10 }, { color: "3a3a52", position: 100 }],
      [{ color: "20202c", position: 0 }, { color: "515175", position: 100 }],
      [{ color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }],
      [{ color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }],
      [{ color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }],
      [{ color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }],
      [{ color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }],
      [{ color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }],
      [{ color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }],
      [{ color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }],
      [{ color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }],
      [{ color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }],
      [{ color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }],
      [{ color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }],
      [{ color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }],
      [{ color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }],
      [{ color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }],
      [{ color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }],
      [{ color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }],
      [{ color: "090401", position: 50 }, { color: "4B1D06", position: 100 }],
      [{ color: "00000c", position: 80 }, { color: "150800", position: 100 }],
    ];

    async function toCSSGradient(hour) {
      let css = "linear-gradient(to bottom,";
      const data = grads[hour];
      const len = data.length;
      for (let i = 0; i < len; i++) {
        const item = data[i];
        css += ` #${item.color} ${item.position}%`;
        if (i < len - 1) css += ",";
      }
      return css + ")";
    }

    let html_content = "";
    let html_style = `
      html{width:100vw; height:100vh;}
      body{padding:0; margin:0 !important;height:100%;}
      #container {
        display: flex;
        flex-direction:column;
        align-items: center;
        justify-content: center;
        height: 100%;
        color:white;
        font-family:sans-serif;
      }`;

    const timezone = request.cf.timezone;
    console.log(timezone);
    let localized_date = new Date(
      new Date().toLocaleString("en-US", { timeZone: timezone }),
    );
    let hour = localized_date.getHours();
    let minutes = localized_date.getMinutes();

    html_content += "<h1>" + hour + ":" + minutes + "</h1>";
    html_content += "<p>" + timezone + "</p>";
    html_style += "body{background:" + (await toCSSGradient(hour)) + ";}";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Customized Design</title>
        <style>${html_style}</style>
      </head>
      <body>
        <div id="container">
          ${html_content}
        </div>
      </body>`;

    return new Response(html, {
      headers: { "content-type": "text/html;charset=UTF-8" },
    });
  },
};
```
```ts
export default {
  async fetch(request): Promise<Response> {
    let grads = [
      [{ color: "00000c", position: 0 }, { color: "00000c", position: 0 }],
      [{ color: "020111", position: 85 }, { color: "191621", position: 100 }],
      [{ color: "020111", position: 60 }, { color: "20202c", position: 100 }],
      [{ color: "020111", position: 10 }, { color: "3a3a52", position: 100 }],
      [{ color: "20202c", position: 0 }, { color: "515175", position: 100 }],
      [{ color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }],
      [{ color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }],
      [{ color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }],
      [{ color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }],
      [{ color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }],
      [{ color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }],
      [{ color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }],
      [{ color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }],
      [{ color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }],
      [{ color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }],
      [{ color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }],
      [{ color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }],
      [{ color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }],
      [{ color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }],
      [{ color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }],
      [{ color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }],
      [{ color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }],
      [{ color: "090401", position: 50 }, { color: "4B1D06", position: 100 }],
      [{ color: "00000c", position: 80 }, { color: "150800", position: 100 }],
    ];

    async function toCSSGradient(hour) {
      let css = "linear-gradient(to bottom,";
      const data = grads[hour];
      const len = data.length;
      for (let i = 0; i < len; i++) {
        const item = data[i];
        css += ` #${item.color} ${item.position}%`;
        if (i < len - 1) css += ",";
      }
      return css + ")";
    }

    let html_content = "";
    let html_style = `
      html{width:100vw; height:100vh;}
      body{padding:0; margin:0 !important;height:100%;}
      #container {
        display: flex;
        flex-direction:column;
        align-items: center;
        justify-content: center;
        height: 100%;
        color:white;
        font-family:sans-serif;
      }`;

    const timezone = request.cf.timezone;
    console.log(timezone);
    let localized_date = new Date(
      new Date().toLocaleString("en-US", { timeZone: timezone }),
    );
    let hour = localized_date.getHours();
    let minutes = localized_date.getMinutes();

    html_content += "<h1>" + hour + ":" + minutes + "</h1>";
    html_content += "<p>" + timezone + "</p>";
    html_style += "body{background:" + (await toCSSGradient(hour)) + ";}";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Customized Design</title>
        <style>${html_style}</style>
      </head>
      <body>
        <div id="container">
          ${html_content}
        </div>
      </body>`;

    return new Response(html, {
      headers: { "content-type": "text/html;charset=UTF-8" },
    });
  },
} satisfies ExportedHandler;
```
```ts
import { Hono } from 'hono';

type Bindings = {};
type ColorStop = { color: string; position: number };

const app = new Hono<{ Bindings: Bindings }>();

// Gradient configurations for each hour of the day (0-23)
const grads: ColorStop[][] = [
  [{ color: "00000c", position: 0 }, { color: "00000c", position: 0 }],
  [{ color: "020111", position: 85 }, { color: "191621", position: 100 }],
  [{ color: "020111", position: 60 }, { color: "20202c", position: 100 }],
  [{ color: "020111", position: 10 }, { color: "3a3a52", position: 100 }],
  [{ color: "20202c", position: 0 }, { color: "515175", position: 100 }],
  [{ color: "40405c", position: 0 }, { color: "6f71aa", position: 80 }, { color: "8a76ab", position: 100 }],
  [{ color: "4a4969", position: 0 }, { color: "7072ab", position: 50 }, { color: "cd82a0", position: 100 }],
  [{ color: "757abf", position: 0 }, { color: "8583be", position: 60 }, { color: "eab0d1", position: 100 }],
  [{ color: "82addb", position: 0 }, { color: "ebb2b1", position: 100 }],
  [{ color: "94c5f8", position: 1 }, { color: "a6e6ff", position: 70 }, { color: "b1b5ea", position: 100 }],
  [{ color: "b7eaff", position: 0 }, { color: "94dfff", position: 100 }],
  [{ color: "9be2fe", position: 0 }, { color: "67d1fb", position: 100 }],
  [{ color: "90dffe", position: 0 }, { color: "38a3d1", position: 100 }],
  [{ color: "57c1eb", position: 0 }, { color: "246fa8", position: 100 }],
  [{ color: "2d91c2", position: 0 }, { color: "1e528e", position: 100 }],
  [{ color: "2473ab", position: 0 }, { color: "1e528e", position: 70 }, { color: "5b7983", position: 100 }],
  [{ color: "1e528e", position: 0 }, { color: "265889", position: 50 }, { color: "9da671", position: 100 }],
  [{ color: "1e528e", position: 0 }, { color: "728a7c", position: 50 }, { color: "e9ce5d", position: 100 }],
  [{ color: "154277", position: 0 }, { color: "576e71", position: 30 }, { color: "e1c45e", position: 70 }, { color: "b26339", position: 100 }],
  [{ color: "163C52", position: 0 }, { color: "4F4F47", position: 30 }, { color: "C5752D", position: 60 }, { color: "B7490F", position: 80 }, { color: "2F1107", position: 100 }],
  [{ color: "071B26", position: 0 }, { color: "071B26", position: 30 }, { color: "8A3B12", position: 80 }, { color: "240E03", position: 100 }],
  [{ color: "010A10", position: 30 }, { color: "59230B", position: 80 }, { color: "2F1107", position: 100 }],
  [{ color: "090401", position: 50 }, { color: "4B1D06", position: 100 }],
  [{ color: "00000c", position: 80 }, { color: "150800", position: 100 }],
];

// Convert hour to CSS gradient
async function toCSSGradient(hour: number): Promise<string> {
  let css = "linear-gradient(to bottom,";
  const data = grads[hour];
  const len = data.length;
  for (let i = 0; i < len; i++) {
    const item = data[i];
    css += ` #${item.color} ${item.position}%`;
    if (i < len - 1) css += ",";
  }
  return css + ")";
}

app.get('*', async (c) => {
  const request = c.req.raw;

  // Base HTML style
  let html_style = `
    html{width:100vw; height:100vh;}
    body{padding:0; margin:0 !important;height:100%;}
    #container {
      display: flex;
      flex-direction:column;
      align-items: center;
      justify-content: center;
      height: 100%;
      color:white;
      font-family:sans-serif;
    }`;

  // Get timezone from Cloudflare request
  const timezone = request.cf?.timezone || 'UTC';
  console.log(timezone);

  // Get localized time
  let localized_date = new Date(
    new Date().toLocaleString("en-US", { timeZone: timezone })
  );
  let hour = localized_date.getHours();
  let minutes = localized_date.getMinutes();

  // Generate HTML content
  let html_content = `<h1>${hour}:${minutes}</h1>`;
  html_content += `<p>${timezone}</p>`;

  // Add background gradient based on hour
  html_style += `body{background:${await toCSSGradient(hour)};}`;

  // Complete HTML document
  let html = `<!DOCTYPE html>
    <head>
      <title>Geolocation: Customized Design</title>
      <style>${html_style}</style>
    </head>
    <body>
      <div id="container">
        ${html_content}
      </div>
    </body>`;

  return c.html(html);
});

export default app;
```
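The `toCSSGradient` helper in the examples above is plain string assembly: it joins `{color, position}` stops into a CSS `linear-gradient()` value. The same logic isolated from the Worker, restructured to take the stop array directly (a hypothetical standalone variant, not part of the example code):

```javascript
// Standalone sketch of the gradient-string assembly: each stop becomes
// " #<color> <position>%", comma-separated inside linear-gradient().
function toCSSGradient(stops) {
  let css = "linear-gradient(to bottom,";
  for (let i = 0; i < stops.length; i++) {
    css += ` #${stops[i].color} ${stops[i].position}%`;
    if (i < stops.length - 1) css += ",";
  }
  return css + ")";
}
```

In the Worker, the stop array is chosen by indexing `grads` with the visitor's local hour, so each hour of the day maps to a different sky-colored background.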
---

# Geolocation: Hello World

URL: https://developers.cloudflare.com/workers/examples/geolocation-hello-world/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/geolocation-hello-world)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
  async fetch(request) {
    let html_content = "";
    let html_style =
      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

    html_content += "<p>Colo: " + request.cf.colo + "</p>";
    html_content += "<p>Country: " + request.cf.country + "</p>";
    html_content += "<p>City: " + request.cf.city + "</p>";
    html_content += "<p>Continent: " + request.cf.continent + "</p>";
    html_content += "<p>Latitude: " + request.cf.latitude + "</p>";
    html_content += "<p>Longitude: " + request.cf.longitude + "</p>";
    html_content += "<p>PostalCode: " + request.cf.postalCode + "</p>";
    html_content += "<p>MetroCode: " + request.cf.metroCode + "</p>";
    html_content += "<p>Region: " + request.cf.region + "</p>";
    html_content += "<p>RegionCode: " + request.cf.regionCode + "</p>";
    html_content += "<p>Timezone: " + request.cf.timezone + "</p>";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Hello World</title>
        <style>${html_style}</style>
      </head>
      <body>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
};
```
```ts
export default {
  async fetch(request): Promise<Response> {
    let html_content = "";
    let html_style =
      "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

    html_content += "<p>Colo: " + request.cf.colo + "</p>";
    html_content += "<p>Country: " + request.cf.country + "</p>";
    html_content += "<p>City: " + request.cf.city + "</p>";
    html_content += "<p>Continent: " + request.cf.continent + "</p>";
    html_content += "<p>Latitude: " + request.cf.latitude + "</p>";
    html_content += "<p>Longitude: " + request.cf.longitude + "</p>";
    html_content += "<p>PostalCode: " + request.cf.postalCode + "</p>";
    html_content += "<p>MetroCode: " + request.cf.metroCode + "</p>";
    html_content += "<p>Region: " + request.cf.region + "</p>";
    html_content += "<p>RegionCode: " + request.cf.regionCode + "</p>";
    html_content += "<p>Timezone: " + request.cf.timezone + "</p>";

    let html = `<!DOCTYPE html>
      <head>
        <title>Geolocation: Hello World</title>
        <style>${html_style}</style>
      </head>
      <body>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        ${html_content}
      </body>`;

    return new Response(html, {
      headers: {
        "content-type": "text/html;charset=UTF-8",
      },
    });
  },
} satisfies ExportedHandler;
```
```py
from workers import Response

async def on_fetch(request):
    html_content = ""
    html_style = "body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}"

    html_content += "<p>Colo: " + request.cf.colo + "</p>"
    html_content += "<p>Country: " + request.cf.country + "</p>"
    html_content += "<p>City: " + request.cf.city + "</p>"
    html_content += "<p>Continent: " + request.cf.continent + "</p>"
    html_content += "<p>Latitude: " + request.cf.latitude + "</p>"
    html_content += "<p>Longitude: " + request.cf.longitude + "</p>"
    html_content += "<p>PostalCode: " + request.cf.postalCode + "</p>"
    html_content += "<p>Region: " + request.cf.region + "</p>"
    html_content += "<p>RegionCode: " + request.cf.regionCode + "</p>"
    html_content += "<p>Timezone: " + request.cf.timezone + "</p>"

    html = f"""<!DOCTYPE html>
      <head>
        <title>Geolocation: Hello World</title>
        <style>{html_style}</style>
      </head>
      <body>
        <h1>Geolocation: Hello World!</h1>
        <p>You now have access to geolocation data about where your user is visiting from.</p>
        {html_content}
      </body>"""

    headers = {"content-type": "text/html;charset=UTF-8"}
    return Response(html, headers=headers)
```
```ts
import { Hono } from "hono";
import { html } from "hono/html";

// Define the RequestWithCf interface to add Cloudflare-specific properties
interface RequestWithCf extends Request {
	cf: {
		// Cloudflare-specific properties for geolocation
		colo: string;
		country: string;
		city: string;
		continent: string;
		latitude: string;
		longitude: string;
		postalCode: string;
		metroCode: string;
		region: string;
		regionCode: string;
		timezone: string;
		// Add other CF properties as needed
	};
}

const app = new Hono();

app.get("*", (c) => {
	// Cast the raw request to include Cloudflare-specific properties
	const request = c.req.raw as unknown as RequestWithCf;

	// Define styles
	const html_style =
		"body{padding:6em; font-family: sans-serif;} h1{color:#f6821f;}";

	// Create content with geolocation data
	let html_content = html`
		<p>Colo: ${request.cf.colo}</p>
		<p>Country: ${request.cf.country}</p>
		<p>City: ${request.cf.city}</p>
		<p>Continent: ${request.cf.continent}</p>
		<p>Latitude: ${request.cf.latitude}</p>
		<p>Longitude: ${request.cf.longitude}</p>
		<p>PostalCode: ${request.cf.postalCode}</p>
		<p>MetroCode: ${request.cf.metroCode}</p>
		<p>Region: ${request.cf.region}</p>
		<p>RegionCode: ${request.cf.regionCode}</p>
		<p>Timezone: ${request.cf.timezone}</p>
	`;

	// Compose the full HTML
	const htmlContent = html`<!DOCTYPE html>
		<head>
			<title>Geolocation: Hello World</title>
			<style>${html_style}</style>
		</head>
		<body>
			<h1>Geolocation: Hello World!</h1>
			<p>You now have access to geolocation data about where your user is visiting from.</p>
			${html_content}
		</body>`;

	// Return the HTML response
	return c.html(htmlContent);
});

export default app;
```
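The examples above render the `request.cf` fields as HTML. If you only need the raw values, for logging or a JSON API, the same fields can be serialized directly. A minimal sketch, assuming the field names shown above (`geoSummary` is a hypothetical helper, not part of the Workers API):

```javascript
// geoSummary is a hypothetical helper: it picks a few request.cf
// geolocation fields and serializes them as JSON. request.cf may be
// undefined outside Cloudflare's runtime, so we default to an empty object.
function geoSummary(cf = {}) {
	const { city, country, timezone, latitude, longitude } = cf;
	// JSON.stringify drops keys whose value is undefined
	return JSON.stringify({ city, country, timezone, latitude, longitude });
}

export default {
	async fetch(request) {
		return new Response(geoSummary(request.cf), {
			headers: { "content-type": "application/json" },
		});
	},
};
```

Because `geoSummary` is a pure function of the `cf` object, it is easy to unit test without a running Worker.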
---

# Hot-link protection

URL: https://developers.cloudflare.com/workers/examples/hot-link-protection/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/hot-link-protection)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
	async fetch(request) {
		const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
		const PROTECTED_TYPE = "image/";

		// Fetch the original request
		const response = await fetch(request);

		// If it's an image, engage hotlink protection based on the
		// Referer header.
		const referer = request.headers.get("Referer");
		const contentType = response.headers.get("Content-Type") || "";

		if (referer && contentType.startsWith(PROTECTED_TYPE)) {
			// If the hostnames don't match, it's a hotlink
			if (new URL(referer).hostname !== new URL(request.url).hostname) {
				// Redirect the user to your website
				return Response.redirect(HOMEPAGE_URL, 302);
			}
		}

		// Everything is fine, return the response normally.
		return response;
	},
};
```

```ts
export default {
	async fetch(request): Promise<Response> {
		const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
		const PROTECTED_TYPE = "image/";

		// Fetch the original request
		const response = await fetch(request);

		// If it's an image, engage hotlink protection based on the
		// Referer header.
		const referer = request.headers.get("Referer");
		const contentType = response.headers.get("Content-Type") || "";

		if (referer && contentType.startsWith(PROTECTED_TYPE)) {
			// If the hostnames don't match, it's a hotlink
			if (new URL(referer).hostname !== new URL(request.url).hostname) {
				// Redirect the user to your website
				return Response.redirect(HOMEPAGE_URL, 302);
			}
		}

		// Everything is fine, return the response normally.
		return response;
	},
} satisfies ExportedHandler;
```

```py
from workers import Response, fetch
from urllib.parse import urlparse

async def on_fetch(request):
    homepage_url = "https://tutorial.cloudflareworkers.com/"
    protected_type = "image/"

    # Fetch the original request
    response = await fetch(request)

    # If it's an image, engage hotlink protection based on the referer header
    referer = request.headers["Referer"]
    content_type = response.headers["Content-Type"] or ""

    if referer and content_type.startswith(protected_type):
        # If the hostnames don't match, it's a hotlink
        if urlparse(referer).hostname != urlparse(request.url).hostname:
            # Redirect the user to your website
            return Response.redirect(homepage_url, 302)

    # Everything is fine, return the response normally
    return response
```

```ts
import { Hono } from "hono";

const app = new Hono();

// Middleware for hot-link protection
app.use("*", async (c, next) => {
	const HOMEPAGE_URL = "https://tutorial.cloudflareworkers.com/";
	const PROTECTED_TYPE = "image/";

	// Continue to the next handler to get the response
	await next();

	// If we have a response, check for hotlinking
	if (c.res) {
		// If it's an image, engage hotlink protection based on the Referer header
		const referer = c.req.header("Referer");
		const contentType = c.res.headers.get("Content-Type") || "";

		if (referer && contentType.startsWith(PROTECTED_TYPE)) {
			// If the hostnames don't match, it's a hotlink
			if (new URL(referer).hostname !== new URL(c.req.url).hostname) {
				// Redirect the user to your website
				c.res = c.redirect(HOMEPAGE_URL, 302);
			}
		}
	}
});

// Default route handler that passes through the request to the origin
app.all("*", async (c) => {
	// Fetch the original request
	return fetch(c.req.raw);
});

export default app;
```

---

# Custom Domain with Images

URL: https://developers.cloudflare.com/workers/examples/images-workers/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/images-workers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

To serve images from a custom domain:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com).
2. Select your account > select **Workers & Pages**.
3. Select **Create application** > **Workers** > **Create Worker** and create your Worker.
4. In your Worker, select **Quick edit** and paste the following code.
```js
export default {
	async fetch(request) {
		// You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
		const accountHash = "";
		const { pathname } = new URL(request.url);

		// A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
		// will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
		return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
	},
};
```

```ts
export default {
	async fetch(request): Promise<Response> {
		// You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
		const accountHash = "";
		const { pathname } = new URL(request.url);

		// A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
		// will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
		return fetch(`https://imagedelivery.net/${accountHash}${pathname}`);
	},
} satisfies ExportedHandler;
```

```ts
import { Hono } from "hono";

interface Env {
	// You can store your account hash as a binding variable
	ACCOUNT_HASH?: string;
}

const app = new Hono<{ Bindings: Env }>();

app.get("*", async (c) => {
	// You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
	// Either get it from environment or hardcode it here
	const accountHash = c.env.ACCOUNT_HASH || "";
	const url = new URL(c.req.url);

	// A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
	// will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
	return fetch(`https://imagedelivery.net/${accountHash}${url.pathname}`);
});

export default app;
```

```py
from js import URL, fetch

async def on_fetch(request):
    # You can find this in the dashboard, it should look something like this: ZWd9g1K7eljCn_KDTu_MWA
    account_hash = ""
    url = URL.new(request.url)

    # A request to something like cdn.example.com/83eb7b2-5392-4565-b69e-aff66acddd00/public
    # will fetch "https://imagedelivery.net/<ACCOUNT_HASH>/83eb7b2-5392-4565-b69e-aff66acddd00/public"
    return fetch(f'https://imagedelivery.net/{account_hash}{url.pathname}')
```

Another way you can serve images from a custom domain is by using the `cdn-cgi/imagedelivery` prefix path, which triggers the `cdn-cgi` image proxy. Below is an example showing the hostname as a Cloudflare proxied domain under the same account as the image, followed by the prefix path and the image's `<ACCOUNT_HASH>`, `<IMAGE_ID>` and `<VARIANT_NAME>`, which can be found under **Images** in the Cloudflare dashboard.

```js
https://example.com/cdn-cgi/imagedelivery/<ACCOUNT_HASH>/<IMAGE_ID>/<VARIANT_NAME>
```

---

# Examples

URL: https://developers.cloudflare.com/workers/examples/

import { GlossaryTooltip, ListExamples } from "~/components";

Explore the following examples for Workers.

---

# Logging headers to console

URL: https://developers.cloudflare.com/workers/examples/logging-headers/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/logging-headers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
import { TabItem, Tabs } from "~/components";

```js
export default {
	async fetch(request) {
		console.log(new Map(request.headers));
		return new Response("Hello world");
	},
};
```

```ts
export default {
	async fetch(request): Promise<Response> {
		console.log(new Map(request.headers));
		return new Response("Hello world");
	},
} satisfies ExportedHandler;
```

```py
from workers import Response

async def on_fetch(request):
    print(dict(request.headers))
    return Response('Hello world')
```

```rs
use worker::*;

#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<Response> {
    console_log!("{:?}", req.headers());
    Response::ok("hello world")
}
```

```ts
import { Hono } from "hono";

const app = new Hono();

app.get("*", (c) => {
	// Different ways to log headers in Hono:

	// 1. Using Map to display headers in console
	console.log("Headers as Map:", new Map(c.req.raw.headers));

	// 2. Using spread operator to log headers
	console.log("Headers spread:", [...c.req.raw.headers]);

	// 3. Using Object.fromEntries to convert to an object
	console.log("Headers as Object:", Object.fromEntries(c.req.raw.headers));

	// 4. Hono's built-in header accessor (for individual headers)
	console.log("User-Agent:", c.req.header("User-Agent"));

	// 5. Using c.req.header() with no arguments to get all headers
	console.log("All headers from Hono context:", c.req.header());

	return c.text("Hello world");
});

export default app;
```

---

## Console-logging headers

Use a `Map` if you need to log a `Headers` object to the console:

```js
console.log(new Map(request.headers));
```

Use the spread operator if you need to quickly stringify a `Headers` object:

```js
let requestHeaders = JSON.stringify([...request.headers]);
```

Use `Object.fromEntries` to convert the headers to an object:

```js
let requestHeaders = Object.fromEntries(request.headers);
```

### The problem

When debugging Workers, you may need to examine the headers on a request or response. A common mistake is to try to log headers to the developer console via code like this:

```js
console.log(request.headers);
```

Or this:

```js
console.log(`Request headers: ${JSON.stringify(request.headers)}`);
```

Both attempts result in what appears to be an empty object — the string `"{}"` — even though calling `request.headers.has("Your-Header-Name")` might return true. This is the same behavior that browsers implement.

The reason this happens is that [Headers](https://developer.mozilla.org/en-US/docs/Web/API/Headers) objects do not store headers in enumerable JavaScript properties, so the developer console and JSON stringifier do not know how to read the names and values of the headers. It is not actually an empty object, but rather an opaque object.

`Headers` objects are iterable, which you can take advantage of to develop a couple of quick one-liners for debug-printing headers.

### Pass headers through a Map

The first common idiom for making `Headers` `console.log()`-friendly is to construct a `Map` object from the `Headers` object and log the `Map` object.

```js
console.log(new Map(request.headers));
```

This works because:

- `Map` objects can be constructed from iterables, like `Headers`.
- The `Map` object does store its entries in enumerable JavaScript properties, so the developer console can see into it.

### Spread headers into an array

The `Map` approach works for calls to `console.log()`. If you need to stringify your headers, you will discover that stringifying a `Map` yields nothing more than `[object Map]`.

Even though a `Map` stores its data in enumerable properties, those properties are [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol)-keyed. Because of this, `JSON.stringify()` will [ignore Symbol-keyed properties](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol#symbols_and_json.stringify) and you will receive an empty `{}`.

Instead, you can take advantage of the iterability of the `Headers` object in a new way by applying the [spread operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) (`...`) to it.

```js
let requestHeaders = JSON.stringify([...request.headers], null, 2);
console.log(`Request headers: ${requestHeaders}`);
```

### Convert headers into an object with Object.fromEntries (ES2019)

ES2019 provides [`Object.fromEntries`](https://github.com/tc39/proposal-object-from-entries), which converts the headers into an object:

```js
let headersObject = Object.fromEntries(request.headers);
let requestHeaders = JSON.stringify(headersObject, null, 2);
console.log(`Request headers: ${requestHeaders}`);
```

This results in something like:

```js
Request headers: {
  "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
  "accept-encoding": "gzip",
  "accept-language": "en-US,en;q=0.9",
  "cf-ipcountry": "US",
  // ...
}
```

---

# Modify request property

URL: https://developers.cloudflare.com/workers/examples/modify-request-property/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-request-property)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
import { TabItem, Tabs } from "~/components";

```js
export default {
	async fetch(request) {
		/**
		 * Example someHost is set up to return raw JSON
		 * @param {string} someUrl the URL to send the request to, since we are setting hostname too only path is applied
		 * @param {string} someHost the host the request will resolve to
		 */
		const someHost = "example.com";
		const someUrl = "https://foo.example.com/api.js";

		/**
		 * The best practice is to only assign new RequestInit properties
		 * on the request object using either a method or the constructor
		 */
		const newRequestInit = {
			// Change method
			method: "POST",
			// Change body
			body: JSON.stringify({ bar: "foo" }),
			// Change the redirect mode.
			redirect: "follow",
			// Change headers, note this method will erase existing headers
			headers: {
				"Content-Type": "application/json",
			},
			// Change a Cloudflare feature on the outbound response
			cf: { apps: false },
		};

		// Change just the host
		const url = new URL(someUrl);
		url.hostname = someHost;

		// Best practice is to always use the original request to construct the new request
		// to clone all the attributes. Applying the URL also requires a constructor
		// since once a Request has been constructed, its URL is immutable.
		const newRequest = new Request(
			url.toString(),
			new Request(request, newRequestInit),
		);

		// Set headers using method
		newRequest.headers.set("X-Example", "bar");
		newRequest.headers.set("Content-Type", "application/json");

		try {
			return await fetch(newRequest);
		} catch (e) {
			return new Response(JSON.stringify({ error: e.message }), {
				status: 500,
			});
		}
	},
};
```

```ts
export default {
	async fetch(request): Promise<Response> {
		/**
		 * Example someHost is set up to return raw JSON
		 * @param {string} someUrl the URL to send the request to, since we are setting hostname too only path is applied
		 * @param {string} someHost the host the request will resolve to
		 */
		const someHost = "example.com";
		const someUrl = "https://foo.example.com/api.js";

		/**
		 * The best practice is to only assign new RequestInit properties
		 * on the request object using either a method or the constructor
		 */
		const newRequestInit = {
			// Change method
			method: "POST",
			// Change body
			body: JSON.stringify({ bar: "foo" }),
			// Change the redirect mode.
			redirect: "follow",
			// Change headers, note this method will erase existing headers
			headers: {
				"Content-Type": "application/json",
			},
			// Change a Cloudflare feature on the outbound response
			cf: { apps: false },
		};

		// Change just the host
		const url = new URL(someUrl);
		url.hostname = someHost;

		// Best practice is to always use the original request to construct the new request
		// to clone all the attributes. Applying the URL also requires a constructor
		// since once a Request has been constructed, its URL is immutable.
		const newRequest = new Request(
			url.toString(),
			new Request(request, newRequestInit),
		);

		// Set headers using method
		newRequest.headers.set("X-Example", "bar");
		newRequest.headers.set("Content-Type", "application/json");

		try {
			return await fetch(newRequest);
		} catch (e) {
			return new Response(JSON.stringify({ error: e.message }), {
				status: 500,
			});
		}
	},
} satisfies ExportedHandler;
```

```py
import json
from pyodide.ffi import to_js as _to_js
from js import Object, URL, Request, fetch, Response

def to_js(obj):
    return _to_js(obj, dict_converter=Object.fromEntries)

async def on_fetch(request):
    some_host = "example.com"
    some_url = "https://foo.example.com/api.js"

    # The best practice is to only assign new_request_init properties
    # on the request object using either a method or the constructor
    new_request_init = {
        "method": "POST",  # Change method
        "body": json.dumps({"bar": "foo"}),  # Change body
        "redirect": "follow",  # Change the redirect mode
        # Change headers, note this method will erase existing headers
        "headers": {
            "Content-Type": "application/json",
        },
        # Change a Cloudflare feature on the outbound response
        "cf": {"apps": False},
    }

    # Change just the host
    url = URL.new(some_url)
    url.hostname = some_host

    # Best practice is to always use the original request to construct the new request
    # to clone all the attributes. Applying the URL also requires a constructor
    # since once a Request has been constructed, its URL is immutable.
    org_request = Request.new(request, to_js(new_request_init))
    new_request = Request.new(url.toString(), org_request)

    new_request.headers["X-Example"] = "bar"
    new_request.headers["Content-Type"] = "application/json"

    try:
        return await fetch(new_request)
    except Exception as e:
        return Response.new({"error": str(e)}, status=500)
```

```ts
import { Hono } from "hono";

const app = new Hono();

app.all("*", async (c) => {
	/**
	 * Example someHost is set up to return raw JSON
	 */
	const someHost = "example.com";
	const someUrl = "https://foo.example.com/api.js";

	// Create a URL object to modify the hostname
	const url = new URL(someUrl);
	url.hostname = someHost;

	// Create a new request
	// First create a clone of the original request with the new properties
	const requestClone = new Request(c.req.raw, {
		// Change method
		method: "POST",
		// Change body
		body: JSON.stringify({ bar: "foo" }),
		// Change the redirect mode
		redirect: "follow" as RequestRedirect,
		// Change headers, note this method will erase existing headers
		headers: {
			"Content-Type": "application/json",
			"X-Example": "bar",
		},
		// Change a Cloudflare feature on the outbound response
		cf: { apps: false },
	});

	// Then create a new request with the modified URL
	const newRequest = new Request(url.toString(), requestClone);

	// Send the modified request
	const response = await fetch(newRequest);

	// Return the response
	return response;
});

// Handle errors
app.onError((err, c) => {
	return err.getResponse();
});

export default app;
```

---

# Modify response

URL: https://developers.cloudflare.com/workers/examples/modify-response/

If you want to get started quickly, click on the button below.
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/modify-response)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

```js
export default {
	async fetch(request) {
		/**
		 * @param {string} headerNameSrc Header to get the new value from
		 * @param {string} headerNameDst Header to set based off of value in src
		 */
		const headerNameSrc = "foo"; //"Orig-Header"
		const headerNameDst = "Last-Modified";

		/**
		 * Response properties are immutable. To change them, construct a new
		 * Response and pass modified status or statusText in the ResponseInit
		 * object. Response headers can be modified through the headers `set` method.
		 */
		const originalResponse = await fetch(request);

		// Change status and statusText, but preserve body and headers
		let response = new Response(originalResponse.body, {
			status: 500,
			statusText: "some message",
			headers: originalResponse.headers,
		});

		// Change response body by adding the foo prop
		const originalBody = await originalResponse.json();
		const body = JSON.stringify({ foo: "bar", ...originalBody });
		response = new Response(body, response);

		// Add a header using set method
		response.headers.set("foo", "bar");

		// Set destination header to the value of the source header
		const src = response.headers.get(headerNameSrc);
		if (src != null) {
			response.headers.set(headerNameDst, src);
			console.log(
				`Response header "${headerNameDst}" was set to "${response.headers.get(
					headerNameDst,
				)}"`,
			);
		}
		return response;
	},
};
```

```ts
export default {
	async fetch(request): Promise<Response> {
		/**
		 * @param {string} headerNameSrc Header to get the new value from
		 * @param {string} headerNameDst Header to set based off of value in src
		 */
		const headerNameSrc = "foo"; //"Orig-Header"
		const headerNameDst = "Last-Modified";

		/**
		 * Response properties are immutable. To change them, construct a new
		 * Response and pass modified status or statusText in the ResponseInit
		 * object. Response headers can be modified through the headers `set` method.
		 */
		const originalResponse = await fetch(request);

		// Change status and statusText, but preserve body and headers
		let response = new Response(originalResponse.body, {
			status: 500,
			statusText: "some message",
			headers: originalResponse.headers,
		});

		// Change response body by adding the foo prop
		const originalBody = await originalResponse.json();
		const body = JSON.stringify({ foo: "bar", ...originalBody });
		response = new Response(body, response);

		// Add a header using set method
		response.headers.set("foo", "bar");

		// Set destination header to the value of the source header
		const src = response.headers.get(headerNameSrc);
		if (src != null) {
			response.headers.set(headerNameDst, src);
			console.log(
				`Response header "${headerNameDst}" was set to "${response.headers.get(
					headerNameDst,
				)}"`,
			);
		}
		return response;
	},
} satisfies ExportedHandler;
```

```py
from workers import Response, fetch
import json

async def on_fetch(request):
    header_name_src = "foo"  # Header to get the new value from
    header_name_dst = "Last-Modified"  # Header to set based off of value in src

    # Response properties are immutable. To change them, construct a new response
    original_response = await fetch(request)

    # Change status and statusText, but preserve body and headers
    response = Response(original_response.body,
                        status=500,
                        status_text="some message",
                        headers=original_response.headers)

    # Change response body by adding the foo prop
    new_body = await original_response.json()
    new_body["foo"] = "bar"
    response.replace_body(json.dumps(new_body))

    # Add a new header
    response.headers["foo"] = "bar"

    # Set destination header to the value of the source header
    src = response.headers[header_name_src]
    if src is not None:
        response.headers[header_name_dst] = src
        print(f'Response header {header_name_dst} was set to {response.headers[header_name_dst]}')

    return response
```

```ts
import { Hono } from "hono";

const app = new Hono();

app.get("*", async (c) => {
	/**
	 * Header configuration
	 */
	const headerNameSrc = "foo"; // Header to get the new value from
	const headerNameDst = "Last-Modified"; // Header to set based off of value in src

	/**
	 * Response properties are immutable. With Hono, we can modify the response
	 * by creating custom response objects.
	 */
	const originalResponse = await fetch(c.req.raw);

	// Get the JSON body from the original response
	const originalBody = await originalResponse.json();

	// Modify the body by adding a new property
	const modifiedBody = { foo: "bar", ...originalBody };

	// Create a new custom response with modified status, headers, and body
	const response = new Response(JSON.stringify(modifiedBody), {
		status: 500,
		statusText: "some message",
		headers: originalResponse.headers,
	});

	// Add a header using set method
	response.headers.set("foo", "bar");

	// Set destination header to the value of the source header
	const src = response.headers.get(headerNameSrc);
	if (src != null) {
		response.headers.set(headerNameDst, src);
		console.log(
			`Response header "${headerNameDst}" was set to "${response.headers.get(headerNameDst)}"`,
		);
	}

	return response;
});

export default app;
```

---

# Multiple Cron Triggers

URL: https://developers.cloudflare.com/workers/examples/multiple-cron-triggers/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/multiple-cron-triggers)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.
import { TabItem, Tabs } from "~/components";

```js
export default {
	async scheduled(event, env, ctx) {
		// Write code for updating your API
		switch (event.cron) {
			case "*/3 * * * *":
				// Every three minutes
				await updateAPI();
				break;
			case "*/10 * * * *":
				// Every ten minutes
				await updateAPI2();
				break;
			case "*/45 * * * *":
				// Every forty-five minutes
				await updateAPI3();
				break;
		}
		console.log("cron processed");
	},
};
```

```ts
interface Env {}

export default {
	async scheduled(
		controller: ScheduledController,
		env: Env,
		ctx: ExecutionContext,
	) {
		// Write code for updating your API
		switch (controller.cron) {
			case "*/3 * * * *":
				// Every three minutes
				await updateAPI();
				break;
			case "*/10 * * * *":
				// Every ten minutes
				await updateAPI2();
				break;
			case "*/45 * * * *":
				// Every forty-five minutes
				await updateAPI3();
				break;
		}
		console.log("cron processed");
	},
};
```

```ts
import { Hono } from "hono";

interface Env {}

// Create Hono app
const app = new Hono<{ Bindings: Env }>();

// Regular routes for normal HTTP requests
app.get("/", (c) => c.text("Multiple Cron Trigger Example"));

// Export both the app and a scheduled function
export default {
	// The Hono app handles regular HTTP requests
	fetch: app.fetch,

	// The scheduled function handles Cron triggers
	async scheduled(
		controller: ScheduledController,
		env: Env,
		ctx: ExecutionContext,
	) {
		// Check which cron schedule triggered this execution
		switch (controller.cron) {
			case "*/3 * * * *":
				// Every three minutes
				await updateAPI();
				break;
			case "*/10 * * * *":
				// Every ten minutes
				await updateAPI2();
				break;
			case "*/45 * * * *":
				// Every forty-five minutes
				await updateAPI3();
				break;
		}
		console.log("cron processed");
	},
};
```

## Test Cron Triggers using Wrangler

The recommended way of testing Cron Triggers is using Wrangler, by passing the `--test-scheduled` flag to [`wrangler dev`](/workers/wrangler/commands/#dev). This will expose a `/__scheduled` (or `/cdn-cgi/handler/scheduled` for Python Workers) route which can be used to test using an HTTP request. To simulate different cron patterns, a `cron` query parameter can be passed in.

```sh
npx wrangler dev --test-scheduled

curl "http://localhost:8787/__scheduled?cron=*%2F3+*+*+*+*"

curl "http://localhost:8787/cdn-cgi/handler/scheduled?cron=*+*+*+*+*" # Python Workers
```

---

# Stream OpenAI API Responses

URL: https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/

If you want to get started quickly, click on the button below.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/openai-sdk-streaming)

This creates a repository in your GitHub account and deploys the application to Cloudflare Workers.

import { TabItem, Tabs } from "~/components";

In order to run this code, you must install the OpenAI SDK by running `npm i openai`.

:::note
For analytics, caching, rate limiting, and more, you can also send requests like this through Cloudflare's [AI Gateway](/ai-gateway/providers/openai/).
:::

```ts
import OpenAI from "openai";

export default {
	async fetch(request, env, ctx): Promise<Response> {
		const openai = new OpenAI({
			apiKey: env.OPENAI_API_KEY,
		});

		// Create a TransformStream to handle streaming data
		let { readable, writable } = new TransformStream();
		let writer = writable.getWriter();
		const textEncoder = new TextEncoder();

		ctx.waitUntil(
			(async () => {
				const stream = await openai.chat.completions.create({
					model: "gpt-4o-mini",
					messages: [{ role: "user", content: "Tell me a story" }],
					stream: true,
				});

				// loop over the data as it is streamed and write to the writeable
				for await (const part of stream) {
					writer.write(
						textEncoder.encode(part.choices[0]?.delta?.content || ""),
					);
				}
				writer.close();
			})(),
		);

		// Send the readable back to the browser
		return new Response(readable);
	},
} satisfies ExportedHandler;
```

```ts
import { Hono } from "hono";
import { streamText } from "hono/streaming";
import OpenAI from "openai";

interface Env {
	OPENAI_API_KEY: string;
}

const app = new Hono<{ Bindings: Env }>();

app.get("*", async (c) => {
	const openai = new OpenAI({
		apiKey: c.env.OPENAI_API_KEY,
	});

	const chatStream = await openai.chat.completions.create({
		model: "gpt-4o-mini",
		messages: [{ role: "user", content: "Tell me a story" }],
		stream: true,
	});

	return streamText(c, async (stream) => {
		for await (const message of chatStream) {
			await stream.write(message.choices[0].delta.content || "");
		}
		stream.close();
	});
});

export default app;
```

---

# Post JSON

URL: https://developers.cloudflare.com/workers/examples/post-json/

If you want to get started quickly, click on the button below.
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/post-json) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) 
in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, }; ``` ```ts export default { async fetch(request): Promise { /** * Example someHost is set up to take in a JSON request * Replace url with the host you wish to send requests to * @param {string} url the URL to send the request to * @param {BodyInit} body the JSON data to send in the request */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) 
in an async function to get the response body * @param {Response} response */ async function gatherResponse(response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await response.json()); } else if (contentType.includes("application/text")) { return response.text(); } else if (contentType.includes("text/html")) { return response.text(); } else { return response.text(); } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const results = await gatherResponse(response); return new Response(results, init); }, } satisfies ExportedHandler; ``` ```py import json from pyodide.ffi import to_js as _to_js from js import Object, fetch, Response, Headers def to_js(obj): return _to_js(obj, dict_converter=Object.fromEntries) # gather_response returns both content-type & response body as a string async def gather_response(response): headers = response.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return (content_type, json.dumps(dict(await response.json()))) return (content_type, await response.text()) async def on_fetch(_request): url = "https://jsonplaceholder.typicode.com/todos/1" body = { "results": ["default data to send"], "errors": None, "msg": "I sent this to the fetch", } options = { "body": json.dumps(body), "method": "POST", "headers": { "content-type": "application/json;charset=UTF-8", }, } response = await fetch(url, to_js(options)) content_type, result = await gather_response(response) headers = Headers.new({"content-type": content_type}.items()) return Response.new(result, headers=headers) ``` ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', async (c) => { /** * Example someHost is set up to take in a JSON request * Replace url with 
the host you wish to send requests to */ const someHost = "https://examples.cloudflareworkers.com/demos"; const url = someHost + "/requests/json"; const body = { results: ["default data to send"], errors: null, msg: "I sent this to the fetch", }; /** * gatherResponse awaits and returns a response body as a string. * Use await gatherResponse(..) in an async function to get the response body */ async function gatherResponse(response: Response) { const { headers } = response; const contentType = headers.get("content-type") || ""; if (contentType.includes("application/json")) { return { contentType, result: JSON.stringify(await response.json()) }; } else if (contentType.includes("application/text")) { return { contentType, result: await response.text() }; } else if (contentType.includes("text/html")) { return { contentType, result: await response.text() }; } else { return { contentType, result: await response.text() }; } } const init = { body: JSON.stringify(body), method: "POST", headers: { "content-type": "application/json;charset=UTF-8", }, }; const response = await fetch(url, init); const { contentType, result } = await gatherResponse(response); return new Response(result, { headers: { "content-type": contentType, }, }); }); export default app; ``` --- # Using timingSafeEqual URL: https://developers.cloudflare.com/workers/examples/protect-against-timing-attacks/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/protect-against-timing-attacks) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. 
import { TabItem, Tabs } from "~/components"; The [`crypto.subtle.timingSafeEqual`](/workers/runtime-apis/web-crypto/#timingsafeequal) function compares two values using a constant-time algorithm. The time taken is independent of the contents of the values. When strings are compared using the equality operator (`==` or `===`), the comparison ends at the first mismatched character. By using `timingSafeEqual`, an attacker cannot use timing to learn at which point the two values differ. The `timingSafeEqual` function takes two `ArrayBuffer` or `TypedArray` values to compare. These buffers must be of equal length, otherwise an exception is thrown. Note that this function is not constant time with respect to the length of the parameters and also does not guarantee constant time for the surrounding code. Handle secrets with care so that the surrounding code does not introduce timing side channels. In order to compare two strings, you must use the [`TextEncoder`](/workers/runtime-apis/encoding/#textencoder) API.
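A minimal sketch of that comparison, assuming the Workers runtime (`crypto.subtle.timingSafeEqual` is a non-standard Workers extension; the helper name `timingSafeStringEqual` is illustrative, not part of any API):

```typescript
// Illustrative helper (not a Workers API): encode both strings and
// length-check before calling the constant-time comparison.
function timingSafeStringEqual(a: string, b: string): boolean {
  const encoder = new TextEncoder();
  const bufA = encoder.encode(a);
  const bufB = encoder.encode(b);
  // timingSafeEqual throws if the buffers differ in length, so reject early.
  // This early return reveals only the length, which is usually acceptable
  // for fixed-length secrets such as API tokens.
  if (bufA.byteLength !== bufB.byteLength) {
    return false;
  }
  // Cast because standard TypeScript lib definitions do not include this
  // non-standard Workers extension.
  return (crypto.subtle as any).timingSafeEqual(bufA, bufB);
}
```

Note that `encode` produces UTF-8 bytes, so two strings with the same character count can still yield buffers of different byte lengths.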
```ts interface Environment { MY_SECRET_VALUE?: string; } export default { async fetch(req: Request, env: Environment) { if (!env.MY_SECRET_VALUE) { return new Response("Missing secret binding", { status: 500 }); } const authToken = req.headers.get("Authorization") || ""; if (authToken.length !== env.MY_SECRET_VALUE.length) { return new Response("Unauthorized", { status: 401 }); } const encoder = new TextEncoder(); const a = encoder.encode(authToken); const b = encoder.encode(env.MY_SECRET_VALUE); if (a.byteLength !== b.byteLength) { return new Response("Unauthorized", { status: 401 }); } if (!crypto.subtle.timingSafeEqual(a, b)) { return new Response("Unauthorized", { status: 401 }); } return new Response("Welcome!"); }, }; ``` ```py from workers import Response from js import TextEncoder, crypto async def on_fetch(request, env): auth_token = request.headers["Authorization"] or "" secret = env.MY_SECRET_VALUE if secret is None: return Response("Missing secret binding", status=500) if len(auth_token) != len(secret): return Response("Unauthorized", status=401) encoder = TextEncoder.new() a = encoder.encode(auth_token) b = encoder.encode(secret) if a.byteLength != b.byteLength: return Response("Unauthorized", status=401) if not crypto.subtle.timingSafeEqual(a, b): return Response("Unauthorized", status=401) return Response("Welcome!") ``` ```ts import { Hono } from 'hono'; interface Environment { Bindings: { MY_SECRET_VALUE?: string; } } const app = new Hono(); // Middleware to handle authentication with timing-safe comparison app.use('*', async (c, next) => { const secret = c.env.MY_SECRET_VALUE; if (!secret) { return c.text("Missing secret binding", 500); } const authToken = c.req.header("Authorization") || ""; // Early length check to avoid unnecessary processing if (authToken.length !== secret.length) { return c.text("Unauthorized", 401); } const encoder = new TextEncoder(); const a = encoder.encode(authToken); const b = encoder.encode(secret); if (a.byteLength 
!== b.byteLength) { return c.text("Unauthorized", 401); } // Perform timing-safe comparison if (!crypto.subtle.timingSafeEqual(a, b)) { return c.text("Unauthorized", 401); } // If we got here, the auth token is valid await next(); }); // Protected route app.get('*', (c) => { return c.text("Welcome!"); }); export default app; ``` --- # Read POST URL: https://developers.cloudflare.com/workers/examples/read-post/ import { TabItem, Tabs, Render } from "~/components"; If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/read-post) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. ```js export default { async fetch(request) { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..) 
in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request) { const contentType = request.headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, }; ``` ```ts export default { async fetch(request): Promise<Response> { /** * rawHtmlResponse returns HTML inputted directly * into the worker script * @param {string} html */ function rawHtmlResponse(html) { return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); } /** * readRequestBody reads in the incoming request body * Use await readRequestBody(..)
in an async function to get the string * @param {Request} request the incoming request to read from */ async function readRequestBody(request: Request) { const contentType = request.headers.get("content-type") || ""; if (contentType.includes("application/json")) { return JSON.stringify(await request.json()); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body = {}; for (const entry of formData.entries()) { body[entry[0]] = entry[1]; } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const { url } = request; if (url.includes("form")) { return rawHtmlResponse(someForm); } if (request.method === "POST") { const reqBody = await readRequestBody(request); const retBody = `The request body sent in was ${reqBody}`; return new Response(retBody); } else if (request.method === "GET") { return new Response("The request was a GET"); } }, } satisfies ExportedHandler; ``` ```py from js import Object, Response, Headers, JSON async def read_request_body(request): headers = request.headers content_type = headers["content-type"] or "" if "application/json" in content_type: return JSON.stringify(await request.json()) if "form" in content_type: form = await request.formData() data = Object.fromEntries(form.entries()) return JSON.stringify(data) return await request.text() async def on_fetch(request): def raw_html_response(html): headers = Headers.new({"content-type": "text/html;charset=UTF-8"}.items()) return Response.new(html, headers=headers) if "form" in request.url: return raw_html_response("") if "POST" in request.method: req_body = await read_request_body(request) ret_body = f"The request body sent in was {req_body}" return Response.new(ret_body) return Response.new("The request
was not POST") ``` ```rs use serde::{Deserialize, Serialize}; use worker::*; fn raw_html_response(html: &str) -> Result { Response::from_html(html) } #[derive(Deserialize, Serialize, Debug)] struct Payload { msg: String, } async fn read_request_body(mut req: Request) -> String { let ctype = req.headers().get("content-type").unwrap().unwrap(); match ctype.as_str() { "application/json" => format!("{:?}", req.json::().await.unwrap()), "text/html" => req.text().await.unwrap(), "multipart/form-data" => format!("{:?}", req.form_data().await.unwrap()), _ => String::from("a file"), } } #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result { if String::from(req.url()?).contains("form") { return raw_html_response("some html form"); } match req.method() { Method::Post => { let req_body = read_request_body(req).await; Response::ok(format!("The request body sent in was {}", req_body)) } _ => Response::ok(format!("The result was a {:?}", req.method())), } } ``` ```ts import { Hono } from "hono"; import { html } from "hono/html"; const app = new Hono(); /** * readRequestBody reads in the incoming request body * @param {Request} request the incoming request to read from */ async function readRequestBody(request: Request): Promise { const contentType = request.headers.get("content-type") || ""; if (contentType.includes("application/json")) { const body = await request.json(); return JSON.stringify(body); } else if (contentType.includes("application/text")) { return request.text(); } else if (contentType.includes("text/html")) { return request.text(); } else if (contentType.includes("form")) { const formData = await request.formData(); const body: Record = {}; for (const [key, value] of formData.entries()) { body[key] = value.toString(); } return JSON.stringify(body); } else { // Perhaps some other type of data was submitted in the form // like an image, or some other binary data. return "a file"; } } const someForm = html`
<!-- form markup omitted -->
`; app.get("*", async (c) => { const url = c.req.url; if (url.includes("form")) { return c.html(someForm); } return c.text("The request was a GET"); }); app.post("*", async (c) => { const reqBody = await readRequestBody(c.req.raw); const retBody = `The request body sent in was ${reqBody}`; return c.text(retBody); }); export default app; ```
--- # Redirect URL: https://developers.cloudflare.com/workers/examples/redirect/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/redirect) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { Render, TabItem, Tabs } from "~/components"; ## Redirect all requests to one URL ```ts export default { async fetch(request): Promise { const destinationURL = "https://example.com"; const statusCode = 301; return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` ```py from workers import Response def on_fetch(request): destinationURL = "https://example.com" statusCode = 301 return Response.redirect(destinationURL, statusCode) ``` ```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let destination_url = Url::parse("https://example.com")?; let status_code = 301; Response::redirect_with_status(destination_url, status_code) } ``` ```ts import { Hono } from 'hono'; const app = new Hono(); app.all('*', (c) => { const destinationURL = "https://example.com"; const statusCode = 301; return c.redirect(destinationURL, statusCode); }); export default app; ``` ## Redirect requests from one domain to another ```js export default { async fetch(request) { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, }; ``` 
```ts export default { async fetch(request): Promise<Response> { const base = "https://example.com"; const statusCode = 301; const url = new URL(request.url); const { pathname, search } = url; const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return Response.redirect(destinationURL, statusCode); }, } satisfies ExportedHandler; ``` ```py from workers import Response from urllib.parse import urlparse async def on_fetch(request): base = "https://example.com" statusCode = 301 url = urlparse(request.url) destinationURL = f'{base}{url.path}?{url.query}' if url.query else f'{base}{url.path}' print(destinationURL) return Response.redirect(destinationURL, statusCode) ``` ```rs use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let mut base = Url::parse("https://example.com")?; let status_code = 301; let url = req.url()?; base.set_path(url.path()); base.set_query(url.query()); console_log!("{:?}", base.to_string()); Response::redirect_with_status(base, status_code) } ``` ```ts import { Hono } from 'hono'; const app = new Hono(); app.all('*', (c) => { const base = "https://example.com"; const statusCode = 301; const { pathname, search } = new URL(c.req.url); const destinationURL = `${base}${pathname}${search}`; console.log(destinationURL); return c.redirect(destinationURL, statusCode); }); export default app; ``` --- # Respond with another site URL: https://developers.cloudflare.com/workers/examples/respond-with-another-site/ If you want to get started quickly, click on the button below.
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/respond-with-another-site) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { Render, TabItem, Tabs } from "~/components"; ```ts export default { async fetch(request): Promise { async function MethodNotAllowed(request) { return new Response(`Method ${request.method} not allowed.`, { status: 405, headers: { Allow: "GET", }, }); } // Only GET requests work with this proxy. if (request.method !== "GET") return MethodNotAllowed(request); return fetch(`https://example.com`); }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch def on_fetch(request): def method_not_allowed(request): msg = f'Method {request.method} not allowed.' headers = {"Allow": "GET"} return Response(msg, headers=headers, status=405) # Only GET requests work with this proxy. if request.method != "GET": return method_not_allowed(request) return fetch("https://example.com") ``` --- # Return small HTML page URL: https://developers.cloudflare.com/workers/examples/return-html/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-html) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { Render, TabItem, Tabs } from "~/components"; ```ts export default { async fetch(request): Promise { const html = `
<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
`; return new Response(html, { headers: { "content-type": "text/html;charset=UTF-8", }, }); }, } satisfies ExportedHandler; ```
```py from workers import Response def on_fetch(request): html = """
<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
""" headers = {"content-type": "text/html;charset=UTF-8"} return Response(html, headers=headers) ```
```rs use worker::*; #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let html = r#"
<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker.</p>
</body>
"#; Response::from_html(html) } ```
```ts import { Hono } from "hono"; import { html } from "hono/html"; const app = new Hono(); app.get("*", (c) => { const doc = html`
<!DOCTYPE html>
<body>
  <h1>Hello World</h1>
  <p>This markup was generated by a Cloudflare Worker with Hono.</p>
</body>
`; return c.html(doc); }); export default app; ```
--- # Return JSON URL: https://developers.cloudflare.com/workers/examples/return-json/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/return-json) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { Render, TabItem, Tabs } from "~/components"; ```ts export default { async fetch(request): Promise { const data = { hello: "world", }; return Response.json(data); }, } satisfies ExportedHandler; ``` ```py from workers import Response import json def on_fetch(request): data = json.dumps({"hello": "world"}) headers = {"content-type": "application/json"} return Response(data, headers=headers) ``` ```rs use serde::{Deserialize, Serialize}; use worker::*; #[derive(Deserialize, Serialize, Debug)] struct Json { hello: String, } #[event(fetch)] async fn fetch(_req: Request, _env: Env, _ctx: Context) -> Result { let data = Json { hello: String::from("world"), }; Response::from_json(&data) } ``` ```ts import { Hono } from 'hono'; const app = new Hono(); app.get('*', (c) => { const data = { hello: "world", }; return c.json(data); }); export default app; ``` --- # Rewrite links URL: https://developers.cloudflare.com/workers/examples/rewrite-links/ If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/rewrite-links) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type") || ""; // If the response is HTML, it can be transformed with // HTMLRewriter -- otherwise, it should pass through if (contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, }; ``` ```ts export default { async fetch(request): Promise<Response> { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { constructor(attributeName) { this.attributeName = attributeName; } element(element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL), ); } } } const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); const res = await fetch(request); const contentType = res.headers.get("Content-Type") || ""; // If the response is HTML, it can be transformed with // HTMLRewriter -- otherwise, it should pass through if
(contentType.startsWith("text/html")) { return rewriter.transform(res); } else { return res; } }, } satisfies ExportedHandler; ``` ```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request): old_url = "developer.mozilla.org" new_url = "mynewdomain.com" class AttributeRewriter: def __init__(self, attr_name): self.attr_name = attr_name def element(self, element): attr = element.getAttribute(self.attr_name) if attr: element.setAttribute(self.attr_name, attr.replace(old_url, new_url)) href = create_proxy(AttributeRewriter("href")) src = create_proxy(AttributeRewriter("src")) rewriter = HTMLRewriter.new().on("a", href).on("img", src) res = await fetch(request) content_type = res.headers["Content-Type"] # If the response is HTML, it can be transformed with # HTMLRewriter -- otherwise, it should pass through if content_type.startswith("text/html"): return rewriter.transform(res) return res ``` ```ts import { Hono } from 'hono'; import { html } from 'hono/html'; const app = new Hono(); app.get('*', async (c) => { const OLD_URL = "developer.mozilla.org"; const NEW_URL = "mynewdomain.com"; class AttributeRewriter { attributeName: string; constructor(attributeName: string) { this.attributeName = attributeName; } element(element: Element) { const attribute = element.getAttribute(this.attributeName); if (attribute) { element.setAttribute( this.attributeName, attribute.replace(OLD_URL, NEW_URL) ); } } } // Make a fetch request using the original request const res = await fetch(c.req.raw); const contentType = res.headers.get("Content-Type") || ""; // If the response is HTML, transform it with HTMLRewriter if (contentType.startsWith("text/html")) { const rewriter = new HTMLRewriter() .on("a", new AttributeRewriter("href")) .on("img", new AttributeRewriter("src")); return new Response(rewriter.transform(res).body, { headers: res.headers }); } else { // Pass through the response as is return res; } }); export default app; ``` --- # Set 
security headers URL: https://developers.cloudflare.com/workers/examples/security-headers/ If you want to get started quickly, click on the button below. [![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/security-headers) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; ```js export default { async fetch(request) { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, }; ``` ```ts export default { async fetch(request): Promise { const DEFAULT_SECURITY_HEADERS = { /* Secure your application with Content-Security-Policy headers. Enabling these headers will permit content from a trusted domain and all its subdomains. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", */ /* You can also set Strict-Transport-Security headers. These are not automatically set because your website might get added to Chrome's HSTS preload list. Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", */ /* Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", */ /* X-XSS-Protection header prevents a page from loading if an XSS attack is detected. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection */ "X-XSS-Protection": "0", /* X-Frame-Options header prevents click-jacking attacks. @see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options */ "X-Frame-Options": "DENY", /* X-Content-Type-Options header prevents MIME-sniffing. 
@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options */ "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", }; const BLOCKED_HEADERS = [ "Public-Key-Pins", "X-Powered-By", "X-AspNet-Version", ]; let response = await fetch(request); let newHeaders = new Headers(response.headers); const tlsVersion = request.cf.tlsVersion; console.log(tlsVersion); // This sets the headers for HTML responses: if ( newHeaders.has("Content-Type") && !newHeaders.get("Content-Type").includes("text/html") ) { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } Object.keys(DEFAULT_SECURITY_HEADERS).map((name) => { newHeaders.set(name, DEFAULT_SECURITY_HEADERS[name]); }); BLOCKED_HEADERS.forEach((name) => { newHeaders.delete(name); }); if (tlsVersion !== "TLSv1.2" && tlsVersion !== "TLSv1.3") { return new Response("You need to use TLS version 1.2 or higher.", { status: 400, }); } else { return new Response(response.body, { status: response.status, statusText: response.statusText, headers: newHeaders, }); } }, } satisfies ExportedHandler; ``` ```py from workers import Response, fetch async def on_fetch(request): default_security_headers = { # Secure your application with Content-Security-Policy headers. #Enabling these headers will permit content from a trusted domain and all its subdomains. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy "Content-Security-Policy": "default-src 'self' example.com *.example.com", #You can also set Strict-Transport-Security headers. 
#These are not automatically set because your website might get added to Chrome's HSTS preload list. #Here's the code if you want to apply it: "Strict-Transport-Security" : "max-age=63072000; includeSubDomains; preload", #Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: "Permissions-Policy": "interest-cohort=()", #X-XSS-Protection header prevents a page from loading if an XSS attack is detected. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection "X-XSS-Protection": "0", #X-Frame-Options header prevents clickjacking attacks. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options "X-Frame-Options": "DENY", #X-Content-Type-Options header prevents MIME-sniffing. #@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options "X-Content-Type-Options": "nosniff", "Referrer-Policy": "strict-origin-when-cross-origin", "Cross-Origin-Embedder-Policy": 'require-corp; report-to="default";', "Cross-Origin-Opener-Policy": 'same-site; report-to="default";', "Cross-Origin-Resource-Policy": "same-site", } blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"] res = await fetch(request) new_headers = res.headers # Pass non-HTML responses through unchanged; the security headers only apply to HTML if "text/html" not in new_headers["Content-Type"]: return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers) for name in default_security_headers: new_headers[name] = default_security_headers[name] for name in blocked_headers: del new_headers[name] tls = request.cf.tlsVersion if tls not in ("TLSv1.2", "TLSv1.3"): return Response("You need to use TLS version 1.2 or higher.", status=400) return Response(res.body, status=res.status, statusText=res.statusText, headers=new_headers) ``` ```rs
use std::collections::HashMap; use worker::*; #[event(fetch)] async fn fetch(req: Request, _env: Env, _ctx: Context) -> Result<Response> { let default_security_headers = HashMap::from([ //Secure your application with Content-Security-Policy headers. //Enabling these headers will permit content from a trusted domain and all its subdomains. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Security-Policy ( "Content-Security-Policy", "default-src 'self' example.com *.example.com", ), //You can also set Strict-Transport-Security headers. //These are not automatically set because your website might get added to Chrome's HSTS preload list. //Here's the code if you want to apply it: ( "Strict-Transport-Security", "max-age=63072000; includeSubDomains; preload", ), //Permissions-Policy header provides the ability to allow or deny the use of browser features, such as opting out of FLoC - which you can use below: ("Permissions-Policy", "interest-cohort=()"), //X-XSS-Protection header prevents a page from loading if an XSS attack is detected. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-XSS-Protection ("X-XSS-Protection", "0"), //X-Frame-Options header prevents clickjacking attacks. //@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Frame-Options ("X-Frame-Options", "DENY"), //X-Content-Type-Options header prevents MIME-sniffing.
//@see https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Content-Type-Options ("X-Content-Type-Options", "nosniff"), ("Referrer-Policy", "strict-origin-when-cross-origin"), ( "Cross-Origin-Embedder-Policy", "require-corp; report-to='default';", ), ( "Cross-Origin-Opener-Policy", "same-site; report-to='default';", ), ("Cross-Origin-Resource-Policy", "same-site"), ]); let blocked_headers = ["Public-Key-Pins", "X-Powered-By", "X-AspNet-Version"]; let tls = req.cf().unwrap().tls_version(); let res = Fetch::Request(req).send().await?; let mut new_headers = res.headers().clone(); // This sets the headers for HTML responses if Some(String::from("text/html")) == new_headers.get("Content-Type")? { return Ok(Response::from_body(res.body().clone())? .with_headers(new_headers) .with_status(res.status_code())); } for (k, v) in default_security_headers { new_headers.set(k, v)?; } for k in blocked_headers { new_headers.delete(k)?; } if !vec!["TLSv1.2", "TLSv1.3"].contains(&tls.as_str()) { return Response::error("You need to use TLS version 1.2 or higher.", 400); } Ok(Response::from_body(res.body().clone())? .with_headers(new_headers) .with_status(res.status_code())) } ```` ```ts import { Hono } from 'hono'; import { secureHeaders } from 'hono/secure-headers'; const app = new Hono(); app.use(secureHeaders()); // Handle all other requests by passing through to origin app.all('*', async (c) => { return fetch(c.req.raw); }); export default app; ```` --- # Sign requests URL: https://developers.cloudflare.com/workers/examples/signing-requests/ If you want to get started quickly, click on the button below. 
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/docs-examples/tree/main/workers/signing-requests) This creates a repository in your GitHub account and deploys the application to Cloudflare Workers. import { TabItem, Tabs } from "~/components"; :::note This example Worker makes use of the [Node.js Buffer API](/workers/runtime-apis/nodejs/buffer/), which is available as part of the Worker's runtime [Node.js compatibility mode](/workers/runtime-apis/nodejs/). To run this Worker, you will need to [enable the `nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/#get-started). ::: You can both verify and generate signed requests from within a Worker using the [Web Crypto APIs](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle). The following Worker will: - For request URLs beginning with `/generate/`, replace `/generate/` with `/`, sign the resulting path with its timestamp, and return the full, signed URL in the response body. - For all other request URLs, verify the signed URL and allow the request through. ```js import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; export default { /** * * @param {Request} request * @param {{SECRET_DATA: string}} env * @returns */ async fetch(request, env) { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using Node.js APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. 
Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, }; ``` ```ts import { Buffer } from "node:buffer"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; interface Env { SECRET_DATA: string; } export default { async fetch(request, env): Promise<Response> { // You will need some secret data to use as a symmetric key. This should be // attached to your Worker as an encrypted secret. // Refer to https://developers.cloudflare.com/workers/configuration/secrets/ const secretKeyData = encoder.encode( env.SECRET_DATA ??
"my secret symmetric key", ); // Import your secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); const url = new URL(request.url); // This is a demonstration Worker that allows unauthenticated access to /generate // In a real application you would want to make sure that // users could only generate signed URLs when authenticated if (url.pathname.startsWith("/generate/")) { url.pathname = url.pathname.replace("/generate/", "/"); const timestamp = Math.floor(Date.now() / 1000); // This contains all the data about the request that you want to be able to verify // Here we only sign the timestamp and the pathname, but often you will want to // include more data (for instance, the URL hostname or query parameters) const dataToAuthenticate = `${url.pathname}${timestamp}`; const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Refer to https://developers.cloudflare.com/workers/runtime-apis/nodejs/ // for more details on using NodeJS APIs in Workers const base64Mac = Buffer.from(mac).toString("base64"); url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return new Response(`${url.pathname}${url.search}`); // Verify all non /generate requests } else { // Make sure you have the minimum necessary query parameters. if (!url.searchParams.has("verify")) { return new Response("Missing query parameter", { status: 403 }); } const [timestamp, hmac] = url.searchParams.get("verify").split("-"); const assertedTimestamp = Number(timestamp); const dataToAuthenticate = `${url.pathname}${assertedTimestamp}`; const receivedMac = Buffer.from(hmac, "base64"); // Use crypto.subtle.verify() to guard against timing attacks. 
Since HMACs use // symmetric keys, you could implement this by calling crypto.subtle.sign() and // then doing a string comparison -- this is insecure, as string comparisons // bail out on the first mismatch, which leaks information to potential // attackers. const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); if (!verified) { return new Response("Invalid MAC", { status: 403 }); } // Signed requests expire after one minute. Note that this value should depend on your specific use case if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return new Response( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, { status: 403 }, ); } } return fetch(new URL(url.pathname, "https://example.com"), request); }, } satisfies ExportedHandler; ``` ```ts import { Buffer } from "node:buffer"; import { Hono } from "hono"; import { proxy } from "hono/proxy"; const encoder = new TextEncoder(); // How long an HMAC token should be valid for, in seconds const EXPIRY = 60; interface Env { SECRET_DATA: string; } const app = new Hono(); // Handle URL generation requests app.get("/generate/*", async (c) => { const env = c.env; // You will need some secret data to use as a symmetric key const secretKeyData = encoder.encode( env.SECRET_DATA ?? 
"my secret symmetric key", ); // Import the secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); // Build the signed URL from the incoming request URL, replacing the "/generate/" prefix with "/" const url = new URL(c.req.url); const pathname = c.req.path.replace("/generate/", "/"); url.pathname = pathname; const timestamp = Math.floor(Date.now() / 1000); // Data to authenticate: pathname + timestamp const dataToAuthenticate = `${pathname}${timestamp}`; // Sign the data const mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(dataToAuthenticate), ); // Convert the signature to base64 const base64Mac = Buffer.from(mac).toString("base64"); // Add the verification parameter to the URL url.searchParams.set("verify", `${timestamp}-${base64Mac}`); return c.text(`${pathname}${url.search}`); }); // Handle verification for all other requests app.all("*", async (c) => { const env = c.env; // You will need some secret data to use as a symmetric key const secretKeyData = encoder.encode( env.SECRET_DATA ??
"my secret symmetric key", ); // Import the secret as a CryptoKey for both 'sign' and 'verify' operations const key = await crypto.subtle.importKey( "raw", secretKeyData, { name: "HMAC", hash: "SHA-256" }, false, ["sign", "verify"], ); // Make sure the request has the verification parameter if (!c.req.query("verify")) { return c.text("Missing query parameter", 403); } // Extract timestamp and signature const [timestamp, hmac] = c.req.query("verify")!.split("-"); const assertedTimestamp = Number(timestamp); // Recreate the data that should have been signed const dataToAuthenticate = `${c.req.path}${assertedTimestamp}`; // Convert base64 signature back to ArrayBuffer const receivedMac = Buffer.from(hmac, "base64"); // Verify the signature const verified = await crypto.subtle.verify( "HMAC", key, receivedMac, encoder.encode(dataToAuthenticate), ); // If verification fails, return 403 if (!verified) { return c.text("Invalid MAC", 403); } // Check if the signature has expired if (Date.now() / 1000 > assertedTimestamp + EXPIRY) { return c.text( `URL expired at ${new Date((assertedTimestamp + EXPIRY) * 1000)}`, 403, ); } // If verification passes, proxy the request to example.com return proxy(`https://example.com${c.req.path}`, { ...c.req }); }); export default app; ``` ```py from pyodide.ffi import to_js as _to_js from js import Response, URL, TextEncoder, Buffer, fetch, Object, crypto, Date def to_js(x): return _to_js(x, dict_converter=Object.fromEntries) encoder = TextEncoder.new() # How long an HMAC token should be valid for, in seconds EXPIRY = 60 async def on_fetch(request, env): # Get the secret key secret_key_data = encoder.encode(env.SECRET_DATA if hasattr(env, "SECRET_DATA") else "my secret symmetric key") # Import the secret as a CryptoKey for both 'sign' and 'verify' operations key = await crypto.subtle.importKey( "raw", secret_key_data, to_js({"name": "HMAC", "hash": "SHA-256"}), False, ["sign", "verify"] ) url = URL.new(request.url) if url.pathname.startswith("/generate/"): url.pathname = url.pathname.replace("/generate/", "/", 1) timestamp = int(Date.now() / 1000) # Data to authenticate data_to_authenticate = f"{url.pathname}{timestamp}" # Sign the data mac = await crypto.subtle.sign( "HMAC", key, encoder.encode(data_to_authenticate) ) # Convert to base64 base64_mac = Buffer.from(mac).toString("base64") # Set the verification parameter url.searchParams.set("verify", f"{timestamp}-{base64_mac}") return Response.new(f"{url.pathname}{url.search}") else: # Verify the request if not url.searchParams.has("verify"): return Response.new("Missing query parameter", status=403) verify_param = url.searchParams.get("verify") timestamp, hmac = verify_param.split("-") asserted_timestamp = int(timestamp) data_to_authenticate = f"{url.pathname}{asserted_timestamp}" received_mac = Buffer.from(hmac, "base64") # Verify the signature verified = await crypto.subtle.verify( "HMAC", key, received_mac, encoder.encode(data_to_authenticate) ) if not verified: return Response.new("Invalid MAC", status=403) # Check expiration if Date.now() / 1000 > asserted_timestamp + EXPIRY: expiry_date = Date.new((asserted_timestamp + EXPIRY) * 1000) return Response.new(f"URL expired at {expiry_date}", status=403) # Proxy to example.com if verification passes return fetch(URL.new(f"https://example.com{url.pathname}"), request) ``` ## Validate signed requests using the WAF The provided example code for signing requests is compatible with the [`is_timed_hmac_valid_v0()`](/ruleset-engine/rules-language/functions/#hmac-validation) Rules language function. This means that you can verify requests signed by the Worker script using a [WAF custom rule](/waf/custom-rules/use-cases/configure-token-authentication/#option-2-configure-using-waf-custom-rules).
--- # Turnstile with Workers URL: https://developers.cloudflare.com/workers/examples/turnstile-html-rewriter/ import { TabItem, Tabs, Render } from "~/components"; ```js export default { async fetch(request, env) { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in let res = await fetch(request); // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on("head", { element(element) { // In this case, you are using `append` to add the Turnstile script to the `head` element element.append( `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" defer></script>`, { html: true }, ); }, }) .on("div", { element(element) { // Add a Turnstile widget into the element whose id matches TURNSTILE_ATTR_NAME if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true }, ); } }, }) .transform(res); return newRes; }, }; ```
```ts export default { async fetch(request, env): Promise<Response> { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = "your_id_to_replace"; // The id of the element to put a Turnstile widget in let res = await fetch(request); // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on("head", { element(element) { // In this case, you are using `append` to add the Turnstile script to the `head` element element.append( `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" defer></script>`, { html: true }, ); }, }) .on("div", { element(element) { // Add a Turnstile widget into the element whose id matches TURNSTILE_ATTR_NAME if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true }, ); } }, }) .transform(res); return newRes; }, } satisfies ExportedHandler; ```
```ts import { Hono } from "hono"; interface Env { SITE_KEY: string; SECRET_KEY: string; TURNSTILE_ATTR_NAME?: string; } const app = new Hono<{ Bindings: Env }>(); // Middleware to inject Turnstile widget app.use("*", async (c, next) => { const SITE_KEY = c.env.SITE_KEY; // The Turnstile Sitekey from environment const TURNSTILE_ATTR_NAME = c.env.TURNSTILE_ATTR_NAME || "your_id_to_replace"; // The target element ID // Process the request through the original endpoint await next(); // Only process HTML responses const contentType = c.res.headers.get("content-type"); if (!contentType || !contentType.includes("text/html")) { return; } // Clone the response to make it modifiable const originalResponse = c.res; const responseBody = await originalResponse.text(); // Create an HTMLRewriter instance to modify the HTML const rewriter = new HTMLRewriter() // Add the Turnstile script to the head .on("head", { element(element) { element.append( `<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" defer></script>`, { html: true }, ); }, }) // Add the Turnstile widget to the target div .on("div", { element(element) { if (element.getAttribute("id") === TURNSTILE_ATTR_NAME) { element.append( `<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true }, ); } }, }); // Create a new response with the same properties as the original const modifiedResponse = new Response(responseBody, { status: originalResponse.status, statusText: originalResponse.statusText, headers: originalResponse.headers, }); // Transform the response using HTMLRewriter c.res = rewriter.transform(modifiedResponse); }); // Handle POST requests for form submission with Turnstile validation app.post("*", async (c) => { const formData = await c.req.formData(); const token = formData.get("cf-turnstile-response"); const ip = c.req.header("CF-Connecting-IP"); // If no token, return an error if (!token) { return c.text("Missing Turnstile token", 400); } // Prepare verification data const verifyFormData = new FormData(); verifyFormData.append("secret", c.env.SECRET_KEY || ""); verifyFormData.append("response", token.toString()); if (ip) verifyFormData.append("remoteip", ip); // Verify the token with Turnstile API const verifyResult = await fetch( "https://challenges.cloudflare.com/turnstile/v0/siteverify", { method: "POST", body: verifyFormData, }, ); const outcome = await verifyResult.json<{ success: boolean }>(); // If verification fails, return an error if (!outcome.success) { return c.text("The provided Turnstile token was not valid!", 401); } // If verification succeeds, proceed with the original request // You would typically handle the form submission logic here // For this example, we'll just send a success response return c.text("Form submission successful!"); }); // Default handler for GET requests app.get("*", async (c) => { // Fetch the original content (you'd replace this with your actual content source) return await fetch(c.req.raw); }); export default app; ```
```py from pyodide.ffi import create_proxy from js import HTMLRewriter, fetch async def on_fetch(request, env): site_key = env.SITE_KEY attr_name = env.TURNSTILE_ATTR_NAME res = await fetch(request) class Append: def element(self, element): s = '<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" defer></script>' element.append(s, {"html": True}) class AppendOnID: def __init__(self, name): self.name = name def element(self, element): # You are using the `getAttribute` method here to retrieve the `id` or `class` of an element if element.getAttribute("id") == self.name: div = f'<div class="cf-turnstile" data-sitekey="{site_key}"></div>' element.append(div, { "html": True }) # Instantiate the API to run on specific elements, for example, `head`, `div` head = create_proxy(Append()) div = create_proxy(AppendOnID(attr_name)) new_res = HTMLRewriter.new().on("head", head).on("div", div).transform(res) return new_res ```
:::note This is only half the implementation for Turnstile. The corresponding token that is a result of a widget being rendered also needs to be verified using the [siteverify API](/turnstile/get-started/server-side-validation/). Refer to the example below for one such implementation. ::: ```js async function handlePost(request, env) { const body = await request.formData(); // Turnstile injects a token in `cf-turnstile-response`. const token = body.get('cf-turnstile-response'); const ip = request.headers.get('CF-Connecting-IP'); // Validate the token by calling the `/siteverify` API. let formData = new FormData(); // `secret_key` here is the Turnstile Secret key, which should be set using Wrangler secrets formData.append('secret', env.SECRET_KEY); formData.append('response', token); formData.append('remoteip', ip); //This is optional. const url = 'https://challenges.cloudflare.com/turnstile/v0/siteverify'; const result = await fetch(url, { body: formData, method: 'POST', }); const outcome = await result.json(); if (!outcome.success) { return new Response('The provided Turnstile token was not valid!', { status: 401 }); } // The Turnstile token was successfully validated. Proceed with your application logic. // Validate login, redirect user, etc. 
// Clone the original request with a new body const newRequest = new Request(request, { body: request.body, // Reuse the body method: request.method, headers: request.headers }); return await fetch(newRequest); } export default { async fetch(request, env) { const SITE_KEY = env.SITE_KEY; // The Turnstile Sitekey of your widget (pass as env or secret) const TURNSTILE_ATTR_NAME = 'your_id_to_replace'; // The id of the element to put a Turnstile widget in if (request.method === 'POST') { return handlePost(request, env) } let res = await fetch(request) // Instantiate the API to run on specific elements, for example, `head`, `div` let newRes = new HTMLRewriter() // `.on` attaches the element handler and this allows you to match on element/attributes or to use the specific methods per the API .on('head', { element(element) { // In this case, you are using `append` to add the Turnstile script to the `head` element element.append(`<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" defer></script>`, { html: true }); }, }) .on('div', { element(element) { // You are using the `getAttribute` method here to retrieve the `id` or `class` of an element if (element.getAttribute('id') === TURNSTILE_ATTR_NAME) { element.append(`<div class="cf-turnstile" data-sitekey="${SITE_KEY}"></div>`, { html: true }); } }, }) .transform(res); return newRes } } ```
--- # Using the WebSockets API URL: https://developers.cloudflare.com/workers/examples/websockets/ import { TabItem, Tabs } from "~/components"; WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. In this guide, you will learn the basics of WebSockets on Cloudflare Workers, both from the perspective of writing WebSocket servers in your Workers functions, as well as connecting to and working with those WebSocket servers as a client. WebSockets are open connections sustained between the client and the origin server. Inside a WebSocket connection, the client and the origin can pass data back and forth without having to reestablish sessions. This makes exchanging data within a WebSocket connection fast. WebSockets are often used for real-time applications such as live chat and gaming. :::note WebSockets utilize an event-based system for receiving and sending messages, much like the Workers runtime model of responding to events. ::: :::note If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single-point-of-coordination. Durable Objects provide a single-point-of-coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. In this case, refer to [Durable Objects](/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](/durable-objects/best-practices/websockets/). ::: ## Write a WebSocket Server WebSocket servers in Cloudflare Workers allow you to receive messages from a client in real time. This guide will show you how to set up a WebSocket server in Workers. 
A client can make a WebSocket request in the browser by instantiating a new instance of `WebSocket`, passing in the URL for your Workers function: ```js // In client-side JavaScript, connect to your Workers function using WebSockets: const websocket = new WebSocket( "wss://example-websocket.signalnerve.workers.dev", ); ``` :::note For more details about creating and working with WebSockets in the client, refer to [Writing a WebSocket client](#write-a-websocket-client). ::: When an incoming WebSocket request reaches the Workers function, it will contain an `Upgrade` header, set to the string value `websocket`. Check for this header before continuing to instantiate a WebSocket: ```js async function handleRequest(request) { const upgradeHeader = request.headers.get('Upgrade'); if (!upgradeHeader || upgradeHeader !== 'websocket') { return new Response('Expected Upgrade: websocket', { status: 426 }); } } ``` ```rs use worker::*; #[event(fetch)] async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> { let upgrade_header = match req.headers().get("Upgrade") { Some(h) => h.to_str().unwrap(), None => "", }; if upgrade_header != "websocket" { return worker::Response::error("Expected Upgrade: websocket", 426); } } ``` After you have appropriately checked for the `Upgrade` header, you can create a new instance of `WebSocketPair`, which contains server and client WebSockets.
One of these WebSockets should be handled by the Workers function and the other should be returned as part of a `Response` with the [`101` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Status/101), indicating the request is switching protocols:

```js
async function handleRequest(request) {
	const upgradeHeader = request.headers.get('Upgrade');
	if (!upgradeHeader || upgradeHeader !== 'websocket') {
		return new Response('Expected Upgrade: websocket', { status: 426 });
	}

	const webSocketPair = new WebSocketPair();
	const client = webSocketPair[0],
		server = webSocketPair[1];

	return new Response(null, {
		status: 101,
		webSocket: client,
	});
}
```

```rs
use worker::*;

#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
	let upgrade_header = match req.headers().get("Upgrade") {
		Some(h) => h.to_str().unwrap(),
		None => "",
	};
	if upgrade_header != "websocket" {
		return worker::Response::error("Expected Upgrade: websocket", 426);
	}

	let ws = WebSocketPair::new()?;
	let client = ws.client;
	let server = ws.server;
	server.accept()?;

	worker::Response::from_websocket(client)
}
```

The `WebSocketPair` constructor returns an Object, with the `0` and `1` keys each holding a `WebSocket` instance as its value. It is common to grab the two WebSockets from this pair using [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Object/values) and [ES6 destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment), as seen in the below example.

In order to begin communicating with the `client` WebSocket in your Worker, call `accept` on the `server` WebSocket.
This will tell the Workers runtime that it should listen for WebSocket data and keep the connection open with your `client` WebSocket:

```js
async function handleRequest(request) {
	const upgradeHeader = request.headers.get('Upgrade');
	if (!upgradeHeader || upgradeHeader !== 'websocket') {
		return new Response('Expected Upgrade: websocket', { status: 426 });
	}

	const webSocketPair = new WebSocketPair();
	const [client, server] = Object.values(webSocketPair);
	server.accept();

	return new Response(null, {
		status: 101,
		webSocket: client,
	});
}
```

```rs
use worker::*;

#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
	let upgrade_header = match req.headers().get("Upgrade") {
		Some(h) => h.to_str().unwrap(),
		None => "",
	};
	if upgrade_header != "websocket" {
		return worker::Response::error("Expected Upgrade: websocket", 426);
	}

	let ws = WebSocketPair::new()?;
	let client = ws.client;
	let server = ws.server;
	server.accept()?;

	worker::Response::from_websocket(client)
}
```

WebSockets emit a number of [Events](/workers/runtime-apis/websockets/#events) that can be connected to using `addEventListener`.
The below example hooks into the `message` event and emits a `console.log` with the data from it:

```js
async function handleRequest(request) {
	const upgradeHeader = request.headers.get('Upgrade');
	if (!upgradeHeader || upgradeHeader !== 'websocket') {
		return new Response('Expected Upgrade: websocket', { status: 426 });
	}

	const webSocketPair = new WebSocketPair();
	const [client, server] = Object.values(webSocketPair);
	server.accept();

	server.addEventListener('message', event => {
		console.log(event.data);
	});

	return new Response(null, {
		status: 101,
		webSocket: client,
	});
}
```

```rs
use futures::StreamExt;
use worker::*;

#[event(fetch)]
async fn fetch(req: HttpRequest, _env: Env, _ctx: Context) -> Result<worker::Response> {
	let upgrade_header = match req.headers().get("Upgrade") {
		Some(h) => h.to_str().unwrap(),
		None => "",
	};
	if upgrade_header != "websocket" {
		return worker::Response::error("Expected Upgrade: websocket", 426);
	}

	let ws = WebSocketPair::new()?;
	let client = ws.client;
	let server = ws.server;
	server.accept()?;

	wasm_bindgen_futures::spawn_local(async move {
		let mut event_stream = server.events().expect("could not open stream");
		while let Some(event) = event_stream.next().await {
			match event.expect("received error in websocket") {
				WebsocketEvent::Message(msg) => server.send(&msg.text()).unwrap(),
				WebsocketEvent::Close(event) => console_log!("{:?}", event),
			}
		}
	});

	worker::Response::from_websocket(client)
}
```

```ts
import { Hono } from 'hono';
import { upgradeWebSocket } from 'hono/cloudflare-workers';

const app = new Hono();

app.get(
	'*',
	upgradeWebSocket((c) => {
		return {
			onMessage(event, ws) {
				console.log('Received message from client:', event.data);
				ws.send(`Echo: ${event.data}`);
			},
			onClose(event, ws) {
				console.log('WebSocket closed:', event);
			},
			onError(event, ws) {
				console.error('WebSocket error:', event);
			},
		};
	})
);

export default app;
```

### Connect to the WebSocket server from a client

Writing WebSocket clients that communicate with your Workers function is a two-step
process: first, create the WebSocket instance, and then attach event listeners to it:

```js
const websocket = new WebSocket(
	"wss://websocket-example.signalnerve.workers.dev",
);

websocket.addEventListener("message", (event) => {
	console.log("Message received from server");
	console.log(event.data);
});
```

WebSocket clients can send messages back to the server using the [`send`](/workers/runtime-apis/websockets/#send) function:

```js
websocket.send("MESSAGE");
```

When the WebSocket interaction is complete, the client can close the connection using [`close`](/workers/runtime-apis/websockets/#close):

```js
websocket.close();
```

For an example of this in practice, refer to the [`websocket-template`](https://github.com/cloudflare/websocket-template) to get started with WebSockets.

## Write a WebSocket client

Cloudflare Workers supports the `new WebSocket(url)` constructor. A Worker can establish a WebSocket connection to a remote server in the same manner as the client implementation described above.

Additionally, Cloudflare supports establishing WebSocket connections by making a fetch request to a URL with the `Upgrade` header set:

```js
async function websocket(url) {
	// Make a fetch request including `Upgrade: websocket` header.
	// The Workers Runtime will automatically handle other requirements
	// of the WebSocket protocol, like the Sec-WebSocket-Key header.
	let resp = await fetch(url, {
		headers: {
			Upgrade: "websocket",
		},
	});

	// If the WebSocket handshake completed successfully, then the
	// response has a `webSocket` property.
	let ws = resp.webSocket;
	if (!ws) {
		throw new Error("server didn't accept WebSocket");
	}

	// Call accept() to indicate that you'll be handling the socket here
	// in JavaScript, as opposed to returning it on to a client.
	ws.accept();

	// Now you can send and receive messages like before.
	ws.send("hello");
	ws.addEventListener("message", (msg) => {
		console.log(msg.data);
	});
}
```

## WebSocket compression

Cloudflare Workers supports WebSocket compression. Refer to [WebSocket Compression](/workers/configuration/compatibility-flags/#websocket-compression) for more information.

---

# Supported bindings in local and remote dev

URL: https://developers.cloudflare.com/workers/local-development/bindings-per-env/

import { Render } from "~/components";

---

# Environment variables and secrets

URL: https://developers.cloudflare.com/workers/local-development/environment-variables/

import { Aside, PackageManagers, Steps } from "~/components";

During local development, you may need to configure **environment variables** (such as API URLs, feature flags) and **secrets** (API tokens, private keys). You can use a `.dev.vars` file in the root of your project to override environment variables for local development, and both [Wrangler](/workers/configuration/environment-variables/#compare-secrets-and-environment-variables) and the [Vite plugin](/workers/vite-plugin/reference/secrets/) will respect this override.

### Why use a `.dev.vars` file?

Use `.dev.vars` to set local overrides for environment variables that should not be checked into your repository.

If you want to manage environment-based configuration that you **want checked into your repository** (for example, non-sensitive or shared environment defaults), you can define [environment variables as `[vars]`](/workers/wrangler/environments/#_top) in your Wrangler configuration. Using a `.dev.vars` file is specifically for local-only secrets or configuration that you do not want in version control and only want to inject in local dev sessions.

## Basic setup

1. Create a `.dev.vars` file in your project root.
2. Add key-value pairs:

   ```ini title=".dev.vars"
   API_HOST="localhost:3000"
   DEBUG="true"
   SECRET_TOKEN="my-local-secret-token"
   ```

3. Run your `dev` command.

## Multiple local environments with `.dev.vars`

To simulate different local environments, you can:

1. Create a file named `.dev.vars.<environment-name>`. For example, we'll use `.dev.vars.staging`.
2. Add key-value pairs:

   ```ini title=".dev.vars.staging"
   API_HOST="staging.localhost:3000"
   DEBUG="false"
   SECRET_TOKEN="staging-token"
   ```

3. Specify the environment when running the `dev` command.

Only the values from `.dev.vars.staging` will be applied instead of `.dev.vars`.

## Learn more

- To learn how to configure multiple environments in Wrangler configuration, [read the documentation](/workers/wrangler/environments/#_top).
- To learn how to use Wrangler environments and Vite environments together, [read the Vite plugin documentation](/workers/vite-plugin/reference/cloudflare-environments/).

---

# Local development

URL: https://developers.cloudflare.com/workers/local-development/

import { Details, LinkCard, Render, PackageManagers } from "~/components";

When building projects on Cloudflare Workers, you have two options for local development:

- [**Wrangler**](/workers/wrangler/), using the built-in [`wrangler dev`](/workers/wrangler/commands/#dev) command.
- [Vite](https://vite.dev/), using the [**Cloudflare Vite plugin**](/workers/vite-plugin/).

Both Wrangler and the Vite plugin use [Miniflare](/workers/testing/miniflare/) to provide an accurate **local** simulation of the Cloudflare Workers runtime ([`workerd`](https://github.com/cloudflare/workerd)). If you need to [develop with **remote resources**](/workers/local-development/remote-data/), Wrangler is the only option, and provides remote development via the `wrangler dev --remote` command.

## Choosing between Wrangler or Vite

Deciding between Wrangler and the Cloudflare Vite plugin depends on your project's focus and development workflow.
Here are some quick guidelines to help you choose:

### When to use Wrangler

- **Backend & Workers-focused:** If you're primarily building APIs, serverless functions, or background tasks, use Wrangler.
- **Remote development:** If your project needs the ability to develop and test using production resources and data on Cloudflare's network, use Wrangler's `--remote` flag.
- **Simple frontends:** If you have minimal frontend requirements and don't need hot reloading or advanced bundling, Wrangler may be sufficient.

### When to use the Cloudflare Vite Plugin

Use the [Vite plugin](/workers/vite-plugin/) for:

- **Frontend-centric development:** If you already use Vite with modern frontend frameworks like React, Vue, Svelte, or Solid, the Vite plugin integrates into your development workflow.
- **React Router v7:** If you are using [React Router v7](https://reactrouter.com/) (the successor to Remix), it is officially supported by the Vite plugin as a full-stack SSR framework.
- **Rapid iteration (HMR):** If you need near-instant updates in the browser, the Vite plugin provides [Hot Module Replacement (HMR)](https://vite.dev/guide/features.html#hot-module-replacement) during local development.
- **Advanced optimizations:** If you require more advanced optimizations (code splitting, efficient bundling, CSS handling, build time transformations, etc.), Vite is a strong fit.
- **Greater flexibility:** Due to Vite's advanced configuration options and large ecosystem of plugins, there is more flexibility to customize your development experience and build output.
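If you choose the Vite plugin, enabling it is a small configuration change. As a minimal sketch (assuming `vite` and `@cloudflare/vite-plugin` are installed, and your Worker entry point is defined in your Wrangler configuration):

```javascript
// vite.config.js — minimal sketch; the plugin reads the rest of your
// Worker settings from your Wrangler configuration file.
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
	plugins: [cloudflare()],
});
```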

---

# Local data

URL: https://developers.cloudflare.com/workers/local-development/local-data/

import {
	Details,
	LinkCard,
	Render,
	PackageManagers,
	FileTree,
	Aside,
} from "~/components";

Whether you are using Wrangler or the [Cloudflare Vite plugin](https://developers.cloudflare.com/workers/vite-plugin/), your workflow for **accessing** data during local development remains the same. However, you can only [populate local resources with data](/workers/local-development/local-data/#populating-local-resources-with-data) via the Wrangler CLI.

### How it works

When you run either `wrangler dev` or [`vite`](https://vite.dev/guide/cli#dev-server), [Miniflare](/workers/testing/miniflare/) automatically creates **local versions** of your resources (like [KV](/kv), [D1](/d1/), or [R2](/r2)). This means you **don't** need to manually set up separate local instances for each service.

However, newly created local resources **won't** contain any data — you'll need to use Wrangler commands with the `--local` flag to populate them. Changes made to local resources won't affect production data.

## Populating local resources with data

When you first start developing, your local resources will be empty. You'll need to populate them with data using the Wrangler CLI.

### KV namespaces

#### [Add a single key-value pair](/workers/wrangler/commands/#kv-key)

#### [Bulk upload](/workers/wrangler/commands/#kv-bulk)

### R2 buckets

#### [Upload a file](/workers/wrangler/commands/#r2-object)

You may also include [other metadata](/workers/wrangler/commands/#r2-object-put).

### D1 databases

#### [Execute a SQL statement](/workers/wrangler/commands/#d1-execute)

#### [Execute a SQL file](/workers/wrangler/commands/#d1-execute)

### Durable Objects

For Durable Objects, unlike KV, D1, and R2, there are no CLI commands to populate them with local data.
To add data to Durable Objects during local development, you must write application code that creates Durable Object instances and [calls methods on them that store state](/durable-objects/best-practices/access-durable-objects-storage/). This typically involves creating development endpoints or test routes that initialize your Durable Objects with the desired data.

## Where local data gets stored

By default, both Wrangler and the Vite plugin store local binding data in the same location: the `.wrangler/state` folder in your project directory. This folder stores data in subdirectories for all local bindings: KV namespaces, R2 buckets, D1 databases, Durable Objects, etc.

### Clearing local storage

You can delete the `.wrangler/state` folder at any time to reset your local environment, and Miniflare will recreate it the next time you run your `dev` command. You can also delete specific sub-folders within `.wrangler/state` for more targeted clean-up.

### Changing the local data directory

If you prefer to specify a different directory for local storage, you can do so through the Wrangler CLI or in the Vite plugin's configuration.

#### Using Wrangler

Use the [`--persist-to`](/workers/wrangler/commands/#dev) flag with `wrangler dev`. You need to specify this flag every time you run the `dev` command:

:::note
The local persistence folder (like `.wrangler/state` or any custom folder you set) should be added to your `.gitignore` to avoid committing local development data to version control.
:::
If you run `wrangler dev --persist-to <path>` to specify a custom location for local data, you must also include the same `--persist-to <path>` when running other Wrangler commands that modify local data (and be sure to include the `--local` flag). For example, to create a KV key named `test` with a value of `12345` in a local KV namespace, run a `wrangler kv key put` command with the same flags. This command:

- Sets the KV key `test` to `12345` in the binding `MY_KV_NAMESPACE` (defined in your [Wrangler configuration file](/workers/wrangler/configuration/)).
- Uses `--persist-to worker-local` to ensure the data is created in the **worker-local** directory instead of the default `.wrangler/state`.
- Adds the `--local` flag, indicating you want to modify local data.

If `--persist-to` is not specified, Wrangler defaults to using `.wrangler/state` for local data.
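As a sketch, the command described by the list above might look like the following — the binding name `MY_KV_NAMESPACE` and the `worker-local` directory are illustrative, and exact flags can vary by Wrangler version:

```shell
# Write a key into local KV state stored under ./worker-local
# instead of the default .wrangler/state directory.
npx wrangler kv key put test 12345 --binding MY_KV_NAMESPACE --local --persist-to worker-local
```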

#### Using the Cloudflare Vite plugin

To customize where the Vite plugin stores local data, configure the [`persistState` option](/workers/vite-plugin/reference/api/#interface-pluginconfig) in your Vite config file:

```js title="vite.config.js"
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
	plugins: [
		cloudflare({
			persistState: "./my-custom-directory",
		}),
	],
});
```

#### Sharing state between tools

If you want Wrangler and the Vite plugin to share the same state, configure them to use the same persistence path.

---

# Remote data

URL: https://developers.cloudflare.com/workers/local-development/remote-data/

import {
	Details,
	LinkCard,
	Render,
	PackageManagers,
	FileTree,
} from "~/components";

When developing Workers applications, you can use Wrangler's remote development mode (via [`wrangler dev --remote`](/workers/wrangler/commands/#dev)) to test your code on Cloudflare's global network before deploying to production. Remote development is [**not** supported in the Vite plugin](/workers/local-development/#choosing-between-wrangler-or-vite).

### How It Works

The `wrangler dev --remote` command creates a temporary preview deployment on Cloudflare's infrastructure, allowing you to test your Worker in an environment that closely mirrors production.

When you run `wrangler dev --remote`:

- Your code is uploaded to a temporary preview environment on Cloudflare's infrastructure.
- Changes to your code are automatically uploaded as you save.
- All requests and execution happen on Cloudflare's global network.
- The preview automatically terminates when you exit the command.

## When to Use Remote Development

- You need to develop using [bindings that don't work locally](/workers/local-development/bindings-per-env/) (such as [Browser Rendering](/browser-rendering/)).
- You need to verify behavior specifically on Cloudflare's infrastructure.
- You want to work with preview resources that mirror production.

## Isolating from Production

To protect production data, you can specify preview resources in your [Wrangler configuration](/workers/wrangler/configuration/), such as:

- [Preview namespaces for KV stores](/workers/wrangler/configuration/#kv-namespaces): `preview_id`.
  - This option is **required** when using `wrangler dev --remote`.
- [Preview buckets for R2 storage](/workers/wrangler/configuration/#r2-buckets): `preview_bucket_name`.
- [Preview database IDs for D1](/workers/wrangler/configuration/#d1-databases): `preview_database_id`.

This separation ensures your development activities don't impact production data while still providing a realistic testing environment.

## Limitations

- When you run a remote development session using the `--remote` flag, a limit of 50 [routes](/workers/configuration/routing/routes/) per zone is enforced. Learn more in [Workers platform limits](/workers/platform/limits/#number-of-routes-per-zone-when-using-wrangler-dev---remote).

---

# Observability

URL: https://developers.cloudflare.com/workers/observability/

import { Badge, DirectoryListing } from "~/components";

Understand how your Worker projects are performing via logs, traces, and other data sources.

---

# Errors and exceptions

URL: https://developers.cloudflare.com/workers/observability/errors/

import { TabItem, Tabs } from "~/components";

Review Workers errors and exceptions.

## Error pages generated by Workers

When a Worker running in production has an error that prevents it from returning a response, the client will receive an error page with an error code, defined as follows:

| Error code | Meaning |
| ---------- | ------- |
| `1101` | Worker threw a JavaScript exception. |
| `1102` | Worker exceeded [CPU time limit](/workers/platform/limits/#cpu-time). |
| `1103` | The owner of this Worker needs to contact [Cloudflare Support](/support/contacting-cloudflare-support/). |
| `1015` | Worker hit the [burst rate limit](/workers/platform/limits/#burst-rate). |
| `1019` | Worker hit [loop limit](#loop-limit). |
| `1021` | Worker has requested a host it cannot access. |
| `1022` | Cloudflare has failed to route the request to the Worker. |
| `1024` | Worker cannot make a subrequest to a Cloudflare-owned IP address. |
| `1027` | Worker exceeded free tier [daily request limit](/workers/platform/limits/#daily-request). |
| `1042` | Worker tried to fetch from another Worker on the same zone, which is only [supported](/workers/runtime-apis/fetch/) when the [`global_fetch_strictly_public` compatibility flag](/workers/configuration/compatibility-flags/#global-fetch-strictly-public) is used. |

Other `11xx` errors generally indicate a problem with the Workers runtime itself. Refer to the [status page](https://www.cloudflarestatus.com) if you are experiencing an error.

### Loop limit

A Worker cannot call itself or another Worker more than 16 times. In order to prevent infinite loops between Workers, the [`CF-EW-Via`](/fundamentals/reference/http-headers/#cf-ew-via) header's value is an integer that indicates how many invocations are left. Every time a Worker is invoked, the integer will decrement by 1. If the count reaches zero, a [`1019`](#error-pages-generated-by-workers) error is returned.

### "The script will never generate a response" errors

Some requests may return a 1101 error with `The script will never generate a response` in the error message. This occurs when the Workers runtime detects that all the code associated with the request has executed and no events are left in the event loop, but a Response has not been returned.

#### Cause 1: Unresolved Promises

This is most commonly caused by relying on a Promise that is never resolved or rejected, which is required to return a Response. To debug, look for Promises within your code or dependencies' code that block a Response, and ensure they are resolved or rejected.

In browsers and other JavaScript runtimes, equivalent code will hang indefinitely, leading to both bugs and memory leaks. The Workers runtime throws an explicit error to help you debug.

In the example below, the Response relies on a Promise resolution that never happens. Uncommenting the `resolve` callback solves the issue.

```js null {9}
export default {
	fetch(req) {
		let response = new Response("Example response");
		let { promise, resolve } = Promise.withResolvers();

		// If the promise is not resolved, the Workers runtime will
		// recognize this and throw an error.
		// setTimeout(resolve, 0)

		return promise.then(() => response);
	},
};
```

You can prevent this by enforcing the [`no-floating-promises` eslint rule](https://typescript-eslint.io/rules/no-floating-promises/), which reports when a Promise is created and not properly handled.

#### Cause 2: WebSocket connections that are never closed

If a WebSocket is missing the proper code to close its server-side connection, the Workers runtime will throw a `script will never generate a response` error. In the example below, the `'close'` event from the client is not properly handled by calling `server.close()`, and the error is thrown. In order to avoid this, ensure that the WebSocket's server-side connection is properly closed via an event listener or other server-side logic.

```js null {10}
async function handleRequest(request) {
	let webSocketPair = new WebSocketPair();
	let [client, server] = Object.values(webSocketPair);

	server.accept();
	server.addEventListener("close", () => {
		// This missing line would keep a WebSocket connection open indefinitely
		// and results in "The script will never generate a response" errors
		// server.close();
	});

	return new Response(null, {
		status: 101,
		webSocket: client,
	});
}
```

### "Illegal invocation" errors

The error message `TypeError: Illegal invocation: function called with incorrect this reference` can be a source of confusion. It is typically caused by calling a function that uses `this` after the value of `this` has been lost.

For example, given an `obj` object with an `obj.foo()` method whose logic relies on `this`, executing the method via `obj.foo();` will make sure that `this` properly references the `obj` object. However, assigning the method to a variable, e.g. `const func = obj.foo;`, and calling that variable, e.g. `func();`, would result in `this` being `undefined`. This is because `this` is lost when the method is called as a standalone function. This is standard behavior in JavaScript.

In practice, this is often seen when destructuring runtime-provided JavaScript objects that have functions that rely on the presence of `this`, such as `ctx`. The following code will error:

```js
export default {
	async fetch(request, env, ctx) {
		// destructuring ctx makes waitUntil lose its 'this' reference
		const { waitUntil } = ctx;
		// waitUntil errors, as it has no 'this'
		waitUntil(somePromise);

		return fetch(request);
	},
};
```

Avoid destructuring, or re-bind the function to the original context to avoid the error.
The following code will run properly:

```js
export default {
	async fetch(request, env, ctx) {
		// directly calling the method on ctx avoids the error
		ctx.waitUntil(somePromise);

		// alternatively, re-binding to ctx via apply, call, or bind avoids the error
		const { waitUntil } = ctx;
		waitUntil.apply(ctx, [somePromise]);
		waitUntil.call(ctx, somePromise);

		const reboundWaitUntil = waitUntil.bind(ctx);
		reboundWaitUntil(somePromise);

		return fetch(request);
	},
};
```

### Cannot perform I/O on behalf of a different request

```
Uncaught (in promise) Error: Cannot perform I/O on behalf of a different request. I/O objects (such as streams, request/response bodies, and others) created in the context of one request handler cannot be accessed from a different request's handler.
```

This error occurs when you attempt to share input/output (I/O) objects (such as streams, requests, or responses) created by one invocation of your Worker in the context of a different invocation.

In Cloudflare Workers, each invocation is handled independently and has its own execution context. This design ensures optimal performance and security by isolating requests from one another. When you try to share I/O objects between different invocations, you break this isolation. Since these objects are tied to the specific request they were created in, accessing them from another request's handler is not allowed and leads to the error.

This error is most commonly caused by attempting to cache an I/O object, like a [Request](/workers/runtime-apis/request/) in global scope, and then access it in a subsequent request.
For example, if you create a Worker and run the following code in local development, and make two requests to your Worker in quick succession, you can reproduce this error:

```js
let cachedResponse = null;

export default {
	async fetch(request, env, ctx) {
		if (cachedResponse) {
			return cachedResponse;
		}

		cachedResponse = new Response("Hello, world!");
		await new Promise((resolve) => setTimeout(resolve, 5000)); // Sleep for 5s to demonstrate this particular error case
		return cachedResponse;
	},
};
```

You can fix this by instead storing only the data in global scope, rather than the I/O object itself:

```js
let cachedData = null;

export default {
	async fetch(request, env, ctx) {
		if (cachedData) {
			return new Response(cachedData);
		}

		const response = new Response("Hello, world!");
		cachedData = await response.text();
		return new Response(cachedData, response);
	},
};
```

If you need to share state across requests, consider using [Durable Objects](/durable-objects/). If you need to cache data across requests, consider using [Workers KV](/kv/).

## Errors on Worker upload

These errors occur when a Worker is uploaded or modified.

| Error code | Meaning |
| ---------- | ------- |
| `10006` | Could not parse your Worker's code. |
| `10007` | Worker or [workers.dev subdomain](/workers/configuration/routing/workers-dev/) not found. |
| `10015` | Account is not entitled to use Workers. |
| `10016` | Invalid Worker name. |
| `10021` | Validation Error. Refer to [Validation Errors](/workers/observability/errors/#validation-errors-10021) for details. |
| `10026` | Could not parse request body. |
| `10027` | Your Worker exceeded the size limit of XX MB (for more details, see [Worker size limits](/workers/platform/limits/#worker-size)). |
| `10035` | Multiple attempts to modify a resource at the same time. |
| `10037` | An account has exceeded the number of [Workers allowed](/workers/platform/limits/#number-of-workers). |
| `10052` | A [binding](/workers/runtime-apis/bindings/) is uploaded without a name. |
| `10054` | An environment variable or secret exceeds the [size limit](/workers/platform/limits/#environment-variables). |
| `10055` | The number of environment variables or secrets exceeds the [limit per Worker](/workers/platform/limits/#environment-variables). |
| `10056` | [Binding](/workers/runtime-apis/bindings/) not found. |
| `10068` | The uploaded Worker has no registered [event handlers](/workers/runtime-apis/handlers/). |
| `10069` | The uploaded Worker contains [event handlers](/workers/runtime-apis/handlers/) unsupported by the Workers runtime. |

### Validation Errors (10021)

The 10021 error code includes all errors that occur when you attempt to deploy a Worker, and Cloudflare then attempts to load and run the top-level scope (everything that happens before your Worker's [handler](/workers/runtime-apis/handlers/) is invoked). For example, if you attempt to deploy a broken Worker with invalid JavaScript that would throw a `SyntaxError` — Cloudflare will not deploy your Worker.

Specific error cases include but are not limited to:

#### Worker exceeded the upload size limit

A Worker can be up to 10 MB in size after compression on the Workers Paid plan, and up to 3 MB on the Workers Free plan.

To reduce the upload size of a Worker, you should consider removing unnecessary dependencies and/or using Workers KV, a D1 database or R2 to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code.
Another method to reduce a Worker's file size is to split its functionality across multiple Workers and connect them using [Service bindings](/workers/runtime-apis/bindings/service-bindings/).

#### Script startup exceeded CPU time limit

This means that you are doing work in the top-level scope of your Worker that takes [more than the startup time limit (400ms)](/workers/platform/limits/#worker-startup-time) of CPU time. This is usually a sign of a bug and/or large performance problem with your code or a dependency you rely on. It's not typical to use more than 400ms of CPU time when your app starts. The more time your Worker's code spends parsing and executing top-level scope, the slower your Worker will be when you deploy a code change or a new [isolate](/workers/reference/how-workers-works/) is created.

This error is most commonly caused by attempting to perform expensive initialization work directly in top-level (global) scope, rather than either at build time or when your Worker's handler is invoked. For example, attempting to initialize an app by generating or consuming a large schema.

To analyze what is consuming so much CPU time, you should open Chrome DevTools for your Worker and look at the Profiling and/or Performance panels to understand where time is being spent. Is there something glaring that consumes tons of CPU time, especially the first time you make a request to your Worker?

## Runtime errors

Runtime errors occur within the runtime, do not produce an error page, and are not visible to the end user. Runtime errors are detected by the user through logs.

| Error message | Meaning |
| ------------- | ------- |
| `Network connection lost` | Connection failure. Catch a `fetch` or binding invocation and retry it. |
| `Memory limit would be exceeded before EOF` | Trying to read a stream or buffer that would take you over the [memory limit](/workers/platform/limits/#memory). |
| `daemonDown` | A temporary problem invoking the Worker. |

## Identify errors: Workers Metrics

To review whether your application is experiencing any downtime or returning any errors:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. In **Account Home**, select **Workers & Pages**.
3. In **Overview**, select your Worker and review your Worker's metrics.

### Worker Errors

The **Errors by invocation status** chart shows the number of errors broken down into the following categories:

| Error | Meaning |
| ----- | ------- |
| `Uncaught Exception` | Your Worker code threw a JavaScript exception during execution. |
| `Exceeded CPU Time Limits` | Worker exceeded CPU time limit or other resource constraints. |
| `Exceeded Memory` | Worker exceeded the memory limit during execution. |
| `Internal` | An internal error occurred in the Workers runtime. |

The **Client disconnected by type** chart shows the number of client disconnect errors broken down into the following categories:

| Client Disconnects | Meaning |
| ------------------ | ------- |
| `Response Stream Disconnected` | Connection was terminated during the deferred proxying stage of a Worker request flow. It commonly appears for longer lived connections such as [WebSockets](/workers/runtime-apis/websockets/). |
| `Cancelled` | The Client disconnected before the Worker completed its response. |

## Debug exceptions with Workers Logs

[Workers Logs](/workers/observability/logs/workers-logs) is a powerful tool for debugging your Workers.
It shows all the historic logs generated by your Worker, including any uncaught exceptions that occur during execution.

To find all your errors in Workers Logs, you can use the following filter: `$metadata.error EXISTS`. This will show all the logs that have an error associated with them.

You can also filter by `$workers.outcome` to find the requests that resulted in an error. For example, you can filter by `$workers.outcome = "exception"` to find all the requests that resulted in an uncaught exception. All the possible outcome values can be found in the [Workers Trace Event](/logs/reference/log-fields/account/workers_trace_events/#outcome) reference.

## Debug exceptions from `Wrangler`

To debug your Worker with Wrangler, use [`wrangler tail`](/workers/wrangler/commands/#tail) to inspect exceptions as they occur. Exceptions will show up under the `exceptions` field in the JSON returned by `wrangler tail`. After you have identified the exception that is causing errors, redeploy your code with a fix, and continue tailing the logs to confirm that it is fixed.

## Set up a 3rd party logging service

A Worker can make HTTP requests to any HTTP service on the public Internet. You can use a service like [Sentry](https://sentry.io) to collect error logs from your Worker, by making an HTTP request to the service to report the error. Refer to your service's API documentation for details on what kind of request to make.

When using an external logging strategy, remember that outstanding asynchronous tasks are canceled as soon as a Worker finishes sending its main response body to the client. To ensure that a logging subrequest completes, pass the request promise to [`event.waitUntil()`](https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil).
For example:

```js
export default {
	async fetch(request, env, ctx) {
		function postLog(data) {
			return fetch("https://log-service.example.com/", {
				method: "POST",
				body: data,
			});
		}
		// `stack` stands in for whatever you want to log, such as an error's stack trace.
		const stack = new Error("example").stack;
		// Without ctx.waitUntil(), the `postLog` function may or may not complete.
		ctx.waitUntil(postLog(stack));
		return fetch(request);
	},
};
```

```js
addEventListener("fetch", (event) => {
	event.respondWith(handleEvent(event));
});

async function handleEvent(event) {
	// ...
	// `stack` stands in for whatever you want to log, such as an error's stack trace.
	const stack = new Error("example").stack;
	// Without event.waitUntil(), the `postLog` function may or may not complete.
	event.waitUntil(postLog(stack));
	return fetch(event.request);
}

function postLog(data) {
	return fetch("https://log-service.example.com/", {
		method: "POST",
		body: data,
	});
}
```

## Go to origin on error

By using [`event.passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception), a Workers application will forward requests to your origin if an exception is thrown during the Worker's execution. This allows you to add logging, tracking, or other features with Workers without degrading your application's functionality.

```js
export default {
	async fetch(request, env, ctx) {
		ctx.passThroughOnException();
		// An error here will return the origin response, as if the Worker weren't present.
		return fetch(request);
	},
};
```

```js
addEventListener("fetch", (event) => {
	event.passThroughOnException();
	event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
	// An error here will return the origin response, as if the Worker weren't present.
	// ...
	return fetch(request);
}
```

## Related resources

- [Log from Workers](/workers/observability/logs/) - Learn how to log your Workers.
- [Logpush](/workers/observability/logs/logpush/) - Learn how to push Workers Trace Event Logs to supported destinations.
- [RPC error handling](/workers/runtime-apis/rpc/error-handling/) - Learn how to handle errors from remote-procedure calls.
---

# Metrics and analytics

URL: https://developers.cloudflare.com/workers/observability/metrics-and-analytics/

import { GlossaryTooltip } from "~/components"

There are two graphical sources of information about your Workers traffic at a given time: Workers metrics and zone-based Workers analytics.

Workers metrics can help you diagnose issues and understand your Workers' workloads by showing performance and usage of your Workers. If your Worker runs on a route on a zone, or on a few zones, Workers metrics will show how much traffic your Worker is handling on a per-zone basis, and how many requests your site is getting.

Zone analytics show how much traffic all Workers assigned to a zone are handling.

## Workers metrics

Workers metrics aggregate request data for an individual Worker (if your Worker is running across multiple domains, and on `*.workers.dev`, metrics will aggregate requests across them). To view your Worker's metrics:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Compute (Workers)**.
3. In **Overview**, select your Worker to view its metrics.

There are two metrics that can help you understand the health of your Worker in a given moment: requests success and error metrics, and invocation statuses.

### Requests

The first graph shows historical request counts from the Workers runtime broken down into successful requests, errored requests, and subrequests.

* **Total**: All incoming requests registered by a Worker. Requests blocked by [WAF](https://www.cloudflare.com/waf/) or other security features will not count.
* **Success**: Requests that returned a Success or Client Disconnected invocation status.
* **Errors**: Requests that returned a Script Threw Exception, Exceeded Resources, or Internal Error invocation status. Refer to [Invocation Statuses](/workers/observability/metrics-and-analytics/#invocation-statuses) for a breakdown of where your errors are coming from.

Request traffic data may display a drop-off near the last few minutes displayed in the graph for time ranges less than six hours. This does not reflect a drop in traffic, but a slight delay in aggregation and metrics delivery.

### Subrequests

Subrequests are requests triggered by calling `fetch` from within a Worker. A subrequest that throws an uncaught error will not be counted.

* **Total**: All subrequests triggered by calling `fetch` from within a Worker.
* **Cached**: The number of cached responses returned.
* **Uncached**: The number of uncached responses returned.

### Wall time per execution

Wall time represents the elapsed time in milliseconds between the start of a Worker invocation and when the Workers runtime determines that no more JavaScript needs to run. Specifically, the Wall time per execution chart measures the wall time that the JavaScript context remained open, including time spent waiting on I/O and time spent executing in your Worker's [`waitUntil()`](/workers/runtime-apis/context/#waituntil) handler.

Wall time is not the same as the time it takes your Worker to send the final byte of a response back to the client. Wall time can be higher, if tasks within `waitUntil()` are still running after the response has been sent, or it can be lower. For example, when returning a response with a large body, the Workers runtime can, in some cases, determine that no more JavaScript needs to run, and close the JavaScript context before all the bytes have passed through and been sent.

The Wall time per execution chart shows historical wall time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling).
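Reservoir sampling maintains a fixed-size, uniformly random sample of a stream whose length is not known in advance, and quantiles can then be estimated from that sample. A minimal illustrative sketch of the idea (Algorithm R; this is not the Workers runtime's actual implementation):

```js
// Algorithm R: keep a uniform random sample of at most `k` values from a stream.
function makeReservoir(k) {
	const sample = [];
	let seen = 0;
	return {
		add(value) {
			seen++;
			if (sample.length < k) {
				sample.push(value);
			} else {
				// Keep the new value with probability k / seen, replacing a random slot.
				const j = Math.floor(Math.random() * seen);
				if (j < k) sample[j] = value;
			}
		},
		// Estimate a quantile (for example, 0.5 for P50) from the reservoir.
		quantile(q) {
			const sorted = [...sample].sort((a, b) => a - b);
			const idx = Math.min(sorted.length - 1, Math.floor(q * sorted.length));
			return sorted[idx];
		},
	};
}

// Example: sample 10,000 wall-time measurements with a reservoir of 100,
// then estimate P50 and P99 from the sample.
const reservoir = makeReservoir(100);
for (let i = 1; i <= 10000; i++) reservoir.add(i);
const p50 = reservoir.quantile(0.5);
const p99 = reservoir.quantile(0.99);
```

Because the reservoir is a fixed-size random sample, the estimated quantiles approximate, rather than exactly equal, the quantiles of the full stream.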
Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).

### CPU Time per execution

The CPU Time per execution chart shows historical CPU time data broken down into relevant quantiles using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling). Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).

In some cases, higher quantiles may appear to exceed [CPU time limits](/workers/platform/limits/#cpu-time) without generating invocation errors because of a mechanism in the Workers runtime that allows rollover CPU time for requests below the CPU limit.

### Execution duration (GB-seconds)

The Duration per request chart shows historical [duration](/workers/platform/limits/#duration) per Worker invocation. The data is broken down into relevant quantiles, similar to the CPU time chart. Learn more about [interpreting quantiles](https://www.statisticshowto.com/quantile-definition-find-easy-steps/).

Understanding duration on your Worker is especially useful when you are intending to do a significant amount of computation on the Worker itself.

### Invocation statuses

To review invocation statuses:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select **Workers & Pages**.
3. Select your Worker.
4. Find the **Summary** graph in **Metrics**.
5. Select **Errors**.

Worker invocation statuses indicate whether a Worker executed successfully or failed to generate a response in the Workers runtime. Invocation statuses differ from HTTP status codes. In some cases, a Worker invocation succeeds but does not generate a successful HTTP status because of another error encountered outside of the Workers runtime.
Some invocation statuses result in a [Workers error code](/workers/observability/errors/#error-pages-generated-by-workers) being returned to the client.

| Invocation status | Definition | Workers error code | GraphQL field |
| ----------------- | ---------- | ------------------ | ------------- |
| Success | Worker executed successfully | | `success` |
| Client disconnected | HTTP client (that is, the browser) disconnected before the request completed | | `clientDisconnected` |
| Worker threw exception | Worker threw an unhandled JavaScript exception | 1101 | `scriptThrewException` |
| Exceeded resources¹ | Worker exceeded runtime limits | 1102, 1027 | `exceededResources` |
| Internal error² | Workers runtime encountered an error | | `internalError` |

¹ The Exceeded Resources status may appear when the Worker exceeds a [runtime limit](/workers/platform/limits/#request-limits). The most common cause is excessive CPU time, but it can also be caused by a Worker exceeding startup time or free tier limits.

² The Internal Error status may appear when the Workers runtime fails to process a request due to an internal failure in our system. These errors are not caused by any issue with the Worker code nor any resource limit. While requests with Internal Error status are rare, some may appear during normal operation. These requests are not counted towards usage for billing purposes. If you notice an elevated rate of requests with Internal Error status, review [www.cloudflarestatus.com](https://www.cloudflarestatus.com/).

To further investigate exceptions, use [`wrangler tail`](/workers/wrangler/commands/#tail).

### Request duration

The request duration chart shows how long it took your Worker to respond to requests, including code execution and time spent waiting on I/O.
The request duration chart is currently only available when your Worker has [Smart Placement](/workers/configuration/smart-placement) enabled.

In contrast to [execution duration](/workers/observability/metrics-and-analytics/#execution-duration-gb-seconds), which measures only the time a Worker is active, request duration measures from the time a request comes into a data center until a response is delivered.

The data shows the duration for requests with Smart Placement enabled compared to those with Smart Placement disabled (by default, 1% of requests are routed with Smart Placement disabled). The chart shows a histogram with duration across the x-axis and the percentage of requests that fall into the corresponding duration on the y-axis.

### Metrics retention

Worker metrics can be inspected for up to three months in the past, in maximum increments of one week.

## Zone analytics

Zone analytics aggregate request data for all Workers assigned to any [routes](/workers/configuration/routing/routes/) defined for a zone. To review zone metrics:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
2. Select your site.
3. In **Analytics & Logs**, select **Workers**.

Zone data can be scoped by time range within the last 30 days. The dashboard includes charts and information described below.

### Subrequests

This chart shows subrequests (requests triggered by calling `fetch` from within a Worker) broken down by cache status.

* **Uncached**: Requests answered directly by your origin server or other servers responding to subrequests.
* **Cached**: Requests answered by Cloudflare's [cache](https://www.cloudflare.com/learning/cdn/what-is-caching/). As Cloudflare caches more of your content, it accelerates content delivery and reduces load on your origin.

### Bandwidth

This chart shows historical bandwidth usage for all Workers on a zone broken down by cache status.
### Status codes

This chart shows historical requests for all Workers on a zone broken down by HTTP status code.

### Total requests

This chart shows historical data for all Workers on a zone broken down by successful requests, failed requests, and subrequests. These request types are categorized by HTTP status code, where `200`-level requests are successful and `400`- to `500`-level requests are failed.

## GraphQL

Worker metrics are powered by GraphQL. Learn more about querying our data sets in the [Querying Workers Metrics with GraphQL tutorial](/analytics/graphql-api/tutorials/querying-workers-metrics/).

---

# Query Builder

URL: https://developers.cloudflare.com/workers/observability/query-builder/

import { TabItem, Tabs, Steps, Render, WranglerConfig, YouTube, Markdown } from "~/components"

The Query Builder helps you write structured queries to investigate and visualize your telemetry data. The Query Builder searches the Workers Observability dataset, which currently includes all logs stored by [Workers Logs](/workers/observability/logs/workers-logs/).

The Query Builder can be found in the [Workers' Observability tab in the Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/investigate/).

## Enable Query Builder

The Query Builder is available to all developers and requires no enablement. Queries search all Workers Logs stored by Cloudflare. If you have not yet enabled Workers Logs, you can do so by adding the following setting to your [Worker's Wrangler file](/workers/observability/logs/workers-logs/#enable-workers-logs) and redeploying your Worker.

```toml
[observability]
enabled = true

[observability.logs]
invocation_logs = true
head_sampling_rate = 1 # optional. default = 1.
```

## Write a query in the Cloudflare dashboard

1.
Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/investigate/) and select your account.
2. In Account Home, go to **Workers & Pages**.
3. Select **Observability** in the left-hand navigation panel, and then the **Investigate** tab.
4. Select a **Visualization**.
5. Optional: Add fields to Filter, Group By, Order By, and Limit. For more information, see what [composes a query](/workers/observability/query-builder/#query-composition).
6. Optional: Select the appropriate time range.
7. Select **Run**. The query will automatically run whenever changes are made.

## Query composition

### Visualization

The Query Builder supports many visualization operators, including:

| Function | Arguments | Description |
| --- | --- | --- |
| **Count** | n/a | The total number of rows matching the query conditions |
| **Count Distinct** | any field | The number of occurrences of the unique values in the dataset |
| **Min** | numeric field | The smallest value for the field in the dataset |
| **Max** | numeric field | The largest value for the field in the dataset |
| **Sum** | numeric field | The total of all of the values for the field in the dataset |
| **Average** | numeric field | The average of the field in the dataset |
| **Standard Deviation** | numeric field | The standard deviation of the field in the dataset |
| **Variance** | numeric field | The variance of the field in the dataset |
| **P001** | numeric field | The value of the field below which 0.1% of the data falls |
| **P01** | numeric field | The value of the field below which 1% of the data falls |
| **P05** | numeric field | The value of the field below which 5% of the data falls |
| **P10** | numeric field | The value of the field below which 10% of the data falls |
| **P25** | numeric field | The value of the field below which 25% of the data falls |
| **Median (P50)** | numeric field | The value of the field below which 50% of the data falls |
| **P75** | numeric field | The value of the field below which 75% of the data falls |
| **P90** | numeric field | The value of the field below which 90% of the data falls |
| **P95** | numeric field | The value of the field below which 95% of the data falls |
| **P99** | numeric field | The value of the field below which 99% of the data falls |
| **P999** | numeric field | The value of the field below which 99.9% of the data falls |

You can add multiple visualizations in a single query. Each visualization renders a graph. A single summary table is also returned, which shows the raw query results.

![Example of the Query Builder with multiple visualizations](~/assets/images/workers-observability/query-builder-visualization.png)

All methods are aggregate functions. Most methods operate on a specific field in the log event. `Count` is an exception, and is an aggregate function that returns the number of log events matching the filter conditions.

### Filter

Filters return only the rows that match the specified conditions. Filters have three components: a key, an operator, and a value.

The key is any field in a log event. For example, you may choose `$workers.cpuTimeMs` or `$metadata.message`.

The operator is a logical condition that evaluates to true or false. See the table below for supported conditions:

| Data Type | Valid Conditions (Operators) |
| --- | --- |
| Numeric | Equals, Does not equal, Greater, Greater or equals, Less, Less or equals, Exists, Does not exist |
| String | Equals, Does not equal, Includes, Does not include, Regex, Exists, Does not exist, Starts with |

The value for a numeric field is an integer. The value for a string field is any string.

To add a filter:

1. Select **+** in the **Filter** section.
2. Select **Select key...** and input a key name. For example, `$workers.cpuTimeMs`.
3. Select the operator and change it to the operator best suited. For example, `Greater than`.
4.
Select **Select value...** and input a value. For example, `100`.

When you run the query with the filter specified above, only log events where `$workers.cpuTimeMs > 100` will be returned. Adding multiple filters combines them with an AND operator, meaning that only events matching all the filters will be returned.

### Search

Search is a text filter that returns only events containing the specified text. Search can be helpful as a quick filtering mechanism, or to search for unique identifiable values in your logs.

### Group By

Group By combines rows that have the same value into summary rows. For example, if a query adds `$workers.event.request.cf.country` as a Group By field, then the summary table will group by country.

### Order By

Order By affects how the results are sorted in the summary table. If `asc` is selected, the results are sorted in ascending order, from least to greatest. If `desc` is selected, the results are sorted in descending order, from greatest to least.

### Limit

Limit restricts the number of results returned. When paired with [Order By](/workers/observability/query-builder/#order-by), it can be used to return the "top" or "first" N results.

### Select time range

When you select a time range, you specify the time interval where you want to look for matching events. The retention period is dependent on your [plan type](/workers/observability/logs/workers-logs/#pricing).

## Viewing query results

There are three views for queries: Visualizations, Invocations, and Events.

### Visualizations tab

The **Visualizations** tab shows graphs and a summary table for the query.

![Visualization Overview](~/assets/images/workers-observability/query-builder-visualization.png)

### Invocations tab

The **Invocations** tab shows all logs, grouped by invocation and ordered by timestamp. Only invocations matching the query criteria are returned.
![Invocations Overview](~/assets/images/workers-observability/query-builder-invocations-overview.png)

### Events tab

The **Events** tab shows all logs, ordered by timestamp. Only events matching the query criteria are returned. The Events tab can be customized to add additional fields in the view.

![Overview](~/assets/images/workers-observability/query-builder-events-overview.png)

## Save queries

It is recommended to save queries that may be reused for future investigations. You can save a query with a name, description, and custom tags by selecting **Save Query**. Queries are saved at the account level and are accessible to all users in the account.

Saved queries can be re-run by selecting the relevant query from the **Queries** tab. You can edit the query and save edits. Queries can be starred by users. Starred queries are unique to the user, not to the account.

## Delete queries

Saved queries can be deleted from the **Queries** tab. If you delete a query, the query is deleted for all users in the account.

1. Select the [Queries](https://dash.cloudflare.com/?to=/:account/workers-and-pages/observability/queries) tab in the Observability dashboard.
2. On the right-hand side, select the three dots for additional actions.
3. Select **Delete Query** and follow the instructions.

## Share queries

Saved queries are assigned a unique URL and can be shared with any user in the account.

## Example: Composing a query

In this example, we will construct a query to find and debug all paths that respond with 5xx errors.

First, we create a base query. In this base query, we want to visualize the raw event count. We can add a filter for `$workers.event.response.status` that is greater than 500. Then, we group by `$workers.event.request.path` and `$workers.event.response.status` to identify the number of requests that were affected by this behavior.
![Constructing a query](~/assets/images/workers-observability/query-builder-ex1-query.png)

The results show that the `/actuator/env` path has been experiencing 500s. Now, we can apply a filter for this path and investigate.

![Adding an additional field to the query](~/assets/images/workers-observability/query-builder-ex1-query-with-filter.png)

Now, we can investigate by selecting the **Invocations** tab. We can see that there were two logged invocations of this error.

![Examining the Invocations tab in the Query Builder](~/assets/images/workers-observability/query-builder-ex1-invocations.png)

We can expand a single invocation to view the relevant logs, and continue to debug.

![Viewing the logs for a single Invocation](~/assets/images/workers-observability/query-builder-ex1-invocation-logs.png)

---

# Source maps and stack traces

URL: https://developers.cloudflare.com/workers/observability/source-maps/

import { Render, WranglerConfig } from "~/components";
import { FileTree } from "@astrojs/starlight/components";

## Source Maps

To enable source maps, add the following to your Worker's [Wrangler configuration](/workers/wrangler/configuration/):

```toml
upload_source_maps = true
```

When `upload_source_maps` is set to `true`, Wrangler will automatically generate and upload source map files when you run [`wrangler deploy`](/workers/wrangler/commands/#deploy) or [`wrangler versions deploy`](/workers/wrangler/commands/#deploy-2).

:::note
Miniflare can also [output source maps](https://miniflare.dev/developing/source-maps) for use in local development or [testing](/workers/testing/miniflare/writing-tests).
:::

## Stack traces

When your Worker throws an uncaught exception, we fetch the source map and use it to map the stack trace of the exception back to lines of your Worker's original source code.
You can then view the stack trace when streaming [real-time logs](/workers/observability/logs/real-time-logs/) or in [Tail Workers](/workers/observability/logs/tail-workers/).

:::note
The source map is retrieved after your Worker invocation completes; it's an asynchronous process that does not impact your Worker's CPU utilization or performance. Source maps are not accessible inside the Worker at runtime; if you `console.log()` the [stack property](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Error/stack) within a Worker, you will not get a deobfuscated stack trace.
:::

When Cloudflare attempts to remap a stack trace to the Worker's source map, it does so line-by-line, remapping as much as possible. If a line of the stack trace cannot be remapped for any reason, Cloudflare will leave that line of the stack trace unchanged and continue to the next line.

## Limits

:::note[Wrangler version]
Minimum required Wrangler version for source maps: 3.46.0. Check your version by running `wrangler --version`.
:::

| Description | Limit |
| ------------------------------ | ------------- |
| Maximum Source Map Size | 15 MB gzipped |

## Example

Consider a simple project. `src/index.ts` serves as the entrypoint of the application and `src/calculator.ts` defines a `ComplexCalculator` class that supports basic arithmetic.

- wrangler.jsonc
- tsconfig.json
- src
  - calculator.ts
  - index.ts

Let's see how source maps can simplify debugging an error in the `ComplexCalculator` class.

![Stack Trace without Source Map remapping](~/assets/images/workers-observability/without-source-map.png)

With **no source maps uploaded**: notice how all the JavaScript has been minified to one file, so the stack trace is missing information on file name, shows incorrect line numbers, and incorrectly references `js` instead of `ts`.
![Stack Trace with Source Map remapping](~/assets/images/workers-observability/with-source-map.png)

With **source maps uploaded**: all methods reference the correct files and line numbers.

## Related resources

* [Tail Workers](/workers/observability/logs/logpush/) - Learn how to attach Tail Workers to transform your logs and send them to HTTP endpoints.
* [Real-time logs](/workers/observability/logs/real-time-logs/) - Learn how to capture Workers logs in real-time.
* [RPC error handling](/workers/runtime-apis/rpc/error-handling/) - Learn how exceptions are handled over RPC (Remote Procedure Call).

---

# Betas

URL: https://developers.cloudflare.com/workers/platform/betas/

These are the current alphas and betas relevant to the Cloudflare Workers platform.

* **Public alphas and betas are openly available**, but may have limitations and caveats due to their early stage of development.
* Private alphas and betas require explicit access to be granted. Refer to the documentation to join the relevant product waitlist.
| Product | Private Beta | Public Beta | More Info |
| ------- | ------------ | ----------- | --------- |
| Email Workers | | ✅ | [Docs](/email-routing/email-workers/) |
| Green Compute | | ✅ | [Blog](https://blog.cloudflare.com/earth-day-2022-green-compute-open-beta/) |
| Pub/Sub | ✅ | | [Docs](/pub-sub) |
| [TCP Sockets](/workers/runtime-apis/tcp-sockets/) | | ✅ | [Docs](/workers/runtime-apis/tcp-sockets) |

---

# Deploy to Cloudflare buttons

URL: https://developers.cloudflare.com/workers/platform/deploy-buttons/

import { Tabs, TabItem } from "@astrojs/starlight/components";

If you're building a Workers application and would like to share it with other developers, you can embed a Deploy to Cloudflare button in your README, blog post, or documentation to enable others to quickly deploy your application on their own Cloudflare account. Deploy to Cloudflare buttons eliminate the need for complex setup, allowing developers to get started with your public GitHub or GitLab repository in just a few clicks.

[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/staging/saas-admin-template)

## What are Deploy to Cloudflare buttons?

Deploy to Cloudflare buttons simplify the deployment of a Workers application by enabling Cloudflare to:

* **Clone a Git repository**: Cloudflare clones your source repository into the user's GitHub/GitLab account, where they can continue development after deploying.
* **Configure a project**: Your users can customize key details such as repository name, Worker name, and required resource names in a single setup page, with customizations reflected in the newly created Git repository.
* **Build & deploy**: Cloudflare builds the application using [Workers Builds](/workers/ci-cd/builds) and deploys it to the Cloudflare network. Any required resources are automatically provisioned and bound to the Worker without additional setup.

![Deploy to Cloudflare Flow](~/assets/images/workers/dtw-user-flow.png)

## How to Set Up Deploy to Cloudflare buttons

Deploy to Cloudflare buttons can be embedded anywhere developers might want to launch your project. To add a Deploy to Cloudflare button, copy the following snippet and replace the Git repository URL with your project's URL. You can also optionally specify a subdirectory.

```md
[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=)
```

```html
<a href="https://deploy.workers.cloudflare.com/?url="><img src="https://deploy.workers.cloudflare.com/button" alt="Deploy to Cloudflare"/></a>
```

```
https://deploy.workers.cloudflare.com/?url=
```

If you have already deployed your application using Workers Builds, you can generate a Deploy to Cloudflare button directly from the Cloudflare dashboard by selecting the share button (located within your Worker details) and copying the provided snippet.

![Share an application](~/assets/images/workers/dtw-share-project.png)

Once you have your snippet, you can paste it wherever you would like your button to be displayed.

## Automatic Resource provisioning

If your Worker application requires Cloudflare resources, they will be automatically provisioned as part of the deployment.
Currently, supported resources include: * **Storage**: [KV namespaces](/kv/), [D1 databases](/d1/), [R2 buckets](/r2/), [Hyperdrive](/hyperdrive/), and [Vectorize databases](/vectorize/) * **Compute**: [Durable Objects](/durable-objects/), [Workers AI](/workers-ai/), and [Queues](/queues/) Cloudflare will read the Wrangler configuration file of your source repo to determine resource requirements for your application. During deployment, Cloudflare will provision any necessary resources and update the Wrangler configuration where applicable for newly created resources (e.g. database IDs and namespace IDs). To ensure successful deployment, please make sure your source repository includes default values for resource names, resource IDs and any other properties for each binding. ## Best practices **Configuring Build/Deploy commands**: If you are using custom `build` and `deploy` scripts in your package.json (for example, if using a full stack framework or running D1 migrations), Cloudflare will automatically detect and pre-populate the build and deploy fields. Users can choose to modify or accept the custom commands during deployment configuration. If no `deploy` script is specified, Cloudflare will preconfigure `npx wrangler deploy` by default. If no `build` script is specified, Cloudflare will leave this field blank. **Running D1 Migrations**: If you would like to run migrations as part of your setup, you can specify this in your `package.json` by running your migrations as part of your `deploy` script. The migration command should reference the binding name rather than the database name to ensure migrations are successful when users specify a database name that is different from that of your source repository. 
The following is an example of how you can set up the scripts section of your `package.json`: ```json { "scripts": { "build": "astro build", "deploy": "npm run db:migrations:apply && wrangler deploy", "db:migrations:apply": "wrangler d1 migrations apply DB_BINDING --remote" } } ``` ## Limitations * **Monorepos**: Cloudflare does not fully support monorepos * If your repository URL contains a subdirectory, your application must be fully isolated within that subdirectory, including any dependencies. Otherwise, the build will fail. Cloudflare treats this subdirectory as the root of the new repository created as part of the deploy process. * Additionally, if you have a monorepo that contains multiple Workers applications, they will not be deployed together. You must configure a separate Deploy to Cloudflare button for each application. The user will manually create a distinct Workers application for each subdirectory. * **Pages applications**: Deploy to Cloudflare buttons only support Workers applications. * **Non-GitHub/GitLab repositories**: Source repositories from anything other than github.com and gitlab.com are not supported. Self-hosted versions of GitHub and GitLab are also not supported. * **Private repositories**: Repositories must be public in order for others to successfully use your Deploy to Cloudflare button. --- # Platform URL: https://developers.cloudflare.com/workers/platform/ import { DirectoryListing } from "~/components"; Pricing, limits and other information about the Workers platform. --- # Known issues URL: https://developers.cloudflare.com/workers/platform/known-issues/ Below are some known bugs and issues to be aware of when using Cloudflare Workers. ## Route specificity * When defining route specificity, a trailing `/*` in your pattern may not act as expected. Consider two different Workers, each deployed to the same zone. 
Worker A is assigned the `example.com/images/*` route and Worker B is given the `example.com/images*` route pattern. With these in place, here is how the following URLs will be resolved: ``` // (A) example.com/images/* // (B) example.com/images* "example.com/images" // -> B "example.com/images123" // -> B "example.com/images/hello" // -> B ``` You will notice that all examples trigger Worker B. This includes the final example, which demonstrates the unexpected behavior. When adding a wildcard on a subdomain, here is how the following URLs will be resolved: ``` // (A) *.example.com/a // (B) a.example.com/* "a.example.com/a" // -> B ``` ## wrangler dev * When running `wrangler dev --remote`, all outgoing requests are given the `cf-workers-preview-token` header, which Cloudflare recognizes as a preview request. This applies to the entire Cloudflare network, so HTTP requests made to other Cloudflare zones are currently discarded for security reasons. To work around this, insert the following code into your Worker script: ```js const request = new Request(url, incomingRequest); request.headers.delete('cf-workers-preview-token'); return await fetch(request); ``` ## Fetch API in CNAME setup When you make a subrequest using [`fetch()`](/workers/runtime-apis/fetch/) from a Worker, the Cloudflare DNS resolver is used. When a zone has a [Partial (CNAME) setup](/dns/zone-setups/partial-setup/), all hostnames that the Worker needs to be able to resolve require a dedicated DNS entry in Cloudflare's DNS setup. Otherwise the Fetch API call will fail with status code [530 (1016)](/support/troubleshooting/http-status-codes/cloudflare-1xxx-errors/#error-1016-origin-dns-error). Setup with missing DNS records in Cloudflare DNS ``` // Zone in partial setup: example.com // DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ... 
// DNS records at Cloudflare DNS: sub1.example.com "sub1.example.com/" // -> Can be resolved by Fetch API "sub2.example.com/" // -> Cannot be resolved by Fetch API, will lead to 530 status code ``` After adding `sub2.example.com` to Cloudflare DNS ``` // Zone in partial setup: example.com // DNS records at Authoritative DNS: sub1.example.com, sub2.example.com, ... // DNS records at Cloudflare DNS: sub1.example.com, sub2.example.com "sub1.example.com/" // -> Can be resolved by Fetch API "sub2.example.com/" // -> Can be resolved by Fetch API ``` ## Fetch to IP addresses For Workers subrequests, requests can only be made to URLs, not to IP addresses directly. To overcome this limitation, [add an A or AAAA record to your zone](https://developers.cloudflare.com/dns/manage-dns-records/how-to/create-dns-records/) and then fetch that resource. For example, in the zone `example.com` create a record of type `A` with the name `server` and value `192.0.2.1`, and then use: ```js await fetch('http://server.example.com') ``` Do not use: ```js await fetch('http://192.0.2.1') ``` --- # Limits URL: https://developers.cloudflare.com/workers/platform/limits/ import { Render, WranglerConfig } from "~/components"; ## Account plan limits | Feature | Workers Free | Workers Paid | | -------------------------------------------------------------------------------- | ------------ | ------------ | | [Subrequests](#subrequests) | 50/request | 1000/request | | [Simultaneous outgoing
connections/request](#simultaneous-open-connections) | 6 | 6 | | [Environment variables](#environment-variables) | 64/Worker | 128/Worker | | [Environment variable
size](#environment-variables) | 5 KB | 5 KB | | [Worker size](#worker-size) | 3 MB | 10 MB | | [Worker startup time](#worker-startup-time) | 400 ms | 400 ms | | [Number of Workers](#number-of-workers)1 | 100 | 500 | | Number of [Cron Triggers](/workers/configuration/cron-triggers/)
per account | 5 | 250 | | Number of [Static Asset](#static-assets) files | 20,000 | 20,000 | | Individual [Static Asset](#static-assets) file size | 25 MiB | 25 MiB | 1 If you are running into limits, your project may be a good fit for [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/). --- ## Request limits URLs have a limit of 16 KB. Request headers observe a total limit of 32 KB, but each header is limited to 16 KB. Cloudflare has network-wide limits on the request body size. This limit is tied to your Cloudflare account's plan, which is separate from your Workers plan. When the request body size of your `POST`/`PUT`/`PATCH` requests exceeds your plan's limit, the request is rejected with a `(413) Request entity too large` error. Cloudflare Enterprise customers may contact their account team or [Cloudflare Support](/support/contacting-cloudflare-support/) to have a request body limit beyond 500 MB. | Cloudflare Plan | Maximum body size | | --------------- | ------------------- | | Free | 100 MB | | Pro | 100 MB | | Business | 200 MB | | Enterprise | 500 MB (by default) | --- ## Response limits Response headers observe a total limit of 32 KB, but each header is limited to 16 KB. Cloudflare does not enforce response limits on response body sizes, but cache limits for [our CDN are observed](/cache/concepts/default-cache-behavior/). Maximum file size is 512 MB for Free, Pro, and Business customers and 5 GB for Enterprise customers. --- ## Worker limits | Feature | Workers Free | Workers Paid | | ------------------------ | ------------------------------------------ | ------------------------ | | [Request](#request) | 100,000 requests/day
1000 requests/min | No limit | | [Worker memory](#memory) | 128 MB | 128 MB | | [CPU time](#cpu-time) | 10 ms | 5 min HTTP request
15 min [Cron Trigger](/workers/configuration/cron-triggers/) | | [Duration](#duration) | No limit | No limit for Workers.
15 min duration limit for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object Alarms](/durable-objects/api/alarms/) and [Queue Consumers](/queues/configuration/javascript-apis/#consumer) | ### Duration Duration is a measurement of wall-clock time — the total amount of time from the start to end of an invocation of a Worker. There is no hard limit on the duration of a Worker. As long as the client that sent the request remains connected, the Worker can continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client request are canceled. Use [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/) to delay cancellation for another 30 seconds or until the promise passed to `waitUntil()` completes. :::note Cloudflare updates the Workers runtime a few times per week. When this happens, any in-flight requests are given a grace period of 30 seconds to finish. If a request does not finish within this time, it is terminated. While your application should follow the best practice of handling disconnects by retrying requests, this scenario is extremely improbable. To encounter it, you would need to have a request that takes longer than 30 seconds that also happens to intersect with the exact time an update to the runtime is happening. ::: ### CPU time CPU time is the amount of time the CPU actually spends doing work during a given request. If a Worker's request makes a sub-request and waits for that request to come back before doing additional work, this time spent waiting **is not** counted towards CPU time. **Most Workers requests consume less than 1-2 milliseconds of CPU time**, but you can increase the maximum CPU time from the default 30 seconds to 5 minutes (300,000 milliseconds) if you have CPU-bound tasks, such as large JSON payloads that need to be serialized, cryptographic key generation, or other data processing tasks. 
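The CPU-versus-wall-time distinction can be illustrated outside of Workers with plain Node.js (a sketch only; `process.cpuUsage()` stands in for the runtime's own accounting, and timings are approximate):

```javascript
// Contrast wall-clock time with CPU time: awaiting I/O (here, a timer
// standing in for a subrequest) consumes wall time but almost no CPU time.
async function measure() {
  const wallStart = process.hrtime.bigint();
  const cpuStart = process.cpuUsage();

  // Simulates waiting on a subrequest: the clock advances, the CPU idles.
  await new Promise((resolve) => setTimeout(resolve, 200));

  const wallMs = Number(process.hrtime.bigint() - wallStart) / 1e6;
  const cpuMs = process.cpuUsage(cpuStart).user / 1000;
  return { wallMs, cpuMs };
}

measure().then(({ wallMs, cpuMs }) => {
  // Expect wall time near 200 ms and CPU time far below it.
  console.log(`wall: ${wallMs.toFixed(0)} ms, cpu: ${cpuMs.toFixed(1)} ms`);
});
```

A Worker that spends most of an invocation awaiting subrequests behaves the same way: a long duration, minimal CPU time, and therefore plenty of headroom under the CPU limit.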
To understand your CPU usage: - CPU time and Wall time are surfaced in the [invocation log](/workers/observability/logs/workers-logs/#invocation-logs) within Workers Logs. - For Tail Workers, CPU time and Wall time are surfaced at the top level of the [Workers Trace Events object](/logs/reference/log-fields/account/workers_trace_events/). - Using DevTools locally can help identify CPU-intensive portions of your code. See the [CPU profiling with DevTools documentation](/workers/observability/dev-tools/cpu-usage/). You can also set a [custom limit](/workers/wrangler/configuration/#limits) on the amount of CPU time that can be used during each invocation of your Worker. ```jsonc { // ...rest of your configuration... "limits": { "cpu_ms": 300000, // default is 30000 (30 seconds) }, // ...rest of your configuration... } ``` You can also customize this in the [Workers dashboard](https://dash.cloudflare.com/?to=/:account/workers). Select the specific Worker you wish to modify -> click on the "Settings" tab -> adjust the CPU time limit. :::note Scheduled Workers ([Cron Triggers](/workers/configuration/cron-triggers/)) have different limits on CPU time based on the schedule interval. When the schedule interval is less than 1 hour, a Scheduled Worker may run for up to 30 seconds. When the schedule interval is more than 1 hour, a scheduled Worker may run for up to 15 minutes. ::: --- ## Cache API limits | Feature | Workers Free | Workers Paid | | ---------------------------------------- | ------------ | ------------ | | [Maximum object size](#cache-api-limits) | 512 MB | 512 MB | | [Calls/request](#cache-api-limits) | 50 | 1,000 | Calls/request means the number of calls to the `put()`, `match()`, or `delete()` Cache API methods per request, using the same quota as subrequests (`fetch()`). :::note The size of chunked response bodies (`Transfer-Encoding: chunked`) is not known in advance. 
As a result, `.put()`ing such responses will block subsequent `.put()`s from starting until the current `.put()` completes. ::: --- ## Request Workers automatically scale onto thousands of Cloudflare global network servers around the world. There is no general limit to the number of requests per second Workers can handle. Cloudflare’s abuse protection methods do not affect well-intentioned traffic. However, if you send many thousands of requests per second from a small number of client IP addresses, you can inadvertently trigger Cloudflare’s abuse protection. If you expect to receive `1015` errors in response to traffic or expect your application to incur these errors, [contact Cloudflare support](/support/contacting-cloudflare-support/) to increase your limit. Cloudflare's anti-abuse Workers Rate Limiting does not apply to Enterprise customers. You can also confirm if you have been rate limited by anti-abuse Worker Rate Limiting by logging into the Cloudflare dashboard, selecting your account and zone, and going to **Security** > **Events**. Find the event and expand it. If the **Rule ID** is `worker`, this confirms that it is the anti-abuse Worker Rate Limiting. The burst rate and daily request limits apply at the account level, meaning that requests on your `*.workers.dev` subdomain count toward the same limit as your zones. Upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to automatically lift these limits. :::caution If you are currently being rate limited, upgrade to a [Workers Paid plan](https://dash.cloudflare.com/?account=workers/plans) to lift burst rate and daily request limits. ::: ### Burst rate Accounts using the Workers Free plan are subject to a burst rate limit of 1,000 requests per minute. Users visiting a rate-limited site will receive a Cloudflare `1015` error page. 
However, if you are calling your Worker programmatically, you can detect the rate limit page and handle it yourself by looking for HTTP status code `429`. Workers being rate-limited by Anti-Abuse Protection are also visible from the Cloudflare dashboard: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account and your website. 2. Select **Security** > **Events** > scroll to **Sampled logs**. 3. Review the log for a Web Application Firewall block event with a `ruleID` of `worker`. ### Daily request Accounts using the Workers Free plan are subject to a daily request limit of 100,000 requests. Free plan daily request counts reset at midnight UTC. A Worker that fails as a result of daily request limit errors can be configured by toggling its corresponding [route](/workers/configuration/routing/routes/) between two modes: 1) Fail open and 2) Fail closed. #### Fail open Routes in fail open mode will bypass the failing Worker and prevent it from operating on incoming traffic. Incoming requests will behave as if there was no Worker. #### Fail closed Routes in fail closed mode will display a Cloudflare `1027` error page to visitors, signifying the Worker has been temporarily disabled. Cloudflare recommends this option if your Worker is performing security-related tasks. --- ## Memory Only one Workers instance runs on each of the many servers in Cloudflare's global network. Each Workers instance can consume up to 128 MB of memory. Use [global variables](/workers/runtime-apis/web-standards/) to persist data between requests on individual nodes. Note, however, that nodes are occasionally evicted from memory. If a Worker processes a request that pushes the Worker over the 128 MB limit, the Cloudflare Workers runtime may cancel one or more requests. To view these errors, as well as CPU limit overages: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. 
Select **Workers & Pages** and in **Overview**, select the Worker you would like to investigate. 3. Under **Metrics**, select **Errors** > **Invocation Statuses** and examine **Exceeded Memory**. Use the [TransformStream API](/workers/runtime-apis/streams/transformstream/) to stream responses if you are concerned about memory usage. This avoids loading an entire response into memory. Using DevTools locally can help identify memory leaks in your code. See the [memory profiling with DevTools documentation](/workers/observability/dev-tools/memory-usage/) to learn more. --- ## Subrequests A subrequest is any request that a Worker makes, either to Internet resources using the [Fetch API](/workers/runtime-apis/fetch/) or to other Cloudflare services like [R2](/r2/), [KV](/kv/), or [D1](/d1/). ### Worker-to-Worker subrequests To make subrequests from your Worker to another Worker on your account, use [Service Bindings](/workers/runtime-apis/bindings/service-bindings/). Service bindings allow you to send HTTP requests to another Worker without those requests going over the Internet. If you attempt to use global [`fetch()`](/workers/runtime-apis/fetch/) to make a subrequest to another Worker on your account that runs on the same [zone](/fundamentals/setup/accounts-and-zones/#zones), without service bindings, the request will fail. If you make a subrequest from your Worker to a target Worker that runs on a [Custom Domain](/workers/configuration/routing/custom-domains/#worker-to-worker-communication) rather than a route, the request will be allowed. ### How many subrequests can I make? You can make 50 subrequests per request on Workers Free, and 1,000 subrequests per request on Workers Paid. Each subrequest in a redirect chain counts against this limit. This means that the number of subrequests a Worker makes could be greater than the number of `fetch(request)` calls in the Worker. 
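The Service Binding approach described above requires declaring the target Worker in the calling Worker's Wrangler configuration. A minimal sketch (the binding name `AUTH` and service name `auth-worker` are placeholder values):

```jsonc
{
  // Declares a Service Binding: env.AUTH in this Worker dispatches requests
  // directly to the "auth-worker" Worker without leaving Cloudflare's network.
  "services": [
    { "binding": "AUTH", "service": "auth-worker" }
  ]
}
```

The calling Worker can then invoke the bound Worker with `env.AUTH.fetch(request)` instead of issuing a global `fetch()` to the other Worker's URL.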
For subrequests to internal services like Workers KV and Durable Objects, the subrequest limit is 1,000 per request, regardless of the [usage model](/workers/platform/pricing/#workers) configured for the Worker. ### How long can a subrequest take? There is no set limit on the amount of real time a Worker may use. As long as the client which sent a request remains connected, the Worker may continue processing, making subrequests, and setting timeouts on behalf of that request. When the client disconnects, all tasks associated with that client’s request are proactively canceled. If the Worker passed a promise to [`event.waitUntil()`](/workers/runtime-apis/handlers/fetch/), cancellation will be delayed until the promise has completed or until an additional 30 seconds have elapsed, whichever happens first. --- ## Simultaneous open connections You can open up to six connections simultaneously for each invocation of your Worker. The connections opened by the following API calls all count toward this limit: - the `fetch()` method of the [Fetch API](/workers/runtime-apis/fetch/). - `get()`, `put()`, `list()`, and `delete()` methods of [Workers KV namespace objects](/kv/api/). - `put()`, `match()`, and `delete()` methods of [Cache objects](/workers/runtime-apis/cache/). - `list()`, `get()`, `put()`, `delete()`, and `head()` methods of [R2](/r2/). - `send()` and `sendBatch()` methods of [Queues](/queues/). - Opening a TCP socket using the [`connect()`](/workers/runtime-apis/tcp-sockets/) API. Once an invocation has six connections open, it can still attempt to open additional connections. - These attempts are put in a pending queue — the connections will not be initiated until one of the currently open connections has closed. - Earlier connections can delay later ones: if a Worker tries to make many simultaneous subrequests, its later subrequests may appear to take longer to start. 
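If later subrequests are queueing behind earlier ones, one option is to cap your own concurrency so connections are released predictably. The helper below is an illustrative sketch in plain JavaScript (not a Workers API; the limit of six mirrors the platform cap):

```javascript
// Runs `fn` over `items` with at most `limit` tasks in flight at once.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  // Each lane repeatedly claims the next unprocessed index until none remain.
  const lanes = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  });
  await Promise.all(lanes);
  return results;
}
```

For example, `await mapWithConcurrency(urls, 6, (url) => fetch(url))` keeps at most six fetches open at a time.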
If you have cases in your application that use `fetch()` but that do not require consuming the response body, you can prevent the unread response body from consuming a concurrent connection by using `response.body.cancel()`. For example, if you want to check whether the HTTP response code is successful (2xx) before consuming the body, you should explicitly cancel the pending response body: ```ts let resp = await fetch(url); // Only read the response body for successful (2xx) responses if (resp.ok) { // Call resp.json(), resp.text() or otherwise process the body } else { // Explicitly cancel it resp.body.cancel(); } ``` This will free up an open connection. If the system detects that a Worker is deadlocked on open connections — for example, if the Worker has pending connection attempts but has no in-progress reads or writes on the connections that it already has open — then the least-recently-used open connection will be canceled to unblock the Worker. If the Worker later attempts to use a canceled connection, an exception will be thrown. These exceptions should rarely occur in practice, though, since it is uncommon for a Worker to open a connection that it does not have an immediate use for. :::note Simultaneous Open Connections are measured from the top-level request, meaning any connections open from Workers sharing resources (for example, Workers triggered via [Service bindings](/workers/runtime-apis/bindings/service-bindings/)) will share the simultaneous open connection limit. ::: --- ## Environment variables The maximum number of environment variables (secret and text combined) for a Worker is 128 variables on the Workers Paid plan, and 64 variables on the Workers Free plan. There is no limit to the number of environment variables per account. Each environment variable has a size limitation of 5 KB. --- ## Worker size A Worker can be up to 10 MB in size _after compression_ on the Workers Paid plan, and up to 3 MB on the Workers Free plan. 
You can assess the size of your Worker bundle after compression by performing a dry-run with `wrangler` and reviewing the final compressed (`gzip`) size output by `wrangler`: ```sh wrangler deploy --outdir bundled/ --dry-run ``` ```sh output # Output will resemble the below: Total Upload: 259.61 KiB / gzip: 47.23 KiB ``` Note that larger Worker bundles can impact the start-up time of the Worker, as the Worker needs to be loaded into memory. You should consider removing unnecessary dependencies and/or using [Workers KV](/kv/), a [D1 database](/d1/) or [R2](/r2/) to store configuration files, static assets and binary data instead of attempting to bundle them within your Worker code. --- ## Worker startup time A Worker's code must be parsed and its global scope (top-level code outside of any handlers) executed within 400 ms. Worker size can impact startup because there is more code to parse and evaluate. Avoiding expensive code in the global scope can keep startup efficient as well. You can measure your Worker's startup time by deploying it to Cloudflare using [Wrangler](/workers/wrangler/). When you run `npx wrangler@latest deploy` or `npx wrangler@latest versions upload`, Wrangler will output the startup time of your Worker in the command-line output, using the `startup_time_ms` field in the [Workers Script API](/api/resources/workers/subresources/scripts/methods/update/) or [Workers Versions API](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/). If you are having trouble staying under this limit, consider [profiling using DevTools](/workers/observability/dev-tools/) locally to learn how to optimize your code. When you attempt to deploy a Worker using the [Wrangler CLI](/workers/wrangler/), but your deployment is rejected because your Worker exceeds the maximum startup time, Wrangler will automatically generate a CPU profile that you can import into Chrome DevTools or open directly in VS Code. 
You can use this to learn what code in your Worker uses large amounts of CPU time at startup. Refer to [`wrangler check startup`](/workers/wrangler/commands/#startup) for more details. --- ## Number of Workers You can have up to 500 Workers on your account on the Workers Paid plan, and up to 100 Workers on the Workers Free plan. If you need more than 500 Workers, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/). --- ## Routes and domains ### Number of routes per zone Each zone has a limit of 1,000 [routes](/workers/configuration/routing/routes/). If you require more than 1,000 routes on your zone, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. ### Number of routes per zone when using `wrangler dev --remote` When you run a [remote development](/workers/local-development/#develop-using-remote-resources-and-bindings) session using the `--remote` flag, a limit of 50 [routes](/workers/configuration/routing/routes/) per zone is enforced. The Quick Editor in the Cloudflare Dashboard also uses `wrangler dev --remote`, so any changes made there are subject to the same 50-route limit. If your zone has more than 50 routes, you **will not be able to run a remote session**. To fix this, you must remove routes until you are under the 50-route limit. ### Number of custom domains per zone Each zone has a limit of 100 [custom domains](/workers/configuration/routing/custom-domains/). If you require more than 100 custom domains on your zone, consider using a wildcard [route](/workers/configuration/routing/routes/) or request an increase to this limit. ### Number of routed zones per Worker When configuring [routing](/workers/configuration/routing/), the maximum number of zones that can be referenced by a Worker is 1,000. 
If you require more than 1,000 zones on your Worker, consider using [Workers for Platforms](/cloudflare-for-platforms/workers-for-platforms/) or request an increase to this limit. --- ## Image Resizing with Workers When using Image Resizing with Workers, refer to the [Image Resizing documentation](/images/transform-images/) for more information on the applied limits. --- ## Log size You can emit a maximum of 256 KB of data (across `console.log()` statements, exceptions, request metadata and headers) to the console for a single request. After you exceed this limit, further context associated with the request will not be recorded in logs, will not appear when tailing logs of your Worker, and will not be forwarded to a [Tail Worker](/workers/observability/logs/tail-workers/). Refer to the [Workers Trace Event Logpush documentation](/workers/observability/logs/logpush/#limits) for information on the maximum size of fields sent to logpush destinations. --- ## Unbound and Bundled plan limits :::note Unbound and Bundled plans have been deprecated and are no longer available for new accounts. ::: If your Worker is on an Unbound plan, your limits are exactly the same as the Workers Paid plan. 
If your Worker is on a Bundled plan, your limits are the same as the Workers Paid plan except for the following differences: - Your limit for [subrequests](/workers/platform/limits/#subrequests) is 50/request - Your limit for [CPU time](/workers/platform/limits/#cpu-time) is 50 ms for HTTP requests and 50 ms for [Cron Triggers](/workers/configuration/cron-triggers/) - You have no [Duration](/workers/platform/limits/#duration) limits for [Cron Triggers](/workers/configuration/cron-triggers/), [Durable Object alarms](/durable-objects/api/alarms/), or [Queue consumers](/queues/configuration/javascript-apis/#consumer) - Your Cache API limit for calls per request is 50 --- ## Static Assets ### Files There is a 20,000 file count limit per [Worker version](/workers/configuration/versions-and-deployments/), and a 25 MiB individual file size limit. This matches the [limits in Cloudflare Pages](/pages/platform/limits/) today. ### Headers A `_headers` file may contain up to 100 rules and each line may contain up to 2,000 characters. The entire line, including spacing, header name, and value, counts towards this limit. ### Redirects A `_redirects` file may contain up to 2,000 static redirects and 100 dynamic redirects, for a combined total of 2,100 redirects. Each redirect declaration has a 1,000-character limit. --- ## Related resources Review other developer platform resource limits. - [KV limits](/kv/platform/limits/) - [Durable Object limits](/durable-objects/platform/limits/) - [Queues limits](/queues/platform/limits/) --- # Pricing URL: https://developers.cloudflare.com/workers/platform/pricing/ import { GlossaryTooltip, Render } from "~/components"; By default, users have access to the Workers Free plan. The Workers Free plan includes limited usage of Workers, Pages Functions, Workers KV and Hyperdrive. Read more about the [Free plan limits](/workers/platform/limits/#worker-limits). 
The Workers Paid plan includes Workers, Pages Functions, Workers KV, Hyperdrive, and Durable Objects usage for a minimum charge of $5 USD per month for an account. The plan includes increased initial usage allotments, with clear charges for usage that exceeds the base plan. There are no additional charges for data transfer (egress) or throughput (bandwidth). All included usage is on a monthly basis.

:::note[Pages Functions billing]
All [Pages Functions](/pages/functions/) are billed as Workers. All pricing and inclusions in this document apply to Pages Functions. Refer to [Functions Pricing](/pages/functions/pricing/) for more information on Pages Functions pricing.
:::

## Workers

Users on the Workers Paid plan have access to the Standard usage model. Workers Enterprise accounts are billed based on the usage model specified in their contract. To switch to the Standard usage model, reach out to your CSM.

|              | Requests 1, 2 | Duration | CPU time |
| ------------ | ------------- | -------- | -------- |
| **Free**     | 100,000 per day | No charge for duration | 10 milliseconds of CPU time per invocation |
| **Standard** | 10 million included per month<br />+$0.30 per additional million | No charge or limit for duration | 30 million CPU milliseconds included per month<br />+$0.02 per additional million CPU milliseconds<br /><br />Max of [5 minutes of CPU time](/workers/platform/limits/#worker-limits) per invocation (default: 30 seconds)<br />Max of 15 minutes of CPU time per [Cron Trigger](/workers/configuration/cron-triggers/) or [Queue Consumer](/queues/configuration/javascript-apis/#consumer) invocation |

1 Inbound requests to your Worker. Cloudflare does not bill for [subrequests](/workers/platform/limits/#subrequests) you make from your Worker.

2 Requests to static assets are free and unlimited.

### Example pricing

#### Example 1

A Worker that serves 15 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

|                  | Monthly Costs | Formula |
| ---------------- | ------------- | ------- |
| **Subscription** | $5.00         |         |
| **Requests**     | $1.50         | (15,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time**     | $1.50         | ((7 ms of CPU time per request \* 15,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $8.00         |         |

#### Example 2

A project that serves 15 million requests per month, with 80% (12 million) of requests serving [static assets](/workers/static-assets/) and the remaining 20% (3 million) invoking dynamic Worker code. The Worker uses an average of 7 milliseconds (ms) of CPU time per request.

Requests to static assets are free and unlimited. This project would have the following estimated costs:

|                               | Monthly Costs | Formula |
| ----------------------------- | ------------- | ------- |
| **Subscription**              | $5.00         |         |
| **Requests to static assets** | $0            | -       |
| **Requests to Worker**        | $0            | -       |
| **CPU time**                  | $0            | -       |
| **Total**                     | $5.00         |         |

#### Example 3

A Worker that runs on a [Cron Trigger](/workers/configuration/cron-triggers/) once an hour to collect data from multiple APIs, process the data and create a report.
- 720 requests/month
- 3 minutes (180,000 ms) of CPU time per request

In this scenario, the estimated monthly cost would be calculated as:

|                  | Monthly Costs | Formula |
| ---------------- | ------------- | ------- |
| **Subscription** | $5.00         |         |
| **Requests**     | $0.00         | -       |
| **CPU time**     | $1.99         | ((180,000 ms of CPU time per request \* 720 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $6.99         |         |

#### Example 4

A high-traffic Worker that serves 100 million requests per month, and uses an average of 7 milliseconds (ms) of CPU time per request, would have the following estimated costs:

|                  | Monthly Costs | Formula |
| ---------------- | ------------- | ------- |
| **Subscription** | $5.00         |         |
| **Requests**     | $27.00        | (100,000,000 requests - 10,000,000 included requests) / 1,000,000 \* $0.30 |
| **CPU time**     | $13.40        | ((7 ms of CPU time per request \* 100,000,000 requests) - 30,000,000 included CPU ms) / 1,000,000 \* $0.02 |
| **Total**        | $45.40        |         |

:::note[Custom limits]
To prevent accidental runaway bills or denial-of-wallet attacks, configure the maximum amount of CPU time that can be used per invocation by [defining limits in your Worker's Wrangler file](/workers/wrangler/configuration/#limits), or via the Cloudflare dashboard (**Workers & Pages** > Select your Worker > **Settings** > **CPU Limits**).

If you had a Worker on the Bundled usage model prior to the migration to Standard pricing on March 1, 2024, Cloudflare automatically added a 50 ms CPU limit to your Worker.
:::

### How to switch usage models

:::note
Some Workers Enterprise customers maintain the ability to change usage models.
:::

Users on the Workers Paid plan have access to the Standard usage model.
However, some users may still have a legacy usage model configured. Legacy usage models include Workers Unbound and Workers Bundled. Users are advised to move to the Workers Standard usage model. Changing the usage model only affects billable usage, and has no technical implications.

To change your default account-wide usage model:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers-and-pages) and select your account.
2. In Account Home, select **Workers & Pages**.
3. Find **Usage Model** on the right-side menu > **Change**.

Usage models may be changed at the individual Worker level:

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/workers/services/view/:worker/production/settings) and select your account.
2. In Account Home, select **Workers & Pages**.
3. In **Overview**, select your Worker > **Settings** > **Usage Model**.

Existing Workers will not be impacted when changing the default usage model. You may change the usage model for individual Workers without affecting your account-wide default usage model.

## Workers Logs

:::note[Workers Logs documentation]
For more information and [examples of Workers Logs billing](/workers/observability/logs/workers-logs/#example-pricing), refer to the [Workers Logs documentation](/workers/observability/logs/workers-logs).
:::

## Workers Trace Events Logpush

Workers Logpush is only available on the Workers Paid plan.

|            | Paid plan                          |
| ---------- | ---------------------------------- |
| Requests 1 | 10 million / month, +$0.05/million |

1 Workers Logpush charges for request logs that reach your end destination after applying filtering or sampling.

## Workers KV

:::note[KV documentation]
To learn more about KV, refer to the [KV documentation](/kv/).
:::

## Hyperdrive

:::note[Hyperdrive documentation]
To learn more about Hyperdrive, refer to the [Hyperdrive documentation](/hyperdrive/).
:::

## Queues

:::note[Queues billing examples]
To learn more about Queues pricing and review billing examples, refer to [Queues Pricing](/queues/platform/pricing/).
:::

## D1

D1 is available on both the Workers Free and Workers Paid plans.

:::note[D1 billing]
Refer to [D1 Pricing](/d1/platform/pricing/) to learn more about how D1 is billed.
:::

## Durable Objects

:::note[Durable Objects billing examples]
For more information and [examples of Durable Objects billing](/durable-objects/platform/pricing#compute-billing-examples), refer to [Durable Objects Pricing](/durable-objects/platform/pricing/).
:::

## Vectorize

Vectorize is currently only available on the Workers Paid plan.

## Service bindings

Requests made from your Worker to another Worker via a [Service Binding](/workers/runtime-apis/bindings/service-bindings/) do not incur additional request fees. This allows you to split functionality into multiple Workers without incurring additional costs.

For example, if Worker A makes a subrequest to Worker B via a Service Binding, or calls an RPC method provided by Worker B via a Service Binding, this is billed as:

- One request (for the initial invocation of Worker A)
- The total amount of CPU time used across both Worker A and Worker B

:::note[Only available on Workers Standard pricing]
If your Worker is on the deprecated Bundled or Unbound pricing plans, incoming requests from Service Bindings are charged the same as requests from the Internet. In the example above, you would be charged for two requests: one to Worker A, and one to Worker B.
:::

## Fine Print

The Workers Paid plan is separate from any other Cloudflare plan (Free, Professional, Business) you may have. If you are an Enterprise customer, reach out to your account team to confirm pricing details.

Only requests that hit a Worker will count against your limits and your bill. Since Cloudflare Workers runs before the Cloudflare cache, the caching of a request still incurs costs.
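The Standard usage model charges shown in the examples above can be sketched as a small calculator. This is an illustrative sketch based on the rates and monthly allotments listed in this document, not an official billing tool:

```javascript
// Illustrative sketch of Workers Standard pricing (not an official billing
// tool). Rates and included allotments are taken from the table above.
const SUBSCRIPTION = 5.0; // USD per month
const INCLUDED_REQUESTS = 10_000_000; // requests included per month
const PRICE_PER_MILLION_REQUESTS = 0.3; // USD
const INCLUDED_CPU_MS = 30_000_000; // CPU milliseconds included per month
const PRICE_PER_MILLION_CPU_MS = 0.02; // USD

function estimateMonthlyCost(requests, avgCpuMsPerRequest) {
  // Only usage beyond the included allotments is billed.
  const requestCost =
    (Math.max(0, requests - INCLUDED_REQUESTS) / 1_000_000) *
    PRICE_PER_MILLION_REQUESTS;
  const cpuCost =
    (Math.max(0, requests * avgCpuMsPerRequest - INCLUDED_CPU_MS) / 1_000_000) *
    PRICE_PER_MILLION_CPU_MS;
  return SUBSCRIPTION + requestCost + cpuCost;
}
```

Running it against Example 1 (15 million requests at 7 ms of CPU time each) yields $8.00, matching the table above.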
Refer to [Limits](/workers/platform/limits/) to review definitions and behavior after a limit is hit.

---

# Choose a data or storage product

URL: https://developers.cloudflare.com/workers/platform/storage-options/

import { Render, Details } from "~/components";

Cloudflare Workers support a range of storage and database options for persisting different types of data across different use-cases, from key-value stores (like [Workers KV](/kv/)) through to SQL databases (such as [D1](/d1/)). This guide describes the use-cases suited to each storage option, as well as their performance and consistency properties.

:::note[Pages Functions]
Storage options can also be used by your front-end application built with Cloudflare Pages. For more information on available storage options for Pages applications, refer to the [Pages Functions bindings documentation](/pages/functions/bindings/).
:::

Available storage and persistence products include:

- [Workers KV](#workers-kv) for key-value storage.
- [R2](#r2) for object storage, including use-cases where S3 compatible storage is required.
- [Durable Objects](#durable-objects) for transactional, globally coordinated storage.
- [D1](#d1) as a relational, SQL-based database.
- [Queues](#queues) for job queueing, batching and inter-Service (Worker to Worker) communication.
- [Hyperdrive](/hyperdrive/) for connecting to and speeding up access to existing hosted and on-premises databases.
- [Analytics Engine](/analytics/analytics-engine/) for storing and querying (using SQL) time-series data and product metrics at scale.
- [Vectorize](/vectorize/) for vector search and storing embeddings from [Workers AI](/workers-ai/).

Applications built on the Workers platform may combine one or more storage components as they grow, scale or as requirements demand.
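As an illustration of combining products, a Worker that uses KV, R2, and D1 together might declare bindings along these lines in its Wrangler configuration. This is a sketch; every name and ID below is a hypothetical placeholder:

```toml
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

# Key-value storage (Workers KV)
kv_namespaces = [
  { binding = "CONFIG", id = "<kv-namespace-id>" }
]

# Object storage (R2)
[[r2_buckets]]
binding = "ASSETS"
bucket_name = "my-assets-bucket"

# Relational storage (D1)
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "<d1-database-id>"
```

Each binding then appears on the `env` parameter of the Worker's handlers, so storage products can be added incrementally as requirements grow.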
## Choose a storage product

## Performance and consistency

The following table highlights the performance and consistency characteristics of the primary storage offerings available to Cloudflare Workers:

| Feature                     | Workers KV | R2 | Durable Objects | D1 |
| --------------------------- | ---------- | -- | --------------- | -- |
| Maximum storage per account | Unlimited [^1] | Unlimited [^2] | Unlimited [^3] | 250 GB [^4] |
| Storage grouping name       | Namespace | Bucket | Durable Object | Database |
| Maximum size per value      | 25 MiB | 5 TiB per object | 128 KiB per value | 10 GB per database [^5] |
| Consistency model           | Eventual: updates take up to 60s to be reflected | Strong (read-after-write) [^6] | Serializable (with transactions) | Serializable (no replicas) / Causal (with replicas) |
| Supported APIs              | Workers, HTTP/REST API | Workers, S3 compatible | Workers | Workers, HTTP/REST API |

[^1]: Free accounts are limited to 1 GiB of KV storage.

[^2]: Free accounts are limited to 10 GB of R2 storage.

[^3]: Free accounts are limited to 5 GB of storage for SQLite-backed Durable Objects. 50 GB limit applies for KV-backed Durable Objects. Refer to [Durable Objects limits](/durable-objects/platform/limits/).

[^4]: Free accounts are limited to 5 GB of database storage.

[^5]: Free accounts are limited to 500 MB per database.

[^6]: Refer to the [R2 documentation](/r2/reference/consistency/) for more details on R2's consistency model.
## Workers KV Workers KV is an eventually consistent key-value data store that caches on the Cloudflare global network. It is ideal for projects that require: - High volumes of reads and/or repeated reads to the same keys. - Per-object time-to-live (TTL). - Distributed configuration. To get started with KV: - Read how [KV works](/kv/concepts/how-kv-works/). - Create a [KV namespace](/kv/concepts/kv-namespaces/). - Review the [KV Runtime API](/kv/api/). - Learn about KV [Limits](/kv/platform/limits/). ## R2 R2 is S3-compatible blob storage that allows developers to store large amounts of unstructured data without egress fees associated with typical cloud storage services. It is ideal for projects that require: - Storage for files which are infrequently accessed. - Large object storage (for example, gigabytes or more per object). - Strong consistency per object. - Asset storage for websites (refer to [caching guide](/r2/buckets/public-buckets/#caching)) To get started with R2: - Read the [Get started guide](/r2/get-started/). - Learn about R2 [Limits](/r2/platform/limits/). - Review the [R2 Workers API](/r2/api/workers/workers-api-reference/). ## Durable Objects Durable Objects provide low-latency coordination and consistent storage for the Workers platform through global uniqueness and a transactional storage API. - Global Uniqueness guarantees that there will be a single instance of a Durable Object class with a given ID running at once, across the world. Requests for a Durable Object ID are routed by the Workers runtime to the Cloudflare data center that owns the Durable Object. - The transactional storage API provides strongly consistent key-value storage to the Durable Object. Each Object can only read and modify keys associated with that Object. Execution of a Durable Object is single-threaded, but multiple request events may still be processed out-of-order from how they arrived at the Object. 
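A minimal sketch of this model, assuming a hypothetical `Counter` class and a simplified view of the transactional storage API:

```javascript
// Hedged sketch of a Durable Object: each instance has exclusive,
// strongly consistent access to its own storage. The class name and
// storage key below are hypothetical.
export class Counter {
  constructor(state, env) {
    this.state = state; // exposes the transactional storage API
  }

  async fetch(request) {
    // Reads and writes only touch keys owned by this Object.
    const current = (await this.state.storage.get("count")) ?? 0;
    await this.state.storage.put("count", current + 1);
    return new Response(String(current + 1));
  }
}
```

Because all requests for a given Object ID are routed to the same instance, the read-increment-write sequence above cannot race with another instance of the same Object.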
It is ideal for projects that require:

- Real-time collaboration (such as a chat application or a game server).
- Consistent storage.
- Data locality.

To get started with Durable Objects:

- Read the [introductory blog post](https://blog.cloudflare.com/introducing-workers-durable-objects/).
- Review the [Durable Objects documentation](/durable-objects/).
- Get started with [Durable Objects](/durable-objects/get-started/).
- Learn about Durable Objects [Limits](/durable-objects/platform/limits/).

## D1

[D1](/d1/) is Cloudflare’s native serverless database. With D1, you can create a database by importing data or defining your tables and writing your queries within a Worker or through the API.

D1 is ideal for:

- Persistent, relational storage for user data, account data, and other structured datasets.
- Use-cases that require querying across your data ad-hoc (using SQL).
- Workloads with a high ratio of reads to writes (most web applications).

To get started with D1:

- Read [the documentation](/d1)
- Follow the [Get started guide](/d1/get-started/) to provision your first D1 database.
- Review the [D1 Workers Binding API](/d1/worker-api/).

:::note
If your working data size exceeds 10 GB (the maximum size for a D1 database), consider splitting the database into multiple, smaller D1 databases.
:::

## Queues

Cloudflare Queues allows developers to send and receive messages with guaranteed delivery. It integrates with [Cloudflare Workers](/workers), offers at-least-once delivery and message batching, and does not charge for egress bandwidth.

Queues is ideal for:

- Offloading work from a request to be processed later.
- Sending data from Worker to Worker (inter-Service communication).
- Buffering or batching data before writing to upstream systems, including third-party APIs or [Cloudflare R2](/queues/examples/send-errors-to-r2/).

To get started with Queues:

- [Set up your first queue](/queues/get-started/).
- Learn more [about how Queues works](/queues/reference/how-queues-works/).

## Hyperdrive

Hyperdrive is a service that accelerates queries you make to existing databases, making it faster to access your data from across the globe, irrespective of your users’ location.

Hyperdrive allows you to:

- Connect to an existing database from Workers without connection overhead.
- Cache frequent queries across Cloudflare's global network to reduce response times on highly trafficked content.
- Reduce load on your origin database with connection pooling.

To get started with Hyperdrive:

- [Connect Hyperdrive](/hyperdrive/get-started/) to your existing database.
- Learn more [about how Hyperdrive speeds up your database queries](/hyperdrive/configuration/how-hyperdrive-works/).

## Analytics Engine

Analytics Engine is Cloudflare's time-series and metrics database that allows you to write unlimited-cardinality analytics at scale, using a built-in API to write data points from Workers and query that data using SQL directly.

Analytics Engine allows you to:

- Expose custom analytics to your own customers
- Build usage-based billing systems
- Understand the health of your service on a per-customer or per-user basis
- Add instrumentation to frequently called code paths, without impacting performance or overwhelming external analytics systems with events

Cloudflare uses Analytics Engine internally to store and produce per-product metrics for products like D1 and R2 at scale.
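As a sketch of writing a data point from a Worker, assuming a hypothetical dataset binding named `METRICS` (the `writeDataPoint()` call takes `blobs`, `doubles`, and `indexes` arrays):

```javascript
// Hedged sketch: record one usage event into an Analytics Engine dataset
// bound as `METRICS`. The binding name and field layout are hypothetical.
function recordUsage(env, customerId, responseTimeMs) {
  env.METRICS.writeDataPoint({
    indexes: [customerId],     // index key, e.g. one series per customer
    blobs: ["api_request"],    // string dimensions to group by
    doubles: [responseTimeMs], // numeric values to aggregate over
  });
}
```

A Worker would call this once per request; the stored points can then be queried with SQL, for example to sum response time per customer.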
To get started with Analytics Engine: - Learn how to [get started with Analytics Engine](/analytics/analytics-engine/get-started/) - See [an example of writing time-series data to Analytics Engine](/analytics/analytics-engine/recipes/usage-based-billing-for-your-saas-product/) - Understand the [SQL API](/analytics/analytics-engine/sql-api/) for reading data from your Analytics Engine datasets ## Vectorize Vectorize is a globally distributed vector database that enables you to build full-stack, AI-powered applications with Cloudflare Workers and [Workers AI](/workers-ai/). Vectorize allows you to: - Store embeddings from any vector embeddings model (Bring Your Own embeddings) for semantic search and classification tasks. - Add context to Large Language Model (LLM) queries by using vector search as part of a [Retrieval Augmented Generation](/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/) (RAG) workflow. - [Filter on vector metadata](/vectorize/reference/metadata-filtering/) to reduce the search space and return more relevant results. To get started with Vectorize: - [Create your first vector database](/vectorize/get-started/intro/). - Combine [Workers AI and Vectorize](/vectorize/get-started/embeddings/) to generate, store and query text embeddings. - Learn more about [how vector databases work](/vectorize/reference/what-is-a-vector-database/). ## D1 vs Hyperdrive D1 is a standalone, serverless database that provides a SQL API, using SQLite's SQL semantics, to store and access your relational data. Hyperdrive is a service that lets you connect to your existing, regional PostgreSQL databases and improves database performance by optimizing them for global, scalable data access from Workers. - If you are building a new project on Workers or are considering migrating your data, use D1. - If you are building a Workers project with an existing PostgreSQL database, use Hyperdrive. :::note You cannot use D1 with Hyperdrive. 
However, D1 does not need to be used with Hyperdrive because it does not have slow connection setups which would benefit from Hyperdrive's connection pooling. D1 data can also be cached within Workers using the [Cache API](/workers/runtime-apis/cache/). ::: --- # How the Cache works URL: https://developers.cloudflare.com/workers/reference/how-the-cache-works/ Workers was designed and built on top of Cloudflare's global network to allow developers to interact directly with the Cloudflare cache. The cache can provide ephemeral, data center-local storage, as a convenient way to frequently access static or dynamic content. By allowing developers to write to the cache, Workers provide a way to customize cache behavior on Cloudflare’s CDN. To learn about the benefits of caching, refer to the Learning Center’s article on [What is Caching?](https://www.cloudflare.com/learning/cdn/what-is-caching/). Cloudflare Workers run before the cache but can also be utilized to modify assets once they are returned from the cache. Modifying assets returned from cache allows for the ability to sign or personalize responses while also reducing load on an origin and reducing latency to the end user by serving assets from a nearby location. ## Interact with the Cloudflare Cache Conceptually, there are two ways to interact with Cloudflare’s Cache using a Worker: - Call to [`fetch()`](/workers/runtime-apis/fetch/) in a Workers script. Requests proxied through Cloudflare are cached even without Workers according to a zone’s default or configured behavior (for example, static assets like files ending in `.jpg` are cached by default). Workers can further customize this behavior by: - Setting Cloudflare cache rules (that is, operating on the `cf` object of a [request](/workers/runtime-apis/request/)). - Store responses using the [Cache API](/workers/runtime-apis/cache/) from a Workers script. 
This allows caching responses that did not come from an origin and also provides finer control by: - Customizing cache behavior of any asset by setting headers such as `Cache-Control` on the response passed to `cache.put()`. - Caching responses generated by the Worker itself through `cache.put()`. :::caution[Tiered caching] The Cache API is not compatible with tiered caching. To take advantage of tiered caching, use the [fetch API](/workers/runtime-apis/fetch/). ::: ### Single file purge assets cached by a worker When using single-file purge to purge assets cached by a Worker, make sure not to purge the end user URL. Instead, purge the URL that is in the `fetch` request. For example, you have a Worker that runs on `https://example.com/hello` and this Worker makes a `fetch` request to `https://notexample.com/hello`. As far as cache is concerned, the asset in the `fetch` request (`https://notexample.com/hello`) is the asset that is cached. To purge it, you need to purge `https://notexample.com/hello`. Purging the end user URL, `https://example.com/hello`, will not work because that is not the URL that cache sees. You need to confirm in your Worker which URL you are actually fetching, so you can purge the correct asset. In the previous example, `https://notexample.com/hello` is not proxied through Cloudflare. If `https://notexample.com/hello` was proxied ([orange-clouded](/dns/proxy-status/)) through Cloudflare, then you must own `notexample.com` and purge `https://notexample.com/hello` from the `notexample.com` zone. To better understand the example, review the following diagram: ```mermaid flowchart TD accTitle: Single file purge assets cached by a worker accDescr: This diagram is meant to help choose how to purge a file. 
A("You have a Worker script that runs on https://example.com/hello
and this Worker makes a fetch request to https://notexample.com/hello.") --> B(Is notexample.com
an active zone on Cloudflare?) B -- Yes --> C(Is https://notexample.com/
proxied through Cloudflare?) B -- No --> D(Purge https://notexample.com/hello
from the original example.com zone.) C -- Yes --> E(Do you own
notexample.com?) C -- No --> F(Purge https://notexample.com/hello
from the original example.com zone.) E -- Yes --> G(Purge https://notexample.com/hello
from the notexample.com zone.) E -- No --> H(Sorry, you can not purge the asset.
Only the owner of notexample.com can purge it.)
```

### Purge assets stored with the Cache API

Assets stored in the cache through [Cache API](/workers/runtime-apis/cache/) operations can be purged in a couple of ways:

- Call `cache.delete` within a Worker to invalidate the cache for the asset with a matching request variable.
  - Assets purged in this way are only purged locally to the data center where the Worker runtime was executed.
  - To purge an asset globally, you must use the standard cache purge options. Based on the Cache API implementation, not all cache purge endpoints function for purging assets stored by the Cache API.
- All assets on a zone can be purged by using the [Purge Everything](/cache/how-to/purge-cache/purge-everything/) cache operation. This purge will remove all assets associated with a Cloudflare zone from cache in all data centers, regardless of the method set.
- [Cache Tags](/cache/how-to/purge-cache/purge-by-tags/#add-cache-tag-http-response-headers) can be added to requests dynamically in a Worker by calling `response.headers.append()` and appending `Cache-Tag` values dynamically to that request. Once set, those tags can be used to selectively purge assets from cache without invalidating all cached assets on a zone.
- Currently, it is not possible to purge a URL stored through the Cache API that uses a custom cache key set by a Worker. Instead, use a [custom key created via Cache Rules](/cache/how-to/cache-rules/settings/#cache-key). Alternatively, purge your assets using purge everything, purge by tag, purge by host or purge by prefix.

## Edge versus browser caching

The browser cache is controlled through the `Cache-Control` header sent in the response to the client (the `Response` instance returned from the handler). Workers can customize browser cache behavior by setting this header on the response. Other means to control Cloudflare’s cache that are not mentioned in this documentation include: Page Rules and Cloudflare cache settings.
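As a brief sketch of customizing browser caching, a Worker can copy a response and set its `Cache-Control` header before returning it (the helper name and max-age value are illustrative):

```javascript
// Sketch: copy the upstream response (responses from fetch are immutable)
// and set a browser cache policy on the copy before returning it.
function withBrowserCaching(response, maxAgeSeconds) {
  const copy = new Response(response.body, response);
  copy.headers.set("Cache-Control", `public, max-age=${maxAgeSeconds}`);
  return copy;
}
```

The copy preserves the original status and remaining headers, while the rewritten `Cache-Control` value instructs browsers how long they may reuse the response.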
Refer to [How to customize Cloudflare’s cache](/cache/concepts/customize-cache/) if you wish to avoid writing JavaScript while still retaining some granularity of control.

:::note[What should I use: the Cache API or fetch for caching objects on Cloudflare?]
For requests where Workers are behaving as middleware (that is, Workers are sending a subrequest via `fetch`) it is recommended to use `fetch`. This is because preexisting settings are in place that optimize caching while preventing unintended dynamic caching.

For projects where there is no backend (that is, the entire project is on Workers as in [Workers Sites](/workers/configuration/sites/start-from-scratch)) the Cache API is the only option to customize caching.

The asset will be cached under the hostname specified within the Worker's subrequest — not the Worker's own hostname. Therefore, in order to purge the cached asset, the purge will have to be performed for the hostname included in the Worker subrequest.
:::

### `fetch`

In the context of Workers, a [`fetch`](/workers/runtime-apis/fetch/) provided by the runtime communicates with the Cloudflare cache. First, `fetch` checks to see if the URL matches a different zone. If it does, it reads through that zone’s cache (or Worker). Otherwise, it reads through its own zone’s cache, even if the URL is for a non-Cloudflare site.

Cache settings on `fetch` automatically apply caching rules based on your Cloudflare settings. `fetch` does not allow you to modify or inspect objects before they reach the cache, but does allow you to modify how it will cache.

When a response fills the cache, the response header contains `CF-Cache-Status: HIT`. You can tell an object is attempting to cache if the `CF-Cache-Status` header is present at all.

This [template](/workers/examples/cache-using-fetch/) shows ways to customize Cloudflare cache behavior on a given request using fetch.
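As a sketch of customizing cache behavior through `fetch`, Cloudflare-specific settings such as `cacheTtl` and `cacheEverything` are passed on the non-standard `cf` property of the fetch options. The helper below is illustrative, and only the Workers runtime honors these options:

```javascript
// Sketch: build fetch options carrying Cloudflare-specific cache settings
// on the non-standard `cf` property (ignored outside the Workers runtime).
function edgeCacheOptions(ttlSeconds) {
  return {
    cf: {
      cacheEverything: true, // also cache responses not cached by default
      cacheTtl: ttlSeconds,  // edge cache TTL in seconds (illustrative value)
    },
  };
}

// In a Worker handler: return fetch(request, edgeCacheOptions(300));
```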
### Cache API The [Cache API](/workers/runtime-apis/cache/) can be thought of as an ephemeral key-value store, whereby the `Request` object (or more specifically, the request URL) is the key, and the `Response` is the value. There are two types of cache namespaces available to the Cloudflare Cache: - **`caches.default`** – You can access the default cache (the same cache shared with `fetch` requests) by accessing `caches.default`. This is useful when needing to override content that is already cached, after receiving the response. - **`caches.open()`** – You can access a namespaced cache (separate from the cache shared with `fetch` requests) using `let cache = await caches.open(CACHE_NAME)`. Note that [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) is an async function, unlike `caches.default`. When to use the Cache API: - When you want to programmatically save and/or delete responses from a cache. For example, say an origin is responding with a `Cache-Control: max-age:0` header and cannot be changed. Instead, you can clone the `Response`, adjust the header to the `max-age=3600` value, and then use the Cache API to save the modified `Response` for an hour. - When you want to programmatically access a Response from a cache without relying on a `fetch` request. For example, you can check to see if you have already cached a `Response` for the `https://example.com/slow-response` endpoint. If so, you can avoid the slow request. This [template](/workers/examples/cache-api/) shows ways to use the cache API. For limits of the cache API, refer to [Limits](/workers/platform/limits/#cache-api-limits). :::caution[Tiered caching and the Cache API] Cache API within Workers does not support tiered caching. Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. 
The Cache API is local to a data center: `cache.match` does a lookup, `cache.put` stores a response, and `cache.delete` removes a stored response only in the cache of the data center that the Worker handling the request is in. Because these methods apply only to the local cache, they will not work with tiered cache.
:::

## Related resources

- [Cache API](/workers/runtime-apis/cache/)
- [Customize cache behavior with Workers](/cache/interaction-cloudflare-products/workers/)

---

# Workers for Platforms

URL: https://developers.cloudflare.com/workers/platform/workers-for-platforms/

Deploy custom code on behalf of your users, or let your users directly deploy their own code to your platform without having to manage infrastructure.

---

# How Workers works

URL: https://developers.cloudflare.com/workers/reference/how-workers-works/

import { Render, NetworkMap, WorkersIsolateDiagram } from "~/components"

Though Cloudflare Workers behave similarly to [JavaScript](https://www.cloudflare.com/learning/serverless/serverless-javascript/) in the browser or in Node.js, there are a few differences in how you have to think about your code. Under the hood, the Workers runtime uses the [V8 engine](https://www.cloudflare.com/learning/serverless/glossary/what-is-chrome-v8/) — the same engine used by Chromium and Node.js. The Workers runtime also implements many of the standard [APIs](/workers/runtime-apis/) available in most modern browsers.

The differences between JavaScript written for the browser or Node.js happen at runtime.
Rather than running on an individual's machine (for example, [a browser application or on a centralized server](https://www.cloudflare.com/learning/serverless/glossary/client-side-vs-server-side/)), Workers functions run on [Cloudflare's global network](https://www.cloudflare.com/network) - a growing global network of thousands of machines distributed across hundreds of locations. Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined applications. This guide will review some of those differences. For more information, refer to the [Cloud Computing without Containers blog post](https://blog.cloudflare.com/cloud-computing-without-containers). The three largest differences are: Isolates, Compute per Request, and Distributed Execution. ## Isolates [V8](https://v8.dev) orchestrates isolates: lightweight contexts that provide your code with variables it can access and a safe environment to be executed within. You could even consider an isolate a sandbox for your function to run in. A given isolate has its own scope, but isolates are not necessarily long-lived. An isolate may be spun down and evicted for a number of reasons: * Resource limitations on the machine. * A suspicious script - anything seen as trying to break out of the isolate sandbox. * Individual [resource limits](/workers/platform/limits/). Because of this, it is generally advised that you not store mutable state in your global scope unless you have accounted for this contingency. If you are interested in how Cloudflare handles security with the Workers runtime, you can [read more about how Isolates relate to Security and Spectre Threat Mitigation](/workers/reference/security-model/). 
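A short sketch of the pitfall described above: module-scope state lives only as long as its isolate, so it can serve as an opportunistic cache but never as a source of truth:

```javascript
// Module-scope state is per-isolate: it is lost whenever the isolate is
// evicted, and different locations run different isolates.
let requestCount = 0; // may reset to 0 at any time

const worker = {
  async fetch(request) {
    // Fine as an opportunistic counter or cache; never rely on it as
    // durable or globally consistent state.
    requestCount++;
    return new Response(`This isolate has seen ${requestCount} request(s)`);
  },
};

export default worker;
```

For state that must survive eviction or be shared across locations, use a storage product such as Workers KV or Durable Objects instead.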
## Compute per request

## Distributed execution

Isolates are resilient and continuously available for the duration of a request, but in rare instances isolates may be evicted. When a Worker hits official [limits](/workers/platform/limits/) or when resources are exceptionally tight on the machine the request is running on, the runtime will selectively evict isolates after their events are properly resolved.

Like all other JavaScript platforms, a single Workers instance may handle multiple requests, including concurrent requests, in a single-threaded event loop. That means that while one request is awaiting an `async` task (such as a `fetch` subrequest), other incoming requests may (or may not) be processed in the meantime. Because there is no guarantee that any two user requests will be routed to the same or a different instance of your Worker, Cloudflare recommends you do not use or mutate global state.

## Related resources

* [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) - Review how incoming HTTP requests to a Worker are passed to the `fetch()` handler.
* [Request](/workers/runtime-apis/request/) - Learn how incoming HTTP requests are passed to the `fetch()` handler.
* [Workers limits](/workers/platform/limits/) - Learn about Workers limits including Worker size, startup time, and more.

---

# Reference

URL: https://developers.cloudflare.com/workers/reference/

import { DirectoryListing } from "~/components";

Conceptual knowledge about how Workers works.
---

# Migrate from Service Workers to ES Modules

URL: https://developers.cloudflare.com/workers/reference/migrate-to-module-workers/

import { WranglerConfig } from "~/components";

This guide will show you how to migrate your Workers from the [Service Worker](https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API) format to the [ES modules](https://blog.cloudflare.com/workers-javascript-modules/) format.

## Advantages of migrating

There are several reasons to migrate your Workers to the ES modules format:

1. Your Worker will run faster. With Service Workers, bindings are exposed as globals. This means that for every request, the Workers runtime must create a new JavaScript execution context, which adds overhead and time. Workers written using ES modules can reuse the same execution context across multiple requests.
2. Implementing [Durable Objects](/durable-objects/) requires Workers that use ES modules.
3. Bindings for [D1](/d1/), [Workers AI](/workers-ai/), [Vectorize](/vectorize/), [Workflows](/workflows/), and [Images](/images/transform-images/bindings/) can only be used from Workers that use ES modules.
4. You can [gradually deploy changes to your Worker](/workers/configuration/versions-and-deployments/gradual-deployments/) when you use the ES modules format.
5. You can easily publish Workers using ES modules to `npm`, allowing you to import and reuse Workers within your codebase.

## Migrate a Worker

The following example demonstrates a Worker that redirects all incoming requests to a URL with a `301` status code.
With the Service Worker syntax, the example Worker looks like:

```js
async function handler(request) {
	const base = 'https://example.com';
	const statusCode = 301;
	const destination = new URL(request.url, base);
	return Response.redirect(destination.toString(), statusCode);
}

// Initialize Worker
addEventListener('fetch', event => {
	event.respondWith(handler(event.request));
});
```

Workers using ES modules format replace the `addEventListener` syntax with an object definition, which must be the file's default export (via `export default`). The previous example code becomes:

```js
export default {
	fetch(request) {
		const base = "https://example.com";
		const statusCode = 301;

		const source = new URL(request.url);
		const destination = new URL(source.pathname, base);

		return Response.redirect(destination.toString(), statusCode);
	},
};
```

## Bindings

[Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources on the Cloudflare developer platform. Workers using ES modules format do not rely on any global bindings. However, Service Worker syntax accesses bindings on the global scope.

To understand bindings, refer to the following `TODO` KV namespace binding example. To create a `TODO` KV namespace binding, you will:

1. Create a KV namespace named `My Tasks` and receive an ID that you will use in your binding.
2. Create a Worker.
3. Find your Worker's [Wrangler configuration file](/workers/wrangler/configuration/) and add a KV namespace binding:

```toml
kv_namespaces = [
	{ binding = "TODO", id = "" }
]
```

In the following sections, you will use your binding in Service Worker and ES modules format.

:::note[Reference KV from Durable Objects and Workers]

To learn more about how to reference KV from Workers, refer to the [KV bindings documentation](/kv/concepts/kv-bindings/).
:::

### Bindings in Service Worker format

In Service Worker syntax, your `TODO` KV namespace binding is defined in the global scope of your Worker. Your `TODO` KV namespace binding is available to use anywhere in your Worker application's code.

```js
addEventListener("fetch", (event) => {
	event.respondWith(getTodos());
});

async function getTodos() {
	// Get the value for the "to-do:123" key
	// NOTE: Relies on the TODO KV binding that maps to the "My Tasks" namespace.
	let value = await TODO.get("to-do:123");

	// Return the value, as is, for the Response
	return new Response(value);
}
```

### Bindings in ES modules format

In ES modules format, bindings are only available inside the `env` parameter that is provided at the entry point to your Worker. To access the `TODO` KV namespace binding in your Worker code, the `env` parameter must be passed from the `fetch` handler in your Worker to the `getTodos` function.

```js
import { getTodos } from './todos'

export default {
	async fetch(request, env, ctx) {
		// Passing the env parameter so other functions
		// can reference the bindings available in the Workers application
		return await getTodos(env)
	},
};
```

The following code represents a `getTodos` function that calls the `get` function on the `TODO` KV binding.

```js
async function getTodos(env) {
	// NOTE: Relies on the TODO KV binding which has been provided inside of
	// the env parameter of the `getTodos` function
	let value = await env.TODO.get("to-do:123");

	return new Response(value);
}

export { getTodos }
```

## Environment variables

[Environment variables](/workers/configuration/environment-variables/) are accessed differently in code written in ES modules format versus Service Worker format.
Review the following example environment variable configuration in the [Wrangler configuration file](/workers/wrangler/configuration/):

```toml
name = "my-worker-dev"

# Define top-level environment variables
# under the `[vars]` block using
# the `key = "value"` format
[vars]
API_ACCOUNT_ID = ""
```

### Environment variables in Service Worker format

In Service Worker format, the `API_ACCOUNT_ID` is defined in the global scope of your Worker application. Your `API_ACCOUNT_ID` environment variable is available to use anywhere in your Worker application's code.

```js
addEventListener("fetch", async (event) => {
	console.log(API_ACCOUNT_ID) // Logs ""
	return new Response("Hello, world!")
})
```

### Environment variables in ES modules format

In ES modules format, environment variables are only available inside the `env` parameter that is provided at the entrypoint to your Worker application.

```js
export default {
	async fetch(request, env, ctx) {
		console.log(env.API_ACCOUNT_ID) // Logs ""
		return new Response("Hello, world!")
	},
};
```

## Cron Triggers

To handle a [Cron Trigger](/workers/configuration/cron-triggers/) event in a Worker written with ES modules syntax, implement a [`scheduled()` event handler](/workers/runtime-apis/handlers/scheduled/#syntax), which is the equivalent of listening for a `scheduled` event in Service Worker syntax.

This example code:

```js
addEventListener("scheduled", (event) => {
	// ...
});
```

Then becomes:

```js
export default {
	async scheduled(event, env, ctx) {
		// ...
	},
};
```

## Access `event` or `context` data

Workers often need access to data not in the `request` object. For example, sometimes Workers use [`waitUntil`](/workers/runtime-apis/context/#waituntil) to delay execution. Workers using ES modules format can access `waitUntil` via the `context` parameter. Refer to [ES modules parameters](/workers/runtime-apis/handlers/fetch/#parameters) for more information.
This example code:

```js
async function triggerEvent(event) {
	// Fetch some data
	console.log('cron processed', event.scheduledTime);
}

// Initialize Worker
addEventListener('scheduled', event => {
	event.waitUntil(triggerEvent(event));
});
```

Then becomes:

```js
async function triggerEvent(event) {
	// Fetch some data
	console.log('cron processed', event.scheduledTime);
}

export default {
	async scheduled(event, env, ctx) {
		ctx.waitUntil(triggerEvent(event));
	},
};
```

## Service Worker syntax

A Worker written in Service Worker syntax consists of two parts:

1. An event listener that listens for `FetchEvents`.
2. An event handler that returns a [Response](/workers/runtime-apis/response/) object which is passed to the event’s `.respondWith()` method.

When a request is received on one of Cloudflare’s global network servers for a URL matching a Worker, Cloudflare's server passes the request to the Workers runtime. This dispatches a `FetchEvent` in the [isolate](/workers/reference/how-workers-works/#isolates) where the Worker is running.

```js
addEventListener('fetch', event => {
	event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
	return new Response('Hello worker!', {
		headers: { 'content-type': 'text/plain' },
	});
}
```

Below is an example of the request-response workflow:

1. An event listener for the `FetchEvent` tells the script to listen for any request coming to your Worker. The event handler is passed the `event` object, which includes `event.request`, a [`Request`](/workers/runtime-apis/request/) object which is a representation of the HTTP request that triggered the `FetchEvent`.
2. The call to `.respondWith()` lets the Workers runtime intercept the request in order to send back a custom response (in this example, the plain text `'Hello worker!'`).
* The `FetchEvent` handler typically culminates in a call to the method `.respondWith()` with either a [`Response`](/workers/runtime-apis/response/) or `Promise<Response>` that determines the response.
* The `FetchEvent` object also provides [two other methods](/workers/runtime-apis/handlers/fetch/) to handle unexpected exceptions and operations that may complete after a response is returned.

Learn more about [the lifecycle methods of the `fetch()` handler](/workers/runtime-apis/rpc/lifecycle/).

### Supported `FetchEvent` properties

* `event.type` string
  * The type of event. This will always return `"fetch"`.
* `event.request` Request
  * The incoming HTTP request.
* `event.respondWith(response Response | Promise<Response>)` : void
  * Refer to [`respondWith`](#respondwith).
* `event.waitUntil(promise Promise)` : void
  * Refer to [`waitUntil`](#waituntil).
* `event.passThroughOnException()` : void
  * Refer to [`passThroughOnException`](#passthroughonexception).

### `respondWith`

Intercepts the request and allows the Worker to send a custom response.

If a `fetch` event handler does not call `respondWith`, the runtime delivers the event to the next registered `fetch` event handler. In other words, while not recommended, this means it is possible to add multiple `fetch` event handlers within a Worker. If no `fetch` event handler calls `respondWith`, then the runtime forwards the request to the origin as if the Worker did not exist. However, if there is no origin – or the Worker itself is your origin server, which is always true for `*.workers.dev` domains – then you must call `respondWith` for a valid response.

```js
// Format: Service Worker
addEventListener('fetch', event => {
	let { pathname } = new URL(event.request.url);

	// Allow "/ignore/*" URLs to hit origin
	if (pathname.startsWith('/ignore/')) return;

	// Otherwise, respond with something
	event.respondWith(handler(event));
});
```

### `waitUntil`

The `waitUntil` command extends the lifetime of the `"fetch"` event.
It accepts a `Promise`-based task which the Workers runtime will execute before the handler terminates but without blocking the response. For example, this is ideal for [caching responses](/workers/runtime-apis/cache/#put) or handling logging.

With the Service Worker format, `waitUntil` is available within the `event` because it is a native `FetchEvent` property. With the ES modules format, `waitUntil` is moved and available on the `context` parameter object.

```js
// Format: Service Worker
addEventListener('fetch', event => {
	event.respondWith(handler(event));
});

async function handler(event) {
	// Forward / Proxy original request
	let res = await fetch(event.request);

	// Add custom header(s)
	res = new Response(res.body, res);
	res.headers.set('x-foo', 'bar');

	// Cache the response
	// NOTE: Does NOT block / wait
	event.waitUntil(caches.default.put(event.request, res.clone()));

	// Done
	return res;
}
```

### `passThroughOnException`

The `passThroughOnException` method prevents a runtime error response when the Worker throws an unhandled exception. Instead, the script will [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), which will proxy the request to the origin server as though the Worker was never invoked. To prevent JavaScript errors from causing entire requests to fail on uncaught exceptions, `passThroughOnException()` causes the Workers runtime to yield control to the origin server.

With the Service Worker format, `passThroughOnException` is added to the `FetchEvent` interface, making it available within the `event`. With the ES modules format, `passThroughOnException` is available on the `context` parameter object.
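For comparison, here is a minimal sketch of the same fail-open behavior in ES modules format, where the method lives on the `ctx` object. The `worker` object name is illustrative; a real Worker would export it with `export default`.

```javascript
const worker = {
	async fetch(request, env, ctx) {
		// Proxy to origin on unhandled/uncaught exceptions
		ctx.passThroughOnException();
		throw new Error('Oops');
	},
};

// In a real Worker module: export default worker;
```

Because `ctx.passThroughOnException()` is called before the exception is thrown, the runtime forwards the request to the origin instead of returning a runtime error response.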
```js
// Format: Service Worker
addEventListener('fetch', event => {
	// Proxy to origin on unhandled/uncaught exceptions
	event.passThroughOnException();
	throw new Error('Oops');
});
```

---

# Protocols

URL: https://developers.cloudflare.com/workers/reference/protocols/

Cloudflare Workers support the following protocols and interfaces:

| Protocol | Inbound | Outbound |
| -------- | ------- | -------- |
| **HTTP / HTTPS** | Handle incoming HTTP requests using the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) | Make HTTP subrequests using the [`fetch()` API](/workers/runtime-apis/fetch/) |
| **Direct TCP sockets** | Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/) | Create outbound TCP connections using the [`connect()` API](/workers/runtime-apis/tcp-sockets/) |
| **WebSockets** | Accept incoming WebSocket connections using the [`WebSocket` API](/workers/runtime-apis/websockets/), or with [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) | [MQTT over WebSockets (Pub/Sub)](/pub-sub/learning/websockets-browsers/) |
| **MQTT** | Handle incoming messages to an MQTT broker with [Pub/Sub](/pub-sub/learning/integrate-workers/) | Support for publishing MQTT messages to an MQTT topic is [coming soon](/pub-sub/learning/integrate-workers/) |
| **HTTP/3 (QUIC)** | Accept inbound requests over [HTTP/3](https://www.cloudflare.com/learning/performance/what-is-http3/) by enabling it on your [zone](/fundamentals/setup/accounts-and-zones/#zones) in the **Speed** > **Optimization** > **Protocol Optimization** area of the [Cloudflare dashboard](https://dash.cloudflare.com/) | |
| **SMTP** | Use [Email Workers](/email-routing/email-workers/) to process and forward email, without having to manage TCP connections to SMTP email servers | [Email Workers](/email-routing/email-workers/) |

---

# Security model

URL: https://developers.cloudflare.com/workers/reference/security-model/

import { WorkersArchitectureDiagram } from "~/components"

This article includes an overview of Cloudflare security architecture, and then addresses two frequently asked-about issues: V8 bugs and Spectre.

Since the very start of the Workers project, security has been a high priority — there was a concern early on that when hosting a large number of tenants on shared infrastructure, side channels of various kinds would pose a threat. The Cloudflare Workers runtime is carefully designed to defend against side channel attacks.

To this end, Workers is designed to make it impossible for code to measure its own execution time locally. For example, the value returned by `Date.now()` is locked in place while code is executing. No other timers are provided. Moreover, Cloudflare provides no access to concurrency (for example, multi-threading), as it could allow attackers to construct ad hoc timers. These design choices cannot be introduced retroactively into other platforms — such as web browsers — because they remove APIs that existing applications depend on. They were possible in Workers only because of runtime design choices made from the start.

While these early design decisions have proven effective, Cloudflare is continuing to add defense-in-depth, including techniques to disrupt attacks by rescheduling Workers to create additional layers of isolation between suspicious Workers and high-value Workers.
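The timer restriction described above can be illustrated with a sketch: inside the Workers runtime, a handler cannot observe its own synchronous execution time, because `Date.now()` does not advance while code runs. The `worker` object name is an assumption for illustration; the same code run in Node.js would measure real elapsed time.

```javascript
const worker = {
	fetch(request) {
		const start = Date.now();
		for (let i = 0; i < 1e6; i++) {} // synchronous busy-work
		const elapsed = Date.now() - start;
		// In the Workers runtime, Date.now() is locked in place while code
		// executes, so `elapsed` would be 0 here: code is denied a local
		// timer with which to mount a timing side-channel attack.
		return new Response(`elapsed: ${elapsed} ms`);
	},
};
```

The clock only advances when the isolate yields (for example, across I/O), so timing information reflects externally observable events rather than hidden microarchitectural state.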
The Workers approach is very different from the approach taken by most of the industry. It is resistant to the entire range of [Spectre-style attacks](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/), without requiring special attention paid to each one and without needing to block speculation in general. However, because the Workers approach is different, it requires careful study. Cloudflare is currently working with researchers at Graz University of Technology (TU Graz) to study what has been done. These researchers include some of the people who originally discovered Spectre. Cloudflare will publish the results of this research as they become available.

For more details, refer to [this talk](https://www.infoq.com/presentations/cloudflare-v8/) by Kenton Varda, architect of Cloudflare Workers. Spectre is covered near the end.

## Architectural overview

Beginning with a quick overview of the Workers runtime architecture:

There are two fundamental parts of designing a code sandbox: secure isolation and API design.

### Isolation

First, a secure execution environment needed to be created wherein code cannot access anything it is not supposed to. For this, the primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside isolates, which prevent that code from accessing memory outside the isolate — even within the same process. Importantly, this means Cloudflare can run many isolates within a single process. This is essential for an edge compute platform like Workers where Cloudflare must host many thousands of guest applications on every machine and rapidly switch between these guests thousands of times per second with minimal overhead.
If Cloudflare had to run a separate process for every guest, the number of tenants Cloudflare could support would be drastically reduced, and Cloudflare would have to limit edge compute to a small number of big Enterprise customers. With isolate technology, Cloudflare can make edge compute available to everyone.

Sometimes, though, Cloudflare does decide to schedule a Worker in its own private process. Cloudflare does this if the Worker uses certain features that need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their Worker, Cloudflare runs that Worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser’s trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, Cloudflare moves inspected Workers into a separate process with a process-level sandbox. Cloudflare also uses process isolation as an extra defense against Spectre.

Additionally, even for isolates that run in a shared process with other isolates, Cloudflare runs multiple instances of the whole runtime on each machine, called cordons. Workers are distributed among cordons by assigning each Worker a level of trust and separating lower-trust Workers from those trusted more highly. As one example of this in operation: a customer who signs up for the Free plan will not be scheduled in the same process as an Enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8.

At the whole-process level, Cloudflare applies another layer of sandboxing for defense in depth. The layer 2 sandbox uses Linux namespaces and `seccomp` to prohibit all access to the filesystem and network. Namespaces and `seccomp` are commonly used to implement containers.
However, Cloudflare's use of these technologies is much stricter than what is usually possible in container engines, because Cloudflare configures namespaces and `seccomp` after the process has started but before any isolates have been loaded. This means, for example, Cloudflare can (and does) use a totally empty filesystem (mount namespace) and use `seccomp` to block absolutely all filesystem-related system calls. Container engines cannot normally prohibit all filesystem access because doing so would make it impossible to use `exec()` to start the guest program from disk. In the Workers case, Cloudflare's guest programs are not native binaries and the Workers runtime itself has already finished loading before Cloudflare blocks filesystem access.

The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local UNIX domain sockets to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox.

One such process in particular, which is called the supervisor, is responsible for fetching Worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the Workers that it should be running.

For example, when the sandbox process receives a request for a Worker it has not seen before, that request includes the encryption key for that Worker’s code, including attached secrets. The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any Worker for which it has not received the appropriate key. It cannot enumerate known Workers. It also cannot request configuration it does not need; for example, it cannot request the TLS key used for HTTPS traffic to the Worker.
Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers.

### API design

There is a saying: If a tree falls in the forest, but no one is there to hear it, does it make a sound? A Cloudflare saying: If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run?

Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. For Workers to send requests to the world safely, APIs are needed.

In the context of sandboxing, API design takes on a new level of responsibility. Cloudflare APIs define exactly what a Worker can and cannot do. Cloudflare must be very careful to design each API so that it can only express allowed operations and no more. For example, Cloudflare wants to allow Workers to make and receive HTTP requests, while not allowing them to be able to access the local filesystem or internal network services.

Currently, Workers does not allow any access to the local filesystem. Therefore, Cloudflare does not expose a filesystem API at all. No API means no access.

But, imagine if Workers did want to support local filesystem access in the future. How can that be done? Workers should not see the whole filesystem. Imagine, though, if each Worker had its own private directory on the filesystem where it can store whatever it wants.

To do this, Workers would use a design based on [capability-based security](https://en.wikipedia.org/wiki/Capability-based_security). Capabilities are a big topic, but in this case, what it would mean is that Cloudflare would give the Worker an object of type `Directory`, representing a directory on the filesystem.
This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing up to the parent directory. Effectively, each Worker would see its private `Directory` as if it were the root of its own filesystem.

How would such an API be implemented? As described above, the sandbox process cannot access the real filesystem. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using [Cap’n Proto RPC](https://capnproto.org/rpc.html), a capability-based RPC protocol. (Cap’n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that Cloudflare can strictly limit the sandbox to accessing only the files that belong to the Workers it is running.

Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP — both incoming and outgoing. There is no API for other forms of network access, therefore it is prohibited; although, Cloudflare plans to support other protocols in the future.

As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a UNIX domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service or to the Worker’s zone’s own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the Worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to the Cloudflare network's HTTP caching layer and then out to the Internet.

Similarly, inbound HTTP requests do not go directly to the Workers runtime. They are first received by an inbound proxy service.
That service is responsible for TLS termination (the Workers runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a UNIX domain socket to the sandbox process.

## V8 bugs and the patch gap

Every non-trivial piece of software has bugs and sandboxing technologies are no exception. Virtual machines, containers, and isolates — which Workers use — also have bugs.

Workers rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has pros and cons. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider attack surface than virtual machines. More complexity means more opportunities for something to go wrong. However, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google’s investment does a lot to minimize the danger of V8 zero-days — bugs that are found by malicious actors and not known to Google.

But, what happens after a bug is found and reported? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time. It is important that any patch be rolled out to production as fast as possible, before malicious actors can develop an exploit. The time between publishing the fix and deploying it is known as the patch gap. Google previously [announced that Chrome’s patch gap had been reduced from 33 days to 15 days](https://www.zdnet.com/article/google-cuts-chrome-patch-gap-in-half-from-33-to-15-days/).

Fortunately, Cloudflare directly controls the machines on which the Workers runtime operates.
Nearly the entire build and release process has been automated, so the moment a V8 patch is published, Cloudflare systems automatically build a new release of the Workers runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production. As a result, the Workers patch gap is now under 24 hours. A patch published by V8’s team in Munich during their work day will usually be in production before the end of the US work day.

## Spectre: Introduction

The V8 team at Google has stated that [V8 itself cannot defend against Spectre](https://arxiv.org/abs/1902.05178). Workers does not need to depend on V8 for this. The Workers environment presents many alternative approaches to mitigating Spectre.

### What is it?

Spectre is a class of attacks in which a malicious program can trick the CPU into speculatively performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on the cache.

For more information about Spectre, refer to the [Learning Center page on the topic](https://www.cloudflare.com/learning/security/threats/meltdown-spectre/).

### Why does it matter for Workers?

Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model and it is likely that many vulnerabilities exist which have not yet been discovered.

These vulnerabilities are a problem for every cloud compute platform. Any time you have more than one tenant running code on the same machine, Spectre attacks are possible. However, the closer together the tenants are, the more difficult it can be to mitigate specific vulnerabilities.
Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various defenses (many of which can come with serious performance impact). In Cloudflare Workers, tenants are isolated from each other using V8 isolates — not processes nor VMs. This means that Workers cannot necessarily rely on OS or hypervisor patches to prevent Spectre. Workers needs its own strategy.

### Why not use process isolation?

Cloudflare Workers is designed to run your code in every single Cloudflare location. Workers is designed to be a platform accessible to everyone. It needs to handle a huge number of tenants, where many tenants get very little traffic. Combine these two points and planning becomes difficult.

A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant’s traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that is plenty. That machine can be hosted in a massive data center with millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users are not nearby.

With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in the quest to get as close to the end user as possible, Cloudflare sometimes chooses locations that only have space for a limited number of machines. The net result is that Cloudflare needs to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory — hardly enough space for a call stack, much less everything else that a process needs.

Moreover, Cloudflare needs context switching to be computationally efficient.
Many Workers resident in memory will only handle an event every now and then, and many Workers spend only a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. To handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, the CPU cost can easily be 10x what it is with a shared process. In order to keep Workers inexpensive, fast, and accessible to everyone, Cloudflare needed to find a way to host multiple tenants in a single process.

### There is no fix for Spectre

Spectre has no official fix, not even when using heavyweight virtual machines; everyone is still vulnerable. The industry keeps encountering new Spectre attacks: every couple of months, researchers uncover a new Spectre vulnerability, CPU vendors release new microcode, OS vendors release kernel patches, and everyone must keep updating. But merely deploying the latest patches is not enough, because more vulnerabilities exist that have not yet been publicized.

To defend against Spectre, Cloudflare needed to take a different approach. It is not enough to block individual known vulnerabilities. Instead, entire classes of vulnerabilities must be addressed at once.

### Building a defense

It is unlikely that any all-encompassing fix for Spectre will be found. However, the following thought experiment raises points to consider: fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system.
Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable. However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU.

Some have proposed that this can be solved by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out inconsistencies. Many security researchers see this as the end of the story. What good is slowing down an attack if the attack is still possible?

### Cascading slow-downs

However, measures that slow down an attack can be powerful. The key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting. Much of cryptography, after all, is technically vulnerable to brute force attacks — technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, this is a sufficient defense.

What can be done to slow down Spectre attacks to the point of meaninglessness?

## Freezing a Spectre attack

### Step 0: Do not allow native code

Workers does not allow our customers to upload native-code binaries to run on the Cloudflare network — only JavaScript and WebAssembly.
Many other languages, like Python, Rust, or even COBOL, can be compiled or transpiled to one of these two formats. V8 then converts both formats into true native code.

This, in itself, does not necessarily make Spectre attacks harder. However, this is presented as step 0 because it is fundamental to enabling the following steps. Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host’s control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the `CLFLUSH` instruction, an instruction [which is useful in side channel attacks](https://gruss.cc/files/flushflush.pdf) and almost nothing else.

Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the `RDTSC` instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use `RDTSC` any time they want to know the current time.

Supporting native code would limit choice in future mitigation techniques. There is greater freedom in using an abstract intermediate format.

### Step 1: Disallow timers and multi-threading

In Workers, you can get the current time using the JavaScript Date API by calling `Date.now()`. However, the time value returned is not the current time. `Date.now()` returns the time of the last I/O. It does not advance during code execution.
For example, if an attacker writes:

```js
let start = Date.now();
for (let i = 0; i < 1e6; i++) {
	doSpectreAttack();
}
let end = Date.now();
```

The values of `start` and `end` will always be exactly the same. The attacker cannot use `Date` to measure the execution time of their code, which they would need to do to carry out an attack.

:::note
This measure was implemented in mid-2017, before Spectre was announced, because Cloudflare was already concerned about side channel timing attacks. The Workers team has designed the system with side channels in mind.
:::

Similarly, multi-threading and shared memory are not permitted in Workers. Everything related to the processing of one event happens on the same thread. Otherwise, one would be able to race threads in order to guess and check the underlying timer. Multiple Workers are not allowed to operate on the same request concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread.

At this point, measuring code execution time locally is prevented. However, it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Such a measurement is likely to be very noisy, as it would have to traverse the Internet and incur general networking costs. Such noise can be overcome, in theory, by executing the attack many times and taking an average.

:::note
It has been suggested that if Workers reset its execution environment on every request, Workers would be in a much safer position against timing attacks. Unfortunately, it is not so simple.
The execution state could be stored in a client — not the Worker itself — allowing a Worker to resume its previous state on every new request.
:::

In adversarial testing and with help from leading Spectre experts, Cloudflare has not been able to develop a remote timing attack that works in production. However, the lack of a working attack does not mean that Workers should stop building defenses. Instead, the Workers team is currently testing some more advanced measures.

### Step 2: Dynamic process isolation

If an attack is possible at all, it would take a long time to run — hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, there is a large amount of new data that can be used to trigger further measures. Spectre attacks exhibit abnormal behavior that would not usually be seen in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters.

Now, the usual problem with using performance metrics to detect Spectre attacks is that there are sometimes false positives. Sometimes, a legitimate program behaves poorly. The runtime cannot shut down every application that has poor performance. Instead, the runtime chooses to reschedule any Worker with suspicious performance metrics into its own process. As described above, the runtime cannot do this with every Worker because the overhead would be too high. However, it is acceptable to isolate a few Worker processes as a defense mechanism. If the Worker is legitimate, it will keep operating, with a little more overhead. Fortunately, Cloudflare can relocate a Worker into its own process at basically any time.
In fact, elaborate performance-counter based triggering may not even be necessary here. If a Worker uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less because it switches context less often. So, the runtime might as well use process isolation for any Worker that is CPU-hungry. Once a Worker is isolated, Cloudflare can rely on the operating system’s Spectre defenses, as most desktop web browsers do.

Cloudflare has been working with the experts at Graz University of Technology to develop this approach. TU Graz’s team co-discovered Spectre itself and has been responsible for a huge number of the follow-on discoveries since then. Cloudflare has developed the ability to dynamically isolate Workers and has identified metrics which reliably detect attacks.

As mentioned previously, process isolation is not a complete defense. However, because Spectre attacks are slow to carry out, Cloudflare has time to spot suspicious behavior and reasonably identify malicious actors, and isolating a suspect Worker in its own process slows down the potential attack even further.

### Step 3: Periodic whole-memory shuffling

At this point, all known attacks have been prevented. This leaves Workers susceptible to unknown attacks in the future, as with all other CPU-based systems. However, all new attacks will generally be very slow, taking days or longer, leaving Cloudflare with time to prepare a defense. For example, it is within reason to restart the entire Workers runtime on a daily basis. This will reset the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. Cloudflare can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited. In general, because Workers are fundamentally preemptible (unlike containers or VMs), Cloudflare has a lot of freedom to frustrate attacks.
Cloudflare sees this as an ongoing investment — not something that will ever be done.

---

# Cache

URL: https://developers.cloudflare.com/workers/runtime-apis/cache/

## Background

The [Cache API](https://developer.mozilla.org/en-US/docs/Web/API/Cache) allows fine-grained control of reading and writing from the [Cloudflare global network](https://www.cloudflare.com/network/) cache.

The Cache API is available globally, but the contents of the cache do not replicate outside of the originating data center. A `GET /users` response can be cached in the originating data center, but will not exist in another data center unless it has been explicitly created.

:::caution[Tiered caching]
The `cache.put` method is not compatible with tiered caching. Refer to [Cache API](/workers/reference/how-the-cache-works/#cache-api) for more information. To perform tiered caching, use the [fetch API](/workers/reference/how-the-cache-works/#interact-with-the-cloudflare-cache).
:::

Workers deployed to custom domains have access to functional `cache` operations. So do [Pages functions](/pages/functions/), whether attached to custom domains or `*.pages.dev` domains. However, any Cache API operations in the Cloudflare Workers dashboard editor and [Playground](/workers/playground/) previews will have no impact. For Workers fronted by [Cloudflare Access](https://www.cloudflare.com/teams/access/), the Cache API is not currently available.

:::note
This individualized zone cache object differs from Cloudflare’s Global CDN. For details, refer to [How the cache works](/workers/reference/how-the-cache-works/).
:::

***

## Accessing Cache

The `caches.default` API is strongly influenced by the web browsers’ Cache API, but there are some important differences. For instance, the Cloudflare Workers runtime exposes a single global cache object.
```js
let cache = caches.default;
await cache.match(request);
```

You may create and manage additional Cache instances via the [`caches.open`](https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage/open) method.

```js
let myCache = await caches.open('custom:cache');
await myCache.match(request);
```

:::note
When using the cache API, avoid overriding the hostname in cache requests, as this can lead to unnecessary DNS lookups and cache inefficiencies. Always use the hostname that matches the domain associated with your Worker.

```js
// recommended approach: use your Worker hostname to ensure efficient caching
request.url = "https://your-Worker-hostname.com/";
let myCache = await caches.open('custom:cache');
let response = await myCache.match(request);
```
:::

***

## Headers

Our implementation of the Cache API respects the following HTTP headers on the response passed to `put()`:

* `Cache-Control`
  * Controls caching directives. This is consistent with [Cloudflare Cache-Control Directives](/cache/concepts/cache-control#cache-control-directives). Refer to [Edge TTL](/cache/how-to/configure-cache-status-code#edge-ttl) for a list of HTTP response codes and their TTL when `Cache-Control` directives are not present.
* `Cache-Tag`
  * Allows resource purging by tag(s) later.
* `ETag`
  * Allows `cache.match()` to evaluate conditional requests with `If-None-Match`.
* `Expires` string
  * A string that specifies when the resource becomes invalid.
* `Last-Modified`
  * Allows `cache.match()` to evaluate conditional requests with `If-Modified-Since`.

This differs from the web browser Cache API, as browsers do not honor any headers on the request or response.

:::note
Responses with `Set-Cookie` headers are never cached, because this sometimes indicates that the response contains unique data. To store a response with a `Set-Cookie` header, either delete that header or set `Cache-Control: private=Set-Cookie` on the response before calling `cache.put()`.
Use the `Cache-Control: private=Set-Cookie` directive to store the response without the `Set-Cookie` header.
:::

***

## Methods

### Put

```js
cache.put(request, response);
```

* put(request, response) : Promise
  * Attempts to add a response to the cache, using the given request as the key. Returns a promise that resolves to `undefined` regardless of whether the cache successfully stored the response.

:::note
The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.
:::

#### Parameters

* `request` string | Request
  * Either a string or a [`Request`](/workers/runtime-apis/request/) object to serve as the key. If a string is passed, it is interpreted as the URL for a new Request object.
* `response` Response
  * A [`Response`](/workers/runtime-apis/response/) object to store under the given key.

#### Invalid parameters

`cache.put` will throw an error if:

* The `request` passed is a method other than `GET`.
* The `response` passed has a `status` of [`206 Partial Content`](https://www.webfx.com/web-development/glossary/http-status-codes/what-is-a-206-status-code/).
* The `response` passed contains the header `Vary: *`. The value of the `Vary` header is an asterisk (`*`). Refer to the [Cache API specification](https://w3c.github.io/ServiceWorker/#cache-put) for more information.

#### Errors

`cache.put` returns a `413` error if `Cache-Control` instructs not to cache or if the response is too large.

### `Match`

```js
cache.match(request, options);
```

* match(request, options) : Promise
  * Returns a promise wrapping the response object keyed to that request.

:::note
The `stale-while-revalidate` and `stale-if-error` directives are not supported when using the `cache.put` or `cache.match` methods.
:::

#### Parameters

* `request` string | Request
  * The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key.
Strings are interpreted as the URL for a new `Request` object.
* `options`
  * Can contain one possible property: `ignoreMethod` (Boolean). When `true`, the request is considered to be a `GET` request regardless of its actual value.

Unlike the browser Cache API, Cloudflare Workers do not support the `ignoreSearch` or `ignoreVary` options on `match()`. You can accomplish this behavior by removing query strings or HTTP headers at `put()` time.

Our implementation of the Cache API respects the following HTTP headers on the request passed to `match()`:

* `Range`
  * Results in a `206` response if a matching response with a `Content-Length` header is found. Your Cloudflare cache always respects range requests, even if an `Accept-Ranges` header is on the response.
* `If-Modified-Since`
  * Results in a `304` response if a matching response is found with a `Last-Modified` header with a value after the time specified in `If-Modified-Since`.
* `If-None-Match`
  * Results in a `304` response if a matching response is found with an `ETag` header with a value that matches a value in `If-None-Match`.
* `cache.match()`
  * Never sends a subrequest to the origin. If no matching response is found in cache, the promise that `cache.match()` returns is fulfilled with `undefined`.

#### Errors

`cache.match` generates a `504` error response when the requested content is missing or expired. The Cache API does not expose this `504` directly to the Worker script, instead returning `undefined`. Nevertheless, the underlying `504` is still visible in Cloudflare Logs.

If you use Cloudflare Logs, you may see these `504` responses with the `RequestSource` of `edgeWorkerCacheAPI`. Again, these are expected if the cached asset was missing or expired. Note that `edgeWorkerCacheAPI` requests are already filtered out in other views, such as Cache Analytics.
To filter out these requests or to filter requests by end users of your website only, refer to [Filter end users](/analytics/graphql-api/features/filtering/#filter-end-users).

### `Delete`

```js
cache.delete(request, options);
```

* delete(request, options) : Promise

Deletes the `Response` object from the cache and returns a `Promise` for a Boolean response:

* `true`: The response was cached but is now deleted.
* `false`: The response was not in the cache at the time of deletion.

:::caution[Global purges]
The `cache.delete` method only purges content of the cache in the data center in which the Worker was invoked. For global purges, refer to [Purging assets stored with the Cache API](/workers/reference/how-the-cache-works/#purge-assets-stored-with-the-cache-api).
:::

#### Parameters

* `request` string | Request
  * The string or [`Request`](/workers/runtime-apis/request/) object used as the lookup key. Strings are interpreted as the URL for a new `Request` object.
* `options` object
  * Can contain one possible property: `ignoreMethod` (Boolean). Consider the request method to be `GET`, regardless of its actual value.

***

## Related resources

* [How the cache works](/workers/reference/how-the-cache-works/)
* [Example: Cache using `fetch()`](/workers/examples/cache-using-fetch/)
* [Example: using the Cache API](/workers/examples/cache-api/)
* [Example: caching POST requests](/workers/examples/cache-post-request/)

---

# Console

URL: https://developers.cloudflare.com/workers/runtime-apis/console/

The `console` object provides a set of methods to help you emit logs, warnings, and debug code.

All standard [methods of the `console` API](https://developer.mozilla.org/en-US/docs/Web/API/console) are present on the `console` object in Workers. However, some methods are no ops — they can be called, and do not emit an error, but do not do anything. This ensures compatibility with libraries which may use these APIs.
The table below enumerates each method, and the extent to which it is supported in Workers.

All methods noted as "✅ supported" have the following behavior:

* They will be written to the console in local dev (`npx wrangler@latest dev`)
* They will appear in live logs, when tailing logs in the dashboard or running [`wrangler tail`](https://developers.cloudflare.com/workers/observability/log-from-workers/#use-wrangler-tail)
* They will create entries in the `logs` field of [Tail Worker](https://developers.cloudflare.com/workers/observability/tail-workers/) events and [Workers Trace Events](https://developers.cloudflare.com/logs/reference/log-fields/account/workers_trace_events/), which can be pushed to a destination of your choice via [Logpush](https://developers.cloudflare.com/workers/observability/logpush/).

All methods noted as "🟡 partial support" have the following behavior:

* In both production and local development the method can be safely called, but will do nothing (no op)
* In the [Workers Playground](https://workers.cloudflare.com/playground), Quick Editor in the Workers dashboard, and remote preview mode (`wrangler dev --remote`) calling the method will behave as expected, print to the console, etc.

Refer to [Log from Workers](https://developers.cloudflare.com/workers/observability/log-from-workers/) for more on debugging and adding logs to Workers.
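As a quick illustration of these support levels, here is a sketch of a handler (the route and messages are made up for the example): the supported methods emit log entries in local dev and live logs, while the no-op methods can be called safely but produce no output.

```js
// Minimal sketch: which console calls produce output in a deployed Worker.
// (In a real Worker, `worker` would be the module's default export.)
const worker = {
	async fetch(request) {
		console.log('request received:', new URL(request.url).pathname); // ✅ supported: emitted
		console.error('example error line'); // ✅ supported: emitted

		console.time('handler'); // ⚪ no op in Workers: safe to call, emits nothing
		console.timeEnd('handler'); // ⚪ no op: returns silently, never throws

		return new Response('ok');
	},
};
```

Because the no-op methods still exist and do not throw, libraries that instrument themselves with `console.time()` or `console.assert()` keep working unmodified.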
| Method | Behavior |
| ------ | -------- |
| [`console.debug()`](https://developer.mozilla.org/en-US/docs/Web/API/console/debug_static) | ✅ supported |
| [`console.error()`](https://developer.mozilla.org/en-US/docs/Web/API/console/error_static) | ✅ supported |
| [`console.info()`](https://developer.mozilla.org/en-US/docs/Web/API/console/info_static) | ✅ supported |
| [`console.log()`](https://developer.mozilla.org/en-US/docs/Web/API/console/log_static) | ✅ supported |
| [`console.warn()`](https://developer.mozilla.org/en-US/docs/Web/API/console/warn_static) | ✅ supported |
| [`console.clear()`](https://developer.mozilla.org/en-US/docs/Web/API/console/clear_static) | 🟡 partial support |
| [`console.count()`](https://developer.mozilla.org/en-US/docs/Web/API/console/count_static) | 🟡 partial support |
| [`console.group()`](https://developer.mozilla.org/en-US/docs/Web/API/console/group_static) | 🟡 partial support |
| [`console.table()`](https://developer.mozilla.org/en-US/docs/Web/API/console/table_static) | 🟡 partial support |
| [`console.trace()`](https://developer.mozilla.org/en-US/docs/Web/API/console/trace_static) | 🟡 partial support |
| [`console.assert()`](https://developer.mozilla.org/en-US/docs/Web/API/console/assert_static) | ⚪ no op |
| [`console.countReset()`](https://developer.mozilla.org/en-US/docs/Web/API/console/countreset_static) | ⚪ no op |
| [`console.dir()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dir_static) | ⚪ no op |
| [`console.dirxml()`](https://developer.mozilla.org/en-US/docs/Web/API/console/dirxml_static) | ⚪ no op |
| [`console.groupCollapsed()`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupcollapsed_static) | ⚪ no op |
| [`console.groupEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/groupend_static) | ⚪ no op |
| [`console.profile()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profile_static) | ⚪ no op |
| [`console.profileEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/profileend_static) | ⚪ no op |
| [`console.time()`](https://developer.mozilla.org/en-US/docs/Web/API/console/time_static) | ⚪ no op |
| [`console.timeEnd()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timeend_static) | ⚪ no op |
| [`console.timeLog()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timelog_static) | ⚪ no op |
| [`console.timeStamp()`](https://developer.mozilla.org/en-US/docs/Web/API/console/timestamp_static) | ⚪ no op |
| [`console.createTask()`](https://developer.chrome.com/blog/devtools-modern-web-debugging/#linked-stack-traces) | 🔴 Will throw an exception in production, but works in local dev, Quick Editor, and remote preview |

---

# Context (ctx)

URL: https://developers.cloudflare.com/workers/runtime-apis/context/

The Context API provides methods to manage the lifecycle of your Worker or Durable Object.
Context is exposed via the following places:

* As the third parameter in all [handlers](/workers/runtime-apis/handlers/), including the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/). (`fetch(request, env, ctx)`)
* As a class property of the [`WorkerEntrypoint` class](/workers/runtime-apis/bindings/service-bindings/rpc)

## `waitUntil`

`ctx.waitUntil()` extends the lifetime of your Worker, allowing you to perform work that does not block returning a response and that may continue after a response is returned. It accepts a `Promise`, which the Workers runtime will continue executing, even after a response has been returned by the Worker's [handler](/workers/runtime-apis/handlers/).

`waitUntil` is commonly used to:

* Fire off events to external analytics providers. (Note that when you use [Workers Analytics Engine](/analytics/analytics-engine/), you do not need to use `waitUntil`.)
* Put items into cache using the [Cache API](/workers/runtime-apis/cache/)

:::note[Alternatives to waitUntil]
If you are using `waitUntil()` to emit logs or exceptions, we recommend using [Tail Workers](/workers/observability/logs/tail-workers/) instead. Even if your Worker throws an uncaught exception, the Tail Worker will execute, ensuring that you can emit logs or exceptions regardless of the Worker's invocation status.

[Cloudflare Queues](/queues/) is purpose-built for performing work out-of-band, without blocking returning a response back to the client Worker.
:::

You can call `waitUntil()` multiple times. Similar to `Promise.allSettled`, even if a promise passed to one `waitUntil` call is rejected, promises passed to other `waitUntil()` calls will still continue to execute.
For example:

```js
export default {
	async fetch(request, env, ctx) {
		// Forward / proxy original request
		let res = await fetch(request);

		// Add custom header(s)
		res = new Response(res.body, res);
		res.headers.set('x-foo', 'bar');

		// Cache the response
		// NOTE: Does NOT block / wait
		ctx.waitUntil(caches.default.put(request, res.clone()));

		// Done
		return res;
	},
};
```

## `passThroughOnException`

:::caution[Reuse of body]
The Workers Runtime uses streaming for request and response bodies. It does not buffer the body. Hence, if an exception occurs after the body has been consumed, `passThroughOnException()` cannot send the body again. If this causes issues, we recommend cloning the request body and handling exceptions in code. This will protect against uncaught code exceptions. However, some exception types, such as exceeding CPU or memory limits, will not be mitigated.
:::

The `passThroughOnException` method allows a Worker to [fail open](https://community.microfocus.com/cyberres/b/sws-22/posts/security-fundamentals-part-1-fail-open-vs-fail-closed), and pass a request through to an origin server when a Worker throws an unhandled exception. This can be useful when using Workers as a layer in front of an existing service, allowing the service behind the Worker to handle any unexpected error cases that arise in your Worker.

```js
export default {
	async fetch(request, env, ctx) {
		// Proxy to origin on unhandled/uncaught exceptions
		ctx.passThroughOnException();
		throw new Error('Oops');
	},
};
```

---

# Encoding

URL: https://developers.cloudflare.com/workers/runtime-apis/encoding/

## TextEncoder

### Background

The `TextEncoder` takes a stream of code points as input and emits a stream of bytes. Encoding types passed to the constructor are ignored and a UTF-8 `TextEncoder` is created.
[`TextEncoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoder/TextEncoder) returns a newly constructed `TextEncoder` that generates a byte stream with UTF-8 encoding. `TextEncoder` takes no parameters and throws no exceptions.

### Constructor

```js
let encoder = new TextEncoder();
```

### Properties

* `encoder.encoding` DOMString read-only
  * The name of the encoder as a string describing the method the `TextEncoder` uses (always `utf-8`).

### Methods

* encode(input USVString) : Uint8Array
  * Encodes a string input.

***

## TextDecoder

### Background

The `TextDecoder` interface represents a UTF-8 decoder. Decoders take a stream of bytes as input and emit a stream of code points.

[`TextDecoder()`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/TextDecoder) returns a newly constructed `TextDecoder` that generates a code-point stream.

### Constructor

```js
let decoder = new TextDecoder();
```

### Properties

* `decoder.encoding` DOMString read-only
  * The name of the decoder that describes the method the `TextDecoder` uses.
* `decoder.fatal` boolean read-only
  * Indicates if the error mode is fatal.
* `decoder.ignoreBOM` boolean read-only
  * Indicates if the byte-order marker is ignored.

### Methods

* `decode()` : DOMString
  * Decodes using the method specified in the `TextDecoder` object. Learn more at [MDN’s `TextDecoder` documentation](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoder/decode).

---

# EventSource

URL: https://developers.cloudflare.com/workers/runtime-apis/eventsource/

## Background

The [`EventSource`](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) interface is a server-sent event API that allows a server to push events to a client. The `EventSource` object is used to receive server-sent events.
It connects to a server over HTTP and receives events in a text-based format.

### Constructor

```js
let eventSource = new EventSource(url, options);
```

* `url` USVString - The URL to which to connect.
* `options` EventSourceInit - An optional dictionary containing any optional settings.

By default, the `EventSource` will use the global `fetch()` function under the covers to make requests. If you need to use a different fetch implementation as provided by a Cloudflare Workers binding, you can pass the `fetcher` option:

```js
export default {
	async fetch(req, env) {
		let eventSource = new EventSource(url, { fetcher: env.MYFETCHER });
		// ...
	}
};
```

Note that the `fetcher` option is a Cloudflare Workers specific extension.

### Properties

* `eventSource.url` USVString read-only
  * The URL of the event source.
* `eventSource.readyState` USVString read-only
  * The state of the connection.
* `eventSource.withCredentials` Boolean read-only
  * A Boolean indicating whether the `EventSource` object was instantiated with cross-origin (CORS) credentials set (`true`), or not (`false`).

### Methods

* `eventSource.close()`
  * Closes the connection.
* `eventSource.onopen`
  * An event handler called when a connection is opened.
* `eventSource.onmessage`
  * An event handler called when a message is received.
* `eventSource.onerror`
  * An event handler called when an error occurs.

### Events

* `message`
  * Fired when a message is received.
* `open`
  * Fired when the connection is opened.
* `error`
  * Fired when an error occurs.

### Class Methods

* `EventSource.from(readableStream ReadableStream)` : EventSource
  * This is a Cloudflare Workers specific extension that creates a new `EventSource` object from an existing `ReadableStream`. Such an instance does not initiate a new connection but instead attaches to the provided stream.
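As an illustrative sketch of how these handlers fit together (the endpoint URL and the `collectMessages` helper below are hypothetical, not part of the API), a Worker could gather a fixed number of messages and then close the connection:

```js
// Hypothetical helper: resolve once `limit` messages have been received,
// then close the event source so no further events are delivered.
function collectMessages(eventSource, limit) {
	return new Promise((resolve, reject) => {
		const received = [];
		eventSource.onmessage = (event) => {
			received.push(event.data);
			if (received.length >= limit) {
				eventSource.close(); // stop listening once we have enough
				resolve(received);
			}
		};
		eventSource.onerror = () => {
			eventSource.close();
			reject(new Error("EventSource connection failed"));
		};
	});
}

export default {
	async fetch(request, env) {
		// Hypothetical upstream SSE endpoint.
		const source = new EventSource("https://example.com/sse");
		const messages = await collectMessages(source, 3);
		return Response.json(messages);
	},
};
```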
---

# Headers

URL: https://developers.cloudflare.com/workers/runtime-apis/headers/

## Background

All HTTP request and response headers are available through the [Headers API](https://developer.mozilla.org/en-US/docs/Web/API/Headers).

When a header name has multiple values, those values are concatenated into a single, comma-delimited string value. This means that `Headers.get` will always return either a string or a `null` value. This applies to all header names except for `Set-Cookie`, which requires `Headers.getAll`. This is documented below in [Differences](#differences).

```js
let headers = new Headers();

headers.get('x-foo'); //=> null

headers.set('x-foo', '123');
headers.get('x-foo'); //=> "123"

headers.set('x-foo', 'hello');
headers.get('x-foo'); //=> "hello"

headers.append('x-foo', 'world');
headers.get('x-foo'); //=> "hello, world"
```

## Differences

* Although the `Headers.getAll` method has been made obsolete, Cloudflare still offers it, but only for use with the `Set-Cookie` header. This is because cookies often contain date strings, which include commas, making it more difficult to parse multiple values in a single `Set-Cookie` header. Any attempt to use `Headers.getAll` with other header names will throw an error. A brief history of `Headers.getAll` is available in this [GitHub issue](https://github.com/whatwg/fetch/issues/973).

* Because [RFC 6265](https://www.rfc-editor.org/rfc/rfc6265) prohibits folding multiple `Set-Cookie` headers into a single header, the `Headers.append` method will set multiple `Set-Cookie` response headers instead of appending the value onto the existing header.
```js
const headers = new Headers();

headers.append("Set-Cookie", "cookie1=value_for_cookie_1; Path=/; HttpOnly;");
headers.append("Set-Cookie", "cookie2=value_for_cookie_2; Path=/; HttpOnly;");

console.log(headers.getAll("Set-Cookie"));
// Array(2) [ cookie1=value_for_cookie_1; Path=/; HttpOnly;, cookie2=value_for_cookie_2; Path=/; HttpOnly; ]
```

* In Cloudflare Workers, the `Headers.get` method returns a [`USVString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) instead of the [`ByteString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) specified by the spec. For most scenarios, this should have no noticeable effect. To compare the differences between these two string classes, refer to this [Playground example](https://workers.cloudflare.com/playground#LYVwNgLglgDghgJwgegGYHsHALQBM4RwDcABAEbogB2+CAngLzbMutvvsCMALAJx-cAzAHZeANkG8AHAAZOU7t2EBWAEy9eqsXNWdOALg5HjbHv34jxk2fMUr1m7Z12cAsACgAwuioQApr7YACJQAM4w6KFQ0D76JBhYeATEJFRwwH4MAERQNH4AHgB0AFahWaSoUGAB6Zk5eUWlWR7evgEQ2AAqdDB+cXAwMGBQAMYEUD7IxXAAbnChIwiwEADUwOi44H4eHgURSCS4fqhw4BAkAN7uAJDzdFQj8X4QIwAWABQIfgCOIH6hEAAlJcbtdqucxucGCQsoBeDcAXHtZUHgkggCCoKSeAgkaFUPwAdxInQKEAAog8Nn4EO9AYUAiNKe9IYDkc8SPTKbgsVCSABlCBLKgAc0KqAQ6GAnleiG8R3ehQVaIx3JZoIZVFC6GqhTA6CF7yynVeYRIJrgJAAqryAGr8wVCkj46KvEjmyH6LIAGhIzLVPk12t1+sNxtCprD5oAQnR-Hbcg6nRAXW7sT5LZ0AGLYKQe70co5cgiq67XZDIEgACT8cCOCAjXxIoRAg0iflwJAg6EdmAA1iQfGA6I7nSRo7GBfHQt6yGj+yAEKCy6bgEM-BlfOM0yBQv9LTa48LQoUiaHUiSSMM8cOwGASDBBec4Ivy-jEFR466KLOk2FCqzzq81a1mGuIEpWQFUqE7wXDC+ZttgkJZHEcGFucAC+xbXF8EDzlQZ6EgASv8EQan4BpSn4Ix9pQ5xJn4JAAAatAGfgMa6NAdoBJBEeE-r0YBNaQR2XY7vRdFzhAMCzgyK6IGE-qFF6lwkAJwEkBhNxoY64D+iQxAQLG6BUsAJCyFkhQcKu66biQQA).
## Cloudflare headers

Cloudflare sets a number of its own custom headers on incoming requests and outgoing responses. While some may be used for Cloudflare's own tracking and bookkeeping, many of these can be useful to your own applications – or Workers – too.

For a list of documented Cloudflare request headers, refer to [Cloudflare HTTP headers](/fundamentals/reference/http-headers/).

## Related resources

* [Logging headers to console](/workers/examples/logging-headers/) - Review how to log headers in the console.
* [Cloudflare HTTP headers](/fundamentals/reference/http-headers/) - Contains a list of specific headers that Cloudflare adds.

---

# Fetch

URL: https://developers.cloudflare.com/workers/runtime-apis/fetch/

import { TabItem, Tabs } from "~/components";

The [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) provides an interface for asynchronously fetching resources via HTTP requests inside of a Worker.

:::note

Asynchronous tasks such as `fetch` must be executed within a [handler](/workers/runtime-apis/handlers/). If you try to call `fetch()` within [global scope](https://developer.mozilla.org/en-US/docs/Glossary/Global_scope), your Worker will throw an error. Learn more about [the Request context](/workers/runtime-apis/request/#the-request-context).

:::

:::caution[Worker to Worker]

Worker-to-Worker `fetch` requests are possible with [Service bindings](/workers/runtime-apis/bindings/service-bindings/) or by enabling the [`global_fetch_strictly_public` compatibility flag](/workers/configuration/compatibility-flags/#global-fetch-strictly-public).
:::

## Syntax

```js null {3-7}
export default {
	async scheduled(controller, env, ctx) {
		return await fetch("https://example.com", {
			headers: {
				"X-Source": "Cloudflare-Workers",
			},
		});
	},
};
```

```js null {8}
addEventListener("fetch", (event) => {
	// NOTE: can’t use fetch here, as we’re not in an async scope yet
	event.respondWith(eventHandler(event));
});

async function eventHandler(event) {
	// fetch can be awaited here since `event.respondWith()` waits for the Promise it receives to settle
	const resp = await fetch(event.request);
	return resp;
}
```

```python
from workers import fetch, handler

@handler
async def on_scheduled(controller, env, ctx):
    return await fetch("https://example.com", headers={"X-Source": "Cloudflare-Workers"})
```

- `fetch(resource, options optional)` : Promise\<Response\>

  - Fetch returns a promise to a Response.

### Parameters

- [`resource`](https://developer.mozilla.org/en-US/docs/Web/API/fetch#resource) Request | string | URL

- `options` options optional

  - An object that defines the content and behavior of the request.
  - `cache` `undefined | 'no-store'` optional
    - Standard HTTP `cache` header. Only `cache: 'no-store'` is supported. Any other `cache` value will result in a `TypeError` with the message `Unsupported cache mode: `.
      - For all requests this forwards the `Pragma: no-cache` and `Cache-Control: no-cache` headers to the origin.
      - For requests to origins not hosted by Cloudflare, `no-store` bypasses the use of Cloudflare's caches.

---

## How the `Accept-Encoding` header is handled

When making a subrequest with the `fetch()` API, you can specify which forms of compression you prefer the server to respond with (if the server supports them) by including the [`Accept-Encoding`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Encoding) header. Workers supports both the gzip and brotli compression algorithms.
Usually it is not necessary to specify `Accept-Encoding` or `Content-Encoding` headers in the Workers Runtime production environment – brotli or gzip compression is automatically requested when fetching from an origin and applied to the response when returning data to the client, depending on the capabilities of the client and origin server.

To support requesting brotli from the origin, you must enable the [`brotli_content_encoding`](/workers/configuration/compatibility-flags/#brotli-content-encoding-support) compatibility flag in your Worker. Soon, this compatibility flag will be enabled by default for all Workers past an upcoming compatibility date.

### Passthrough behavior

One scenario where the `Accept-Encoding` header is useful is for passing compressed data through from a server to the client: it allows the Worker to receive the compressed data stream directly from the server, without the data being decompressed beforehand. As long as you do not read the body of the compressed response prior to returning it to the client and keep the `Content-Encoding` header intact, it will "pass through" without being decompressed and then recompressed again. This can be helpful when using Workers in front of origin servers or when fetching compressed media assets, to ensure that the same compression used by the origin server is used in the response that your Worker returns.

In addition to a change in the content encoding, recompression is also needed when a response uses an encoding not supported by the client. As an example, when a Worker requests either brotli or gzip as the encoding but the client only supports gzip, recompression will still be needed if the server returns brotli-encoded data to the Worker (and will be applied automatically). Note that this behavior may also vary based on the [compression rules](/rules/compression-rules/), which can be used to configure what compression should be applied for different types of data on the server side.
```typescript
export default {
	async fetch(request) {
		// Accept brotli or gzip compression
		const headers = new Headers({
			"Accept-Encoding": "br, gzip",
		});

		let response = await fetch("https://developers.cloudflare.com", {
			method: "GET",
			headers,
		});

		// As long as the original response body is returned and the Content-Encoding header is
		// preserved, the same encoded data will be returned without needing to be compressed again.
		return new Response(response.body, {
			status: response.status,
			statusText: response.statusText,
			headers: response.headers,
		});
	},
};
```

## Related resources

- [Example: use `fetch` to respond with another site](/workers/examples/respond-with-another-site/)
- [Example: Fetch HTML](/workers/examples/fetch-html/)
- [Example: Fetch JSON](/workers/examples/fetch-json/)
- [Example: cache using Fetch](/workers/examples/cache-using-fetch/)
- Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# HTMLRewriter

URL: https://developers.cloudflare.com/workers/runtime-apis/html-rewriter/

import { Render } from "~/components";

## Background

The `HTMLRewriter` class allows developers to build comprehensive and expressive HTML parsers inside of a Cloudflare Workers application. It can be thought of as a jQuery-like experience directly inside of your Workers application. Leaning on a powerful JavaScript API to parse and transform HTML, `HTMLRewriter` allows developers to build deeply functional applications.

The `HTMLRewriter` class should be instantiated once in your Workers script, with a number of handlers attached using the `on` and `onDocument` functions.
---

## Constructor

```js
new HTMLRewriter()
	.on("*", new ElementHandler())
	.onDocument(new DocumentHandler());
```

---

## Global types

Throughout the `HTMLRewriter` API, there are a few consistent types that many properties and methods use:

- `Content` string | Response | ReadableStream
  - Content inserted in the output stream should be a string, [`Response`](/workers/runtime-apis/response/), or [`ReadableStream`](/workers/runtime-apis/streams/readablestream/).
- `ContentOptions` Object
  - `{ html: Boolean }` Controls the way the HTMLRewriter treats inserted content. If the `html` boolean is set to true, content is treated as raw HTML. If the `html` boolean is set to false or not provided, content will be treated as text and proper HTML escaping will be applied to it.

---

## Handlers

There are two handler types that can be used with `HTMLRewriter`: element handlers and document handlers.

### Element Handlers

An element handler responds to any incoming element when attached using the `.on` function of an `HTMLRewriter` instance. The element handler can respond to `element`, `comments`, and `text`. The following example processes `div` elements with an `ElementHandler` class.

```js
class ElementHandler {
	element(element) {
		// An incoming element, such as `div`
		console.log(`Incoming element: ${element.tagName}`);
	}

	comments(comment) {
		// An incoming comment
	}

	text(text) {
		// An incoming piece of text
	}
}

async function handleRequest(req) {
	const res = await fetch(req);
	return new HTMLRewriter().on("div", new ElementHandler()).transform(res);
}
```

### Document Handlers

A document handler represents the incoming HTML document. A number of functions can be defined on a document handler to query and manipulate a document’s `doctype`, `comments`, `text`, and `end`. Unlike an element handler, a document handler’s `doctype`, `comments`, `text`, and `end` functions are not scoped by a particular selector.
A document handler's functions are called for all the content on the page, including the content outside of the top-level HTML tag:

```js
class DocumentHandler {
	doctype(doctype) {
		// An incoming doctype, such as <!DOCTYPE html>
	}

	comments(comment) {
		// An incoming comment
	}

	text(text) {
		// An incoming piece of text
	}

	end(end) {
		// The end of the document
	}
}
```

#### Async Handlers

All functions defined on both element and document handlers can return either `void` or a `Promise`. Making your handler function `async` allows you to access external resources such as an API via fetch, Workers KV, Durable Objects, or the cache.

```js
class UserElementHandler {
	async element(element) {
		let response = await fetch(new Request("/user"));

		// fill in user info using response
	}
}

async function handleRequest(req) {
	const res = await fetch(req);

	// run the user element handler via HTMLRewriter on a div with ID `user_info`
	return new HTMLRewriter()
		.on("div#user_info", new UserElementHandler())
		.transform(res);
}
```

### Element

The `element` argument, used only in element handlers, is a representation of a DOM element. A number of methods exist on an element to query and manipulate it:

#### Properties

- `tagName` string
  - The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element’s tag.
- `attributes` Iterator read-only
  - A `[name, value]` pair of the tag’s attributes.
- `removed` boolean
  - Indicates whether the element has been removed or replaced by one of the previous handlers.
- `namespaceURI` String
  - Represents the [namespace URI](https://infra.spec.whatwg.org/#namespaces) of an element.

#### Methods

- `getAttribute(name string)` : string | null
  - Returns the value for a given attribute name on the element, or `null` if it is not found.
- `hasAttribute(name string)` : boolean
  - Returns a boolean indicating whether an attribute exists on the element.
- `setAttribute(name string, value string)` : Element
  - Sets an attribute to a provided value, creating the attribute if it does not exist.
- `removeAttribute(name string)` : Element
  - Removes the attribute.
- `before(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content before the element.
- `after(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content right after the element.
- `prepend(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content right after the start tag of the element.
- `append(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content right before the end tag of the element.
- `replace(content Content, contentOptions ContentOptions optional)` : Element
  - Removes the element and inserts content in place of it.
- `setInnerContent(content Content, contentOptions ContentOptions optional)` : Element
  - Replaces content of the element.
- `remove()` : Element
  - Removes the element with all its content.
- `removeAndKeepContent()` : Element
  - Removes the start tag and end tag of the element but keeps its inner content intact.
- `onEndTag(handler Function)` : void
  - Registers a handler that is invoked when the end tag of the element is reached.

### EndTag

The `endTag` argument, used only in handlers registered with `element.onEndTag`, is a limited representation of a DOM element.

#### Properties

- `name` string
  - The name of the tag, such as `"h1"` or `"div"`. This property can be assigned different values, to modify an element's tag.

#### Methods

- `before(content Content, contentOptions ContentOptions optional)` : EndTag
  - Inserts content right before the end tag.
- `after(content Content, contentOptions ContentOptions optional)` : EndTag
  - Inserts content right after the end tag.
- `remove()` : EndTag
  - Removes the element with all its content.

### Text chunks

Since Cloudflare performs zero-copy streaming parsing, text chunks are not the same thing as text nodes in the lexical tree.
A lexical tree text node can be represented by multiple chunks, as they arrive over the wire from the origin. Consider the following markup: `<div>Hey. How are you?</div>
`. It is possible that the Workers script will not receive the entire text node from the origin at once; instead, the `text` element handler will be invoked for each received part of the text node. For example, the handler might be invoked with `"Hey. How "`, then `"are you?"`. When the last chunk arrives, the text’s `lastInTextNode` property will be set to `true`. Developers should make sure to concatenate these chunks together.

#### Properties

- `removed` boolean
  - Indicates whether the element has been removed or replaced by one of the previous handlers.
- `text` string read-only
  - The text content of the chunk. Could be empty if the chunk is the last chunk of the text node.
- `lastInTextNode` boolean read-only
  - Specifies whether the chunk is the last chunk of the text node.

#### Methods

- `before(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content before the element.
- `after(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content right after the element.
- `replace(content Content, contentOptions ContentOptions optional)` : Element
  - Removes the element and inserts content in place of it.
- `remove()` : Element
  - Removes the element with all its content.

### Comments

The `comments` function on an element handler allows developers to query and manipulate HTML comment tags.

```js
class ElementHandler {
	comments(comment) {
		// An incoming comment element, such as <!-- My comment -->
	}
}
```

#### Properties

- `comment.removed` boolean
  - Indicates whether the element has been removed or replaced by one of the previous handlers.
- `comment.text` string
  - The text of the comment. This property can be assigned different values, to modify the comment’s text.

#### Methods

- `before(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content before the element.
- `after(content Content, contentOptions ContentOptions optional)` : Element
  - Inserts content right after the element.
- `replace(content Content, contentOptions ContentOptions optional)` : Element
  - Removes the element and inserts content in place of it.
- `remove()` : Element
  - Removes the element with all its content.

### Doctype

The `doctype` function on a document handler allows developers to query a document’s [doctype](https://developer.mozilla.org/en-US/docs/Glossary/Doctype).

```js
class DocumentHandler {
	doctype(doctype) {
		// An incoming doctype element, such as
		// <!DOCTYPE html>
	}
}
```

#### Properties

- `doctype.name` string | null read-only
  - The doctype name.
- `doctype.publicId` string | null read-only
  - The quoted string in the doctype after the PUBLIC atom.
- `doctype.systemId` string | null read-only
  - The quoted string in the doctype after the SYSTEM atom or immediately after the `publicId`.

### End

The `end` function on a document handler allows developers to append content to the end of a document.

```js
class DocumentHandler {
	end(end) {
		// The end of the document
	}
}
```

#### Methods

- `append(content Content, contentOptions ContentOptions optional)` : DocumentEnd
  - Inserts content after the end of the document.

---

## Selectors

`HTMLRewriter` handlers can be attached using the following selectors:

- `*` - Any element.
- `E` - Any element of type E.
- `E:nth-child(n)` - An E element, the n-th child of its parent.
- `E:first-child` - An E element, first child of its parent.
- `E:nth-of-type(n)` - An E element, the n-th sibling of its type.
- `E:first-of-type` - An E element, first sibling of its type.
- `E:not(s)` - An E element that does not match the compound selector s.
- `E.warning` - An E element belonging to the class warning.
- `E#myid` - An E element with ID equal to myid.
- `E[foo]` - An E element with a foo attribute.
- `E[foo="bar"]` - An E element whose foo attribute value is exactly equal to bar.
- `E[foo="bar" i]` - An E element whose foo attribute value is exactly equal to any (ASCII-range) case-permutation of bar.
- `E[foo="bar" s]` - An E element whose foo attribute value is exactly and case-sensitively equal to bar.
- `E[foo~="bar"]` - An E element whose foo attribute value is a list of whitespace-separated values, one of which is exactly equal to bar.
- `E[foo^="bar"]` - An E element whose foo attribute value begins exactly with the string bar.
- `E[foo$="bar"]` - An E element whose foo attribute value ends exactly with the string bar.
- `E[foo*="bar"]` - An E element whose foo attribute value contains the substring bar.
- `E[foo|="en"]` - An E element whose foo attribute value is a hyphen-separated list of values beginning with en.
- `E F` - An F element descendant of an E element.
- `E > F` - An F element child of an E element.

---

## Errors

If a handler throws an exception, parsing is immediately halted, the transformed response body is errored with the thrown exception, and the untransformed response body is canceled (closed). If the transformed response body was already partially streamed back to the client, the client will see a truncated response.

```js
async function handle(request) {
	let oldResponse = await fetch(request);
	let newResponse = new HTMLRewriter()
		.on("*", {
			element(element) {
				throw new Error("A really bad error.");
			},
		})
		.transform(oldResponse);

	// At this point, an expression like `await newResponse.text()`
	// will throw `new Error("A really bad error.")`.
	// Thereafter, any use of `newResponse.body` will throw the same error,
	// and `oldResponse.body` will be closed.
	// Alternatively, this will produce a truncated response to the client:
	return newResponse;
}
```

---

## Related resources

- [Introducing `HTMLRewriter`](https://blog.cloudflare.com/introducing-htmlrewriter/)
- [Tutorial: Localize a Website](/pages/tutorials/localize-a-website/)
- [Example: rewrite links](/workers/examples/rewrite-links/)
- [Example: Inject Turnstile](/workers/examples/turnstile-html-rewriter/)

---

# Runtime APIs

URL: https://developers.cloudflare.com/workers/runtime-apis/

import { DirectoryListing } from "~/components";

The Workers runtime is designed to be [JavaScript standards compliant](https://ecma-international.org/publications-and-standards/standards/ecma-262/) and web-interoperable. Wherever possible, it uses web platform APIs, so that code can be reused across client and server, as well as across [WinterCG](https://wintercg.org/) JavaScript runtimes.

[Workers runtime features](/workers/runtime-apis/) include [compatibility with a subset of Node.js APIs](/workers/runtime-apis/nodejs) and the ability to set a [compatibility date or compatibility flag](/workers/configuration/compatibility-dates/).

---

# Performance and timers

URL: https://developers.cloudflare.com/workers/runtime-apis/performance/

## Background

The Workers runtime supports a subset of the [`Performance` API](https://developer.mozilla.org/en-US/docs/Web/API/Performance), used to measure timing and performance, as well as timing of subrequests and other operations.

### `performance.now()`

The [`performance.now()` method](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) returns a timestamp in milliseconds, representing the time elapsed since `performance.timeOrigin`.
When Workers are deployed to Cloudflare, as a security measure to [mitigate against Spectre attacks](/workers/reference/security-model/#step-1-disallow-timers-and-multi-threading), APIs that return the current time, including [`performance.now()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) and [`Date.now()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now), only advance or increment after I/O occurs. Consider the following examples:

```typescript title="Time is frozen — start will have the exact same value as end."
const start = performance.now();
for (let i = 0; i < 1e6; i++) {
	// do expensive work
}
const end = performance.now();
const timing = end - start; // 0
```

```typescript title="Time advances, because a subrequest has occurred between start and end."
const start = performance.now();
const response = await fetch("https://developers.cloudflare.com/");
const end = performance.now();
const timing = end - start; // duration of the subrequest to developers.cloudflare.com
```

By wrapping a subrequest in calls to `performance.now()` or `Date.now()`, you can measure the timing of a subrequest, fetching a key from KV, an object from R2, or any other form of I/O in your Worker.

In local development, however, timers increment regardless of whether I/O happens. This means that if you need to measure the timing of a piece of code that is CPU intensive and does not involve I/O, you can run your Worker locally via [Wrangler](/workers/wrangler/), which uses the open-source Workers runtime, [workerd](https://github.com/cloudflare/workerd) — the same runtime that your Worker runs in when deployed to Cloudflare.
### `performance.timeOrigin`

The [`performance.timeOrigin`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/timeOrigin) API is a read-only property that returns a baseline timestamp to base other measurements off of. In the Workers runtime, the `timeOrigin` property returns 0.

---

# Request

URL: https://developers.cloudflare.com/workers/runtime-apis/request/

import { Type, MetaInfo } from "~/components";

The [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request) interface represents an HTTP request and is part of the [Fetch API](/workers/runtime-apis/fetch/).

## Background

The most common way you will encounter a `Request` object is as the `request` parameter passed to an incoming [`fetch()` handler](/workers/runtime-apis/handlers/fetch/):

```js null {2}
export default {
	async fetch(request, env, ctx) {
		return new Response('Hello World!');
	},
};
```

You may also want to construct a `Request` yourself when you need to modify a request object, because the incoming `request` parameter that you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) is immutable.

```js
export default {
	async fetch(request, env, ctx) {
		const url = "https://example.com";
		const modifiedRequest = new Request(url, request);
		// ...
	},
};
```

The [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) invokes the `Request` constructor. The [`RequestInit`](#options) and [`RequestInitCfProperties`](#the-cf-property-requestinitcfproperties) types defined below also describe the valid parameters that can be passed to the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/).

***

## Constructor

```js
let request = new Request(input, options)
```

### Parameters

* `input` string | Request
  * Either a string that contains a URL, or an existing `Request` object.
* `options` options optional
  * Optional options object that contains settings to apply to the `Request`.
#### `options`

An object containing properties that you want to apply to the request.

* `cache` `undefined | 'no-store'` optional
  * Standard HTTP `cache` header. Only `cache: 'no-store'` is supported. Any other cache header will result in a `TypeError` with the message `Unsupported cache mode: `.
* `cf` RequestInitCfProperties optional
  * Cloudflare-specific properties that can be set on the `Request` that control how Cloudflare’s global network handles the request.
* `method`
  * The HTTP request method. The default is `GET`. In Workers, all [HTTP request methods](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods) are supported, except for [`CONNECT`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Methods/CONNECT).
* `headers` Headers optional
  * A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers).
* `body` string | ReadableStream | FormData | URLSearchParams optional
  * The request body, if any.
  * Note that a request using the GET or HEAD method cannot have a body.
* `redirect`
  * The redirect mode to use: `follow`, `error`, or `manual`. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`.

#### The `cf` property (`RequestInitCfProperties`)

An object containing Cloudflare-specific properties that can be set on the `Request` object. For example:

```js
// Disable ScrapeShield for this request.
fetch(event.request, { cf: { scrapeShield: false } })
```

Invalid or incorrectly-named keys in the `cf` object will be silently ignored. Consider using TypeScript and [`@cloudflare/workers-types`](https://www.npmjs.com/package/@cloudflare/workers-types) to ensure proper use of the `cf` object.
* `apps`
  * Whether [Cloudflare Apps](https://www.cloudflare.com/apps/) should be enabled for this request. Defaults to `true`.
* `cacheEverything`
  * Treats all content as static and caches all [file types](/cache/concepts/default-cache-behavior#default-cached-file-extensions) beyond the Cloudflare default cached content. Respects cache headers from the origin web server. This is equivalent to setting the Page Rule [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). Defaults to `false`. This option applies to `GET` and `HEAD` request methods only.
* `cacheKey`
  * A request’s cache key is what determines if two requests are the same for caching purposes. If a request has the same cache key as some previous request, then Cloudflare can serve the same cached response for both.
* `cacheTags` Array\<string> optional
  * This option appends additional [**Cache-Tag**](/cache/how-to/purge-cache/purge-by-tags/) headers to the response from the origin server. This allows for purges of cached content based on tags provided by the Worker, without modifications to the origin server. This is performed using the [**Purge by Tag**](/cache/how-to/purge-cache/purge-by-tags/#purge-using-cache-tags) feature.
* `cacheTtl`
  * This option forces Cloudflare to cache the response for this request, regardless of what headers are seen on the response. This is equivalent to setting two Page Rules: [**Edge Cache TTL**](/cache/how-to/edge-browser-cache-ttl/) and [**Cache Level** (to **Cache Everything**)](/rules/page-rules/reference/settings/). The value must be zero or a positive number. A value of `0` indicates that the cache asset expires immediately. This option applies to `GET` and `HEAD` request methods only.
* `cacheTtlByStatus` `{ [key: string]: number }` optional
  * This option is a version of the `cacheTtl` feature which chooses a TTL based on the response’s status code.
If the response to this request has a status code that matches, Cloudflare will cache for the instructed time and override cache directives sent by the origin. For example: `{ "200-299": 86400, "404": 1, "500-599": 0 }`. The value can be any integer, including zero and negative integers. A value of `0` indicates that the cache asset expires immediately. Any negative value instructs Cloudflare not to cache at all. This option applies to `GET` and `HEAD` request methods only.

* `image` Object | null optional
  * Enables [Image Resizing](/images/transform-images/) for this request. The possible values are described in the [Transform images via Workers](/images/transform-images/transform-via-workers/) documentation.
* `mirage`
  * Whether [Mirage](https://www.cloudflare.com/website-optimization/mirage/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`.
* `polish`
  * Sets [Polish](https://blog.cloudflare.com/introducing-polish-automatic-image-optimizati/) mode. The possible values are `lossy`, `lossless` or `off`.
* `resolveOverride`
  * Directs the request to an alternate origin server by overriding the DNS lookup. The value of `resolveOverride` specifies an alternate hostname which will be used when determining the origin IP address, instead of using the hostname specified in the URL. The `Host` header of the request will still match what is in the URL. Thus, `resolveOverride` allows a request to be sent to a different server than the URL / `Host` header specifies. However, `resolveOverride` will only take effect if both the URL host and the host specified by `resolveOverride` are within your zone. If either specifies a host from a different zone / domain, then the option will be ignored for security reasons.
If you need to direct a request to a host outside your zone (while keeping the `Host` header pointing within your zone), first create a CNAME record within your zone pointing to the outside host, and then set `resolveOverride` to point at the CNAME record. Note that, for security reasons, it is not possible to set the `Host` header to specify a host outside of your zone unless the request is actually being sent to that host. * `scrapeShield` * Whether [ScrapeShield](https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/) should be enabled for this request, if otherwise configured for this zone. Defaults to `true`. * `webp` * Enables or disables [WebP](https://blog.cloudflare.com/a-very-webp-new-year-from-cloudflare/) image format in [Polish](/images/polish/). *** ## Properties All properties of an incoming `Request` object (the request you receive from the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/)) are read-only. To modify the properties of an incoming request, create a new `Request` object and pass the options to modify to its [constructor](#constructor). * `body` ReadableStream read-only * Stream of the body contents. * `bodyUsed` Boolean read-only * Declares whether the body has been used in a response yet. * `cf` IncomingRequestCfProperties read-only * An object containing properties about the incoming request provided by Cloudflare’s global network. * This property is read-only (unless created from an existing `Request`). To modify its values, pass in the new values on the [`cf` key of the `init` options argument](/workers/runtime-apis/request/#the-cf-property-requestinitcfproperties) when creating a new `Request` object. * `headers` Headers read-only * A [`Headers` object](https://developer.mozilla.org/en-US/docs/Web/API/Headers). 
* Compared to browsers, Cloudflare Workers imposes very few restrictions on what headers you are allowed to send. For example, a browser will not allow you to set the `Cookie` header, since the browser is responsible for handling cookies itself. Workers, however, has no special understanding of cookies, and treats the `Cookie` header like any other header. :::caution If the response is a redirect and the redirect mode is set to `follow` (see below), then all headers will be forwarded to the redirect destination, even if the destination is a different hostname or domain. This includes sensitive headers like `Cookie`, `Authorization`, or any application-specific headers. If this is not the behavior you want, you should set redirect mode to `manual` and implement your own redirect policy. Note that redirect mode defaults to `manual` for requests that originated from the Worker's client, so this warning only applies to `fetch()`es made by a Worker that are not proxying the original request. ::: * `method` string read-only * Contains the request’s method, for example, `GET`, `POST`, etc. * `redirect` string read-only * The redirect mode to use: `follow`, `error`, or `manual`. The `fetch` method will automatically follow redirects if the redirect mode is set to `follow`. If set to `manual`, the `3xx` redirect response will be returned to the caller as-is. The default for a new `Request` object is `follow`. Note, however, that the incoming `Request` property of a `FetchEvent` will have redirect mode `manual`. * `url` string read-only * Contains the URL of the request. ### `IncomingRequestCfProperties` In addition to the properties on the standard [`Request`](https://developer.mozilla.org/en-US/docs/Web/API/Request) object, the `request.cf` object on an inbound `Request` contains information about the request provided by Cloudflare’s global network. All plans have access to: * `asn` Number * ASN of the incoming request, for example, `395747`. 
* `asOrganization` string * The organization which owns the ASN of the incoming request, for example, `Google Cloud`. * `botManagement` Object | null * Only set when using Cloudflare Bot Management. Object with the following properties: `score`, `verifiedBot`, `staticResource`, `ja3Hash`, `ja4`, and `detectionIds`. Refer to [Bot Management Variables](/bots/reference/bot-management-variables/) for more details. * `clientAcceptEncoding` string | null * If Cloudflare replaces the value of the `Accept-Encoding` header, the original value is stored in the `clientAcceptEncoding` property, for example, `"gzip, deflate, br"`. * `colo` string * The three-letter [`IATA`](https://en.wikipedia.org/wiki/IATA_airport_code) airport code of the data center that the request hit, for example, `"DFW"`. * `country` string | null * Country of the incoming request. The two-letter country code in the request. This is the same value as that provided in the `CF-IPCountry` header, for example, `"US"`. * `isEUCountry` string | null * If the country of the incoming request is in the EU, this will return `"1"`. Otherwise, this property will be omitted. * `httpProtocol` string * HTTP Protocol, for example, `"HTTP/2"`. * `requestPriority` string | null * The browser-requested prioritization information in the request object, for example, `"weight=192;exclusive=0;group=3;group-weight=127"`. * `tlsCipher` string * The cipher for the connection to Cloudflare, for example, `"AEAD-AES128-GCM-SHA256"`. * `tlsClientAuth` Object | null * Only set when using Cloudflare Access or API Shield (mTLS). Object with the following properties: `certFingerprintSHA1`, `certFingerprintSHA256`, `certIssuerDN`, `certIssuerDNLegacy`, `certIssuerDNRFC2253`, `certIssuerSKI`, `certIssuerSerial`, `certNotAfter`, `certNotBefore`, `certPresented`, `certRevoked`, `certSKI`, `certSerial`, `certSubjectDN`, `certSubjectDNLegacy`, `certSubjectDNRFC2253`, `certVerified`. 
* `tlsClientCiphersSha1` string * The SHA-1 hash (Base64-encoded) of the cipher suite sent by the client during the TLS handshake, encoded in big-endian format. For example, `"GXSPDLP4G3X+prK73a4wBuOaHRc="`. * `tlsClientExtensionsSha1` string * The SHA-1 hash (Base64-encoded) of the TLS client extensions sent during the handshake, encoded in big-endian format. For example, `"OWFiM2I5ZDc0YWI0YWYzZmFkMGU0ZjhlYjhiYmVkMjgxNTU5YTU2Mg=="`. * `tlsClientExtensionsSha1Le` string * The SHA-1 hash (Base64-encoded) of the TLS client extensions sent during the handshake, encoded in little-endian format. For example, `"7zIpdDU5pvFPPBI2/PCzqbaXnRA="`. * `tlsClientHelloLength` string * The length of the client hello message sent in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). For example, `"508"`. Specifically, the length of the bytestring of the client hello. * `tlsClientRandom` string * The value of the 32-byte random value provided by the client in a [TLS handshake](https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/). Refer to [RFC 8446](https://datatracker.ietf.org/doc/html/rfc8446#section-4.1.2) for more details. * `tlsVersion` string * The TLS version of the connection to Cloudflare, for example, `TLSv1.3`. * `city` string | null * City of the incoming request, for example, `"Austin"`. * `continent` string | null * Continent of the incoming request, for example, `"NA"`. * `latitude` string | null * Latitude of the incoming request, for example, `"30.27130"`. * `longitude` string | null * Longitude of the incoming request, for example, `"-97.74260"`. * `postalCode` string | null * Postal code of the incoming request, for example, `"78701"`. * `metroCode` string | null * Metro code (DMA) of the incoming request, for example, `"635"`. 
* `region` string | null
  * If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) name for the first-level region associated with the IP address of the incoming request, for example, `"Texas"`.
* `regionCode` string | null
  * If known, the [ISO 3166-2](https://en.wikipedia.org/wiki/ISO_3166-2) code for the first-level region associated with the IP address of the incoming request, for example, `"TX"`.
* `timezone` string
  * Timezone of the incoming request, for example, `"America/Chicago"`.

:::caution
The `request.cf` object is not available in the Cloudflare Workers dashboard or Playground preview editor.
:::

***

## Methods

### Instance methods

These methods are only available on an instance of a `Request` object or through its prototype.

* `clone()` : Request
  * Creates a copy of the `Request` object.
* `arrayBuffer()` : Promise\<ArrayBuffer>
  * Returns a promise that resolves with an [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer) representation of the request body.
* `formData()` : Promise\<FormData>
  * Returns a promise that resolves with a [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) representation of the request body.
* `json()` : Promise\<Object>
  * Returns a promise that resolves with a JSON representation of the request body.
* `text()` : Promise\<string>
  * Returns a promise that resolves with a string (text) representation of the request body.

***

## The `Request` context

Each time a Worker is invoked by an incoming HTTP request, the [`fetch()` handler](/workers/runtime-apis/handlers/fetch) is called on your Worker.
The `Request` context starts when the `fetch()` handler is called, and asynchronous tasks (such as making a subrequest using the [`fetch() API`](/workers/runtime-apis/fetch/)) can only be run inside the `Request` context:

```js
export default {
  async fetch(request, env, ctx) {
    // Request context starts here
    return new Response('Hello World!');
  },
};
```

### When passing a promise to fetch event `.respondWith()`

If you pass a Response promise to the fetch event `.respondWith()` method, the request context is active during any asynchronous tasks which run before the Response promise has settled. You can pass the event to an async handler, for example:

```js
addEventListener("fetch", event => {
  event.respondWith(eventHandler(event))
})

// No request context available here

async function eventHandler(event){
  // Request context available here
  return new Response("Hello, Workers!")
}
```

### Errors when attempting to access an inactive `Request` context

Any attempt to use APIs such as `fetch()` or access the `Request` context during script startup will throw an exception:

```js
const promise = fetch("https://example.com/") // Error

async function eventHandler(event){..}
```

This code snippet will throw during script startup, and the `"fetch"` event listener will never be registered.

***

### Set the `Content-Length` header

The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Request` is. Any value manually set by user code in the `Headers` will be ignored. To have a `Content-Length` header with a specific value specified, the `body` of the `Request` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`. A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.
```js
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

const req = new Request('https://example.org', { method: 'POST', body: readable });
```

Using any other type of `ReadableStream` as the body of a request will result in chunked encoding being used.

***

## Related resources

* [Examples: Modify request property](/workers/examples/modify-request-property/)
* [Examples: Accessing the `cf` object](/workers/examples/accessing-the-cloudflare-object/)
* [Reference: `Response`](/workers/runtime-apis/response/)
* Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# TCP sockets

URL: https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/

The Workers runtime provides the `connect()` API for creating outbound [TCP connections](https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/) from Workers.

Many application-layer protocols are built on top of the Transmission Control Protocol (TCP). Application-layer protocols such as SSH, MQTT, SMTP, FTP, and IRC, as well as most database wire protocols (including MySQL, PostgreSQL, and MongoDB), require an underlying TCP socket API in order to work.

:::note
Connecting to a PostgreSQL database? You should use [Hyperdrive](/hyperdrive/), which provides the `connect()` API with built-in connection pooling and query caching.
:::

:::note
Outbound TCP connections from Workers are sourced from an IP prefix that is not part of Cloudflare's published [list of IP ranges](https://www.cloudflare.com/ips/).
:::

## `connect()`

The `connect()` function returns a TCP socket, with both a [readable](/workers/runtime-apis/streams/readablestream/) and [writable](/workers/runtime-apis/streams/writablestream/) stream of data.
This allows you to read and write data on an ongoing basis, as long as the connection remains open. `connect()` is provided as a [Runtime API](/workers/runtime-apis/), and is accessed by importing the `connect` function from `cloudflare:sockets`. This process is similar to how one imports built-in modules in Node.js. Refer to the following codeblock for an example of creating a TCP socket, writing to it, and returning the readable side of the socket as a response:

```typescript
import { connect } from 'cloudflare:sockets';

export default {
  async fetch(req): Promise<Response> {
    const gopherAddr = { hostname: "gopher.floodgap.com", port: 70 };
    const url = new URL(req.url);

    try {
      const socket = connect(gopherAddr);

      const writer = socket.writable.getWriter()
      const encoder = new TextEncoder();
      const encoded = encoder.encode(url.pathname + "\r\n");
      await writer.write(encoded);
      await writer.close();

      return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } });
    } catch (error) {
      return new Response("Socket connection failed: " + error, { status: 500 });
    }
  }
} satisfies ExportedHandler;
```

* connect(address: SocketAddress | string, options?: SocketOptions) : `Socket`
  * `connect()` accepts either a URL string or [`SocketAddress`](/workers/runtime-apis/tcp-sockets/#socketaddress) to define the hostname and port number to connect to, and an optional configuration object, [`SocketOptions`](/workers/runtime-apis/tcp-sockets/#socketoptions). It returns an instance of a [`Socket`](/workers/runtime-apis/tcp-sockets/#socket).

### `SocketAddress`

* `hostname` string
  * The hostname to connect to. Example: `cloudflare.com`.
* `port` number
  * The port number to connect to. Example: `5432`.

### `SocketOptions`

* `secureTransport` "off" | "on" | "starttls" (defaults to `off`)
  * Specifies whether or not to use [TLS](https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/) when creating the TCP socket.
* `off` — Do not use TLS.
* `on` — Use TLS.
* `starttls` — Do not use TLS initially, but allow the socket to be upgraded to use TLS by calling [`startTls()`](/workers/runtime-apis/tcp-sockets/#opportunistic-tls-starttls).

* `allowHalfOpen` boolean (defaults to `false`)
  * Controls whether the writable side of the TCP socket automatically closes on end-of-file (EOF). When set to `false`, the writable side of the TCP socket automatically closes on EOF. When set to `true`, the writable side of the TCP socket remains open on EOF.
  * This option is similar to that offered by the Node.js [`net` module](https://nodejs.org/api/net.html) and allows interoperability with code which utilizes it.

### `SocketInfo`

* `remoteAddress` string | null
  * The address of the remote peer the socket is connected to. May not always be set.
* `localAddress` string | null
  * The address of the local network endpoint for this socket. May not always be set.

### `Socket`

* readable : ReadableStream
  * Returns the readable side of the TCP socket.
* writable : WritableStream
  * Returns the writable side of the TCP socket.
  * The `WritableStream` returned only accepts chunks of `Uint8Array` or its views.
* `opened` `Promise<SocketInfo>`
  * This promise is resolved when the socket connection is established and is rejected if the socket encounters an error.
* `closed` `Promise<void>`
  * This promise is resolved when the socket is closed and is rejected if the socket encounters an error.
* `close()` `Promise<void>`
  * Closes the TCP socket. Both the readable and writable streams are forcibly closed.
* startTls() : Socket
  * Upgrades an insecure socket to a secure one that uses TLS, returning a new [Socket](/workers/runtime-apis/tcp-sockets#socket). Note that in order to call `startTls()`, you must set [`secureTransport`](/workers/runtime-apis/tcp-sockets/#socketoptions) to `starttls` when initially calling `connect()` to create the socket.
## Opportunistic TLS (StartTLS)

Many TCP-based systems, including databases and email servers, require that clients use opportunistic TLS (otherwise known as [StartTLS](https://en.wikipedia.org/wiki/Opportunistic_TLS)) when connecting. In this pattern, the client first creates an insecure TCP socket, without TLS, and then upgrades it to a secure TCP socket that uses TLS. The `connect()` API simplifies this by providing a method, `startTls()`, which returns a new `Socket` instance that uses TLS:

```typescript
import { connect } from "cloudflare:sockets"

const address = { hostname: "example-postgres-db.com", port: 5432 };
const socket = connect(address, { secureTransport: "starttls" });
const secureSocket = socket.startTls();
```

* `startTls()` can only be called if `secureTransport` is set to `starttls` when creating the initial TCP socket.
* Once `startTls()` is called, the initial socket is closed and can no longer be read from or written to. In the example above, any time after `startTls()` is called, you would use the newly created `secureSocket`. Any existing readers and writers based off the original socket will no longer work. You must create new readers and writers from the newly created `secureSocket`.
* `startTls()` should only be called once on an existing socket.

## Handle errors

To handle errors when creating a new TCP socket, reading from a socket, or writing to a socket, wrap these calls inside `try..catch` blocks. The following example opens a connection to Google.com, initiates an HTTP request, and returns the response.
If any of this fails and throws an exception, it returns a `500` response:

```typescript
import { connect } from 'cloudflare:sockets';

const connectionUrl = { hostname: "google.com", port: 80 };

export interface Env { }

export default {
  async fetch(req, env, ctx): Promise<Response> {
    try {
      const socket = connect(connectionUrl);

      const writer = socket.writable.getWriter();
      const encoder = new TextEncoder();
      const encoded = encoder.encode("GET / HTTP/1.0\r\n\r\n");
      await writer.write(encoded);
      await writer.close();

      return new Response(socket.readable, { headers: { "Content-Type": "text/plain" } });
    } catch (error) {
      return new Response(`Socket connection failed: ${error}`, { status: 500 });
    }
  }
} satisfies ExportedHandler;
```

## Close TCP connections

You can close a TCP connection by calling `close()` on the socket. This will close both the readable and writable sides of the socket.

```typescript
import { connect } from "cloudflare:sockets"

const socket = connect({ hostname: "my-url.com", port: 70 });
socket.close();

// After close() is called, you can no longer read from the readable side of the socket
const reader = socket.readable.getReader(); // This fails
```

## Considerations

* Outbound TCP sockets to [Cloudflare IP ranges](https://www.cloudflare.com/ips/) are temporarily blocked, but will be re-enabled shortly.
* TCP sockets cannot be created in global scope and shared across requests. You should always create TCP sockets within a handler (for example, [`fetch()`](/workers/get-started/guide/#3-write-code), [`scheduled()`](/workers/runtime-apis/handlers/scheduled/), [`queue()`](/queues/configuration/javascript-apis/#consumer), or [`alarm()`](/durable-objects/api/alarms/)).
* Each open TCP socket counts towards the maximum number of [open connections](/workers/platform/limits/#simultaneous-open-connections) that can be simultaneously open.
* By default, Workers cannot create outbound TCP connections on port `25` to send email to SMTP mail servers. [Cloudflare Email Workers](/email-routing/email-workers/) provides APIs to process and forward email. * Support for handling inbound TCP connections is [coming soon](https://blog.cloudflare.com/workers-tcp-socket-api-connect-databases/). Currently, it is not possible to make an inbound TCP connection to your Worker, for example, by using the `CONNECT` HTTP method. ## Troubleshooting Review descriptions of common error messages you may see when working with TCP Sockets, what the error messages mean, and how to solve them. ### `proxy request failed, cannot connect to the specified address` Your socket is connecting to an address that was disallowed. Examples of a disallowed address include Cloudflare IPs, `localhost`, and private network IPs. If you need to connect to addresses on port `80` or `443` to make HTTP requests, use [`fetch`](/workers/runtime-apis/fetch/). ### `TCP Loop detected` Your socket is connecting back to the Worker that initiated the outbound connection. In other words, the Worker is connecting back to itself. This is currently not supported. ### `Connections to port 25 are prohibited` Your socket is connecting to an address on port `25`. This is usually the port used for SMTP mail servers. Workers cannot create outbound connections on port `25`. Consider using [Cloudflare Email Workers](/email-routing/email-workers/) instead. --- # Response URL: https://developers.cloudflare.com/workers/runtime-apis/response/ The `Response` interface represents an HTTP response and is part of the Fetch API. *** ## Constructor ```js let response = new Response(body, init); ``` ### Parameters * `body` optional * An object that defines the body text for the response. 
Can be `null` or any one of the following types:

* BufferSource
* FormData
* ReadableStream
* URLSearchParams
* USVString

* `init` optional
  * An `options` object that contains custom settings to apply to the response. Valid options for the `options` object include:
    * `cf` any | null
      * An object that contains Cloudflare-specific information. This object is not part of the Fetch API standard and is only available in Cloudflare Workers. This field is only used by consumers of the Response for informational purposes and does not have any impact on Workers behavior.
    * `encodeBody` string
      * Workers compress data according to the `content-encoding` header when transmitting. To serve data that is already compressed, this property has to be set to `"manual"`; otherwise, the default is `"automatic"`.
    * `headers` Headers | ByteString
      * Any headers to add to your response that are contained within a [`Headers`](/workers/runtime-apis/request/#parameters) object or object literal of [`ByteString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) key-value pairs.
    * `status` int
      * The status code for the response, such as `200`.
    * `statusText` string
      * The status message associated with the status code, such as `OK`.
    * `webSocket` WebSocket | null
      * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.

## Properties

* `response.body` ReadableStream
  * A getter to get the body contents.
* `response.bodyUsed` boolean
  * A boolean indicating if the body was used in the response.
* `response.headers` Headers
  * The headers for the response.
* `response.ok` boolean
  * A boolean indicating if the response was successful (status in the range `200`-`299`).
* `response.redirected` boolean
  * A boolean indicating if the response is the result of a redirect. If so, its URL list has more than one entry.
* `response.status` int
  * The status code of the response (for example, `200` to indicate success).
* `response.statusText` string
  * The status message corresponding to the status code (for example, `OK` for `200`).
* `response.url` string
  * The URL of the response. The value is the final URL obtained after any redirects.
* `response.webSocket` WebSocket?
  * This is present in successful WebSocket handshake responses. For example, if a client sends a WebSocket upgrade request to an origin and a Worker intercepts the request and then forwards it to the origin and the origin replies with a successful WebSocket upgrade response, the Worker sees `response.webSocket`. This establishes a WebSocket connection proxied through a Worker. Note that you cannot intercept data flowing over a WebSocket connection.

## Methods

### Instance methods

* `clone()` : Response
  * Creates a clone of a [`Response`](#response) object.

### Static methods

* `Response.json()` : Response
  * Creates a new response with a JSON-serialized payload.
* `Response.redirect()` : Response
  * Creates a new response with a redirect to a different URL.
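As a quick sketch of these properties and `clone()` together (runnable in any runtime with the Fetch API), a response can be cloned before its body is consumed, and both copies read independently:

```js
// Construct a response, clone it, and read both bodies.
const original = new Response("Hello, Workers!", {
  status: 200,
  headers: { "Content-Type": "text/plain" },
});
const copy = original.clone();

console.log(original.ok); // true (status is in the 200-299 range)
console.log(await original.text()); // "Hello, Workers!"
console.log(await copy.text()); // "Hello, Workers!"
```

Note that cloning must happen before either body is read; once a body has been consumed, `bodyUsed` is `true` and `clone()` throws.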
### Additional instance methods

`Response` implements the [`Body`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#body) mixin of the [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), and therefore `Response` instances additionally have the following methods available:

* arrayBuffer() : Promise\<ArrayBuffer>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with an [`ArrayBuffer`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer).
* formData() : Promise\<FormData>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with a [`FormData`](https://developer.mozilla.org/en-US/docs/Web/API/FormData) object.
* json() : Promise\<Object>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with the result of parsing the body text as [`JSON`](https://developer.mozilla.org/en-US/docs/Web/).
* text() : Promise\<USVString>
  * Takes a [`Response`](#response) stream, reads it to completion, and returns a promise that resolves with a [`USVString`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String) (text).

### Set the `Content-Length` header

The `Content-Length` header will be automatically set by the runtime based on whatever the data source for the `Response` is. Any value manually set by user code in the `Headers` will be ignored. To have a `Content-Length` header with a specific value specified, the `body` of the `Response` must be either a `FixedLengthStream` or a fixed-length value such as a string or `TypedArray`. A `FixedLengthStream` is an identity `TransformStream` that permits only a fixed number of bytes to be written to it.
```js
const { writable, readable } = new FixedLengthStream(11);

const enc = new TextEncoder();
const writer = writable.getWriter();
writer.write(enc.encode("hello world"));
writer.close();

return new Response(readable);
```

Using any other type of `ReadableStream` as the body of a response will result in chunked encoding being used.

***

## Related resources

* [Examples: Modify response](/workers/examples/modify-response/)
* [Examples: Conditional response](/workers/examples/conditional-response/)
* [Reference: `Request`](/workers/runtime-apis/request/)
* Write your Worker code in [ES modules syntax](/workers/reference/migrate-to-module-workers/) for an optimized experience.

---

# Web Crypto

URL: https://developers.cloudflare.com/workers/runtime-apis/web-crypto/

import { TabItem, Tabs } from "~/components"

## Background

The Web Crypto API provides a set of low-level functions for common cryptographic tasks. The Workers runtime implements the full surface of this API, but with some differences in the [supported algorithms](#supported-algorithms) compared to those implemented in most browsers.

Performing cryptographic operations using the Web Crypto API is significantly faster than performing them purely in JavaScript. If you want to perform CPU-intensive cryptographic operations, you should consider using the Web Crypto API.

The Web Crypto API is implemented through the `SubtleCrypto` interface, accessible via the global `crypto.subtle` binding. A simple example of calculating a digest (also known as a hash) is:

```js
const myText = new TextEncoder().encode('Hello world!');

const myDigest = await crypto.subtle.digest(
  {
    name: 'SHA-256',
  },
  myText // The data you want to hash as an ArrayBuffer
);

console.log(new Uint8Array(myDigest));
```

Some common uses include [signing requests](/workers/examples/signing-requests/).
:::caution The Web Crypto API differs significantly from the [Node.js Crypto API](/workers/runtime-apis/nodejs/crypto/). If you are working with code that relies on the Node.js Crypto API, you can use it by enabling the [`nodejs_compat` compatibility flag](/workers/runtime-apis/nodejs/). ::: *** ## Constructors * crypto.DigestStream(algorithm) DigestStream * A non-standard extension to the `crypto` API that supports generating a hash digest from streaming data. The `DigestStream` itself is a [`WritableStream`](/workers/runtime-apis/streams/writablestream/) that does not retain the data written into it. Instead, it generates a hash digest automatically when the flow of data has ended. ### Parameters * algorithmstring | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax). ### Usage

```js
export default {
  async fetch(req) {
    // Fetch from origin
    const res = await fetch(req);

    // We need to read the body twice so we `tee` it (get two instances)
    const [bodyOne, bodyTwo] = res.body.tee();

    // Make a new response so we can set the headers (responses from `fetch` are immutable)
    const newRes = new Response(bodyOne, res);

    // Create a SHA-256 digest stream and pipe the body into it
    const digestStream = new crypto.DigestStream("SHA-256");
    bodyTwo.pipeTo(digestStream);

    // Get the final result
    const digest = await digestStream.digest;

    // Turn it into a hex string
    const hexString = [...new Uint8Array(digest)]
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');

    // Set a header with the SHA-256 hash and return the response
    newRes.headers.set("x-content-digest", `SHA-256=${hexString}`);
    return newRes;
  }
}
```

```ts
export default {
  async fetch(req): Promise<Response> {
    // Fetch from origin
    const res = await fetch(req);

    // We need to read the body twice so we `tee` it (get two instances)
    const [bodyOne, bodyTwo] = res.body.tee();

    // Make a new response so we can set the headers (responses from `fetch` are immutable)
    const newRes = new Response(bodyOne, res);

    // Create a SHA-256 digest stream and pipe the body into it
    const digestStream = new crypto.DigestStream("SHA-256");
    bodyTwo.pipeTo(digestStream);

    // Get the final result
    const digest = await digestStream.digest;

    // Turn it into a hex string
    const hexString = [...new Uint8Array(digest)]
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');

    // Set a header with the SHA-256 hash and return the response
    newRes.headers.set("x-content-digest", `SHA-256=${hexString}`);
    return newRes;
  }
} satisfies ExportedHandler;
```

## Methods * crypto.randomUUID() : string * Generates a new random (version 4) UUID as defined in [RFC 4122](https://www.rfc-editor.org/rfc/rfc4122.txt). * crypto.getRandomValues(bufferArrayBufferView) : ArrayBufferView * Fills the passed ArrayBufferView with cryptographically sound random values and returns the buffer. ### Parameters * bufferArrayBufferView * Must be an Int8Array | Uint8Array | Uint8ClampedArray | Int16Array | Uint16Array | Int32Array | Uint32Array | BigInt64Array | BigUint64Array. ## SubtleCrypto Methods These methods are all accessed via [`crypto.subtle`](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto#Methods), which is also documented in detail on MDN. ### encrypt * encrypt(algorithm, key, data) : Promise\<ArrayBuffer> * Returns a Promise that fulfills with the encrypted data corresponding to the clear text, algorithm, and key given as parameters. #### Parameters * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/encrypt#Syntax).
* keyCryptoKey * dataBufferSource ### decrypt * decrypt(algorithm, key, data) : Promise\ * Returns a Promise that fulfills with the clear data corresponding to the ciphertext, algorithm, and key given as parameters. #### Parameters * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/decrypt#Syntax). * keyCryptoKey * dataBufferSource ### sign * sign(algorithm, key, data) : Promise\ * Returns a Promise that fulfills with the signature corresponding to the text, algorithm, and key given as parameters. #### Parameters * algorithmstring | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/sign#Syntax). * keyCryptoKey * dataArrayBuffer ### verify * verify(algorithm, key, signature, data) : Promise\ * Returns a Promise that fulfills with a Boolean value indicating if the signature given as a parameter matches the text, algorithm, and key that are also given as parameters. #### Parameters * algorithmstring | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/verify#Syntax). * keyCryptoKey * signatureArrayBuffer * dataArrayBuffer ### digest * digest(algorithm, data) : Promise\ * Returns a Promise that fulfills with a digest generated from the algorithm and text given as parameters. #### Parameters * algorithmstring | object * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/digest#Syntax). 
* dataArrayBuffer ### generateKey * generateKey(algorithm, extractable, keyUsages) : Promise\<CryptoKey> | Promise\<CryptoKeyPair> * Returns a Promise that fulfills with a newly generated `CryptoKey`, for symmetric algorithms, or a `CryptoKeyPair`, containing two newly generated keys, for asymmetric algorithms. For example, to generate a new AES-GCM key:

```js
let key = await crypto.subtle.generateKey(
  {
    name: 'AES-GCM',
    length: 256,
  },
  true,
  ['encrypt', 'decrypt']
);
```

#### Parameters * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax). * extractablebool * keyUsagesArray * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/generateKey#Syntax). ### deriveKey * deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages) : Promise\<CryptoKey> * Returns a Promise that fulfills with a newly generated `CryptoKey` derived from the base key and specific algorithm given as parameters. #### Parameters * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax). * baseKeyCryptoKey * derivedKeyAlgorithmobject * Defines the algorithm the derived key will be used for in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax).
* extractablebool * keyUsagesArray * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey#Syntax) ### deriveBits * deriveBits(algorithm, baseKey, length) : Promise\ * Returns a Promise that fulfills with a newly generated buffer of pseudo-random bits derived from the base key and specific algorithm given as parameters. It returns a Promise which will be fulfilled with an `ArrayBuffer` containing the derived bits. This method is very similar to `deriveKey()`, except that `deriveKey()` returns a `CryptoKey` object rather than an `ArrayBuffer`. Essentially, `deriveKey()` is composed of `deriveBits()` followed by `importKey()`. #### Parameters * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveBits#Syntax). * baseKeyCryptoKey * lengthint * Length of the bit string to derive. ### importKey * importKey(format, keyData, algorithm, extractable, keyUsages) : Promise\ * Transform a key from some external, portable format into a `CryptoKey` for use with the Web Crypto API. #### Parameters * formatstring * Describes [the format of the key to be imported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax). * keyDataArrayBuffer * algorithmobject * Describes the algorithm to be used, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax). 
* extractablebool * keyUsagesArray * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/importKey#Syntax) ### exportKey * exportKey(formatstring, keyCryptoKey) : Promise\ * Transform a `CryptoKey` into a portable format, if the `CryptoKey` is `extractable`. #### Parameters * formatstring * Describes the [format in which the key will be exported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/exportKey#Syntax). * keyCryptoKey ### wrapKey * wrapKey(format, key, wrappingKey, wrapAlgo) : Promise\ * Transform a `CryptoKey` into a portable format, and then encrypt it with another key. This renders the `CryptoKey` suitable for storage or transmission in untrusted environments. #### Parameters * formatstring * Describes the [format in which the key will be exported](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax) before being encrypted. * keyCryptoKey * wrappingKeyCryptoKey * wrapAlgoobject * Describes the algorithm to be used to encrypt the exported key, including any required parameters, in [an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/wrapKey#Syntax). ### unwrapKey * unwrapKey(format, key, unwrappingKey, unwrapAlgo, 
unwrappedKeyAlgo, extractable, keyUsages)
: Promise\<CryptoKey> * Transform a key that was wrapped by `wrapKey()` back into a `CryptoKey`. #### Parameters * formatstring * Describes the [data format of the key to be unwrapped](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * keyCryptoKey * unwrappingKeyCryptoKey * unwrapAlgoobject * Describes the algorithm that was used to encrypt the wrapped key, [in an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * unwrappedKeyAlgoobject * Describes the key to be unwrapped, [in an algorithm-specific format](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax). * extractablebool * keyUsagesArray * An Array of strings indicating the [possible usages of the new key](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/unwrapKey#Syntax) ### timingSafeEqual * timingSafeEqual(a, b) : bool * Compare two buffers in a way that is resistant to timing attacks. This is a non-standard extension to the Web Crypto API. #### Parameters * aArrayBuffer | TypedArray * bArrayBuffer | TypedArray ### Supported algorithms Workers implements all operations of the [WebCrypto standard](https://www.w3.org/TR/WebCryptoAPI/), as shown in the following table. A checkmark (✓) indicates that this feature is believed to be fully supported according to the spec.
An x (✘) indicates that this feature is part of the specification but not implemented.
If a feature only implements the operation partially, details are listed.

| Algorithm | sign()<br/>verify() | encrypt()<br/>decrypt() | digest() | deriveBits()<br/>deriveKey() | generateKey() | wrapKey()<br/>unwrapKey() | exportKey() | importKey() |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| RSASSA PKCS1 v1.5 | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| RSA PSS | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| RSA OAEP |  | ✓ |  |  | ✓ | ✓ | ✓ | ✓ |
| ECDSA | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| ECDH |  |  |  | ✓ | ✓ |  | ✓ | ✓ |
| Ed25519<sup>1</sup> | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| X25519<sup>1</sup> |  |  |  | ✓ | ✓ |  | ✓ | ✓ |
| NODE-ED25519<sup>2</sup> | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| AES CTR |  | ✓ |  |  | ✓ | ✓ | ✓ | ✓ |
| AES CBC |  | ✓ |  |  | ✓ | ✓ | ✓ | ✓ |
| AES GCM |  | ✓ |  |  | ✓ | ✓ | ✓ | ✓ |
| AES KW |  |  |  |  | ✓ | ✓ | ✓ | ✓ |
| HMAC | ✓ |  |  |  | ✓ |  | ✓ | ✓ |
| SHA 1 |  |  | ✓ |  |  |  |  |  |
| SHA 256 |  |  | ✓ |  |  |  |  |  |
| SHA 384 |  |  | ✓ |  |  |  |  |  |
| SHA 512 |  |  | ✓ |  |  |  |  |  |
| MD5<sup>3</sup> |  |  | ✓ |  |  |  |  |  |
| HKDF |  |  |  | ✓ |  |  |  | ✓ |
| PBKDF2 |  |  |  | ✓ |  |  |  | ✓ |

**Footnotes:**

1. Algorithms as specified in the [Secure Curves API](https://wicg.github.io/webcrypto-secure-curves).
2. Legacy non-standard EdDSA is supported for the Ed25519 curve in addition to the Secure Curves version. Since this algorithm is non-standard, note the following while using it:
   * Use NODE-ED25519 as the algorithm and `namedCurve` parameters.
   * Unlike Node.js, Cloudflare will not support raw import of private keys.
   * The algorithm implementation may change over time. While Cloudflare cannot guarantee it at this time, Cloudflare will strive to maintain backward compatibility and compatibility with Node.js's behavior. Any notable compatibility notes will be communicated in release notes and via this developer documentation.
3. MD5 is not part of the WebCrypto standard but is supported in Cloudflare Workers for interacting with legacy systems that require MD5. MD5 is considered a weak algorithm. Do not rely upon MD5 for security.
*** ## Related resources * [SubtleCrypto documentation on MDN](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto) * [SubtleCrypto documentation as part of the W3C Web Crypto API specification](https://www.w3.org/TR/WebCryptoAPI//#subtlecrypto-interface) * [Example: signing requests](/workers/examples/signing-requests/) --- # Web standards URL: https://developers.cloudflare.com/workers/runtime-apis/web-standards/ *** ## JavaScript standards The Cloudflare Workers runtime is [built on top of the V8 JavaScript and WebAssembly engine](/workers/reference/how-workers-works/). The Workers runtime is updated at least once a week, to at least the version of V8 that is currently used by Google Chrome's stable release. This means you can safely use the latest JavaScript features, with no need for transpilers. All of the [standard built-in objects](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference) supported by the current Google Chrome stable release are supported, with a few notable exceptions: * For security reasons, the following are not allowed: * `eval()` * `new Function` * [`WebAssembly.compile`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/compile_static) * [`WebAssembly.compileStreaming`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/compileStreaming_static) * `WebAssembly.instantiate` with a [buffer parameter](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiate_static#primary_overload_%E2%80%94_taking_wasm_binary_code) * [`WebAssembly.instantiateStreaming`](https://developer.mozilla.org/en-US/docs/WebAssembly/JavaScript_interface/instantiateStreaming_static) * `Date.now()` returns the time of the last I/O; it does not advance during 
code execution. *** ## Web standards and global APIs The following methods are available per the [Worker Global Scope](https://developer.mozilla.org/en-US/docs/Web/API/WorkerGlobalScope): ### Base64 utility methods * atob() * Decodes a string of data which has been encoded using base-64 encoding. * btoa() * Creates a base-64 encoded ASCII string from a string of binary data. ### Timers * setInterval() * Schedules a function to execute every time a given number of milliseconds elapses. * clearInterval() * Cancels the repeated execution set using [`setInterval()`](https://developer.mozilla.org/en-US/docs/Web/API/setInterval). * setTimeout() * Schedules a function to execute in a given amount of time. * clearTimeout() * Cancels the delayed execution set using [`setTimeout()`](https://developer.mozilla.org/en-US/docs/Web/API/setTimeout). :::note Timers are only available inside of [the Request Context](/workers/runtime-apis/request/#the-request-context). ::: ### `performance.timeOrigin` and `performance.now()` * performance.timeOrigin * Returns the high resolution time origin. Workers uses the UNIX epoch as the time origin, meaning that `performance.timeOrigin` will always return `0`. * performance.now() * Returns a `DOMHighResTimeStamp` representing the number of milliseconds elapsed since `performance.timeOrigin`. Note that Workers intentionally reduces the precision of `performance.now()` such that it returns the time of the last I/O and does not advance during code execution. Effectively, because of this, and because `performance.timeOrigin` is always `0`, `performance.now()` will always equal `Date.now()`, yielding a consistent view of the passage of time within a Worker.
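The Base64 utilities described above round-trip a string through its base-64 encoding. A small sketch:

```javascript
// btoa() encodes a binary string to base64; atob() decodes it back.
const encoded = btoa('hello'); // 'aGVsbG8='
const decoded = atob(encoded); // 'hello'
```

Note that `btoa()` operates on binary strings; to encode arbitrary Unicode text, first convert it to bytes (for example, with `TextEncoder`).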
### `EventTarget` and `Event` The [`EventTarget`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget) and [`Event`](https://developer.mozilla.org/en-US/docs/Web/API/Event) API allow objects to publish and subscribe to events. ### `AbortController` and `AbortSignal` The [`AbortController`](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) and [`AbortSignal`](https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal) APIs provide a common model for canceling asynchronous operations. ### Fetch global * fetch() * Starts the process of fetching a resource from the network. Refer to [Fetch API](/workers/runtime-apis/fetch/). :::note The Fetch API is only available inside of [the Request Context](/workers/runtime-apis/request/#the-request-context). ::: *** ## Encoding API Both `TextEncoder` and `TextDecoder` support UTF-8 encoding/decoding. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/Encoding_API). The [`TextEncoderStream`](https://developer.mozilla.org/en-US/docs/Web/API/TextEncoderStream) and [`TextDecoderStream`](https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream) classes are also available. *** ## URL API The URL API supports URLs conforming to HTTP and HTTPS schemes. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/URL) :::note The default URL class behavior differs from the URL Spec documented above. A new spec-compliant implementation of the URL class can be enabled using the `url_standard` [compatibility flag](/workers/configuration/compatibility-flags/). 
::: *** ## Compression Streams The `CompressionStream` and `DecompressionStream` classes support the deflate, deflate-raw and gzip compression methods. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/Compression_Streams_API) *** ## URLPattern API The `URLPattern` API provides a mechanism for matching URLs based on a convenient pattern syntax. [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/API/URLPattern). *** ## `Intl` The `Intl` API allows you to format dates, times, numbers, and more to the format that is used by a provided locale (language and region). [Refer to the MDN documentation for more information](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl). *** ## `navigator.userAgent` When the [`global_navigator`](/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [`navigator.userAgent`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/userAgent) property is available with the value `'Cloudflare-Workers'`. This can be used, for example, to reliably determine that code is running within the Workers environment. ## Unhandled promise rejections The [`unhandledrejection`](https://developer.mozilla.org/en-US/docs/Web/API/Window/unhandledrejection_event) event is emitted by the global scope when a JavaScript promise is rejected without a rejection handler attached. The [`rejectionhandled`](https://developer.mozilla.org/en-US/docs/Web/API/Window/rejectionhandled_event) event is emitted by the global scope when a JavaScript promise rejection is handled late (after a rejection handler is attached to the promise after an `unhandledrejection` event has already been emitted). 
```js title="worker.js"
addEventListener('unhandledrejection', (event) => {
  console.log(event.promise); // The promise that was rejected.
  console.log(event.reason); // The value or Error with which the promise was rejected.
});

addEventListener('rejectionhandled', (event) => {
  console.log(event.promise); // The promise that was rejected.
  console.log(event.reason); // The value or Error with which the promise was rejected.
});
```

*** ## `navigator.sendBeacon(url[, data])` When the [`global_navigator`](/workers/configuration/compatibility-flags/#global-navigator) compatibility flag is set, the [`navigator.sendBeacon(...)`](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/sendBeacon) API is available to send an HTTP `POST` request containing a small amount of data to a web server. This API is intended as a means of transmitting analytics or diagnostics information asynchronously on a best-effort basis. For example, you can replace:

```js
const promise = fetch('https://example.com', { method: 'POST', body: 'hello world' });
ctx.waitUntil(promise);
```

with `navigator.sendBeacon(...)`:

```js
navigator.sendBeacon('https://example.com', 'hello world');
```

--- # Billing and Limitations URL: https://developers.cloudflare.com/workers/static-assets/billing-and-limitations/ ## Billing Requests to a project with static assets can either return static assets or invoke the Worker script, depending on whether the request [matches a static asset or not](/workers/static-assets/routing/). Requests to static assets are free and unlimited. Requests to the Worker script (for example, in the case of SSR content) are billed according to Workers pricing. Refer to [pricing](/workers/platform/pricing/#example-2) for an example. There is no additional cost for storing assets.
## Limitations See the [Platform Limits](/workers/platform/limits/#static-assets). ## Troubleshooting - `assets.bucket is a required field` — if you see this error, update Wrangler to version `3.78.10` or later. `bucket` is not a required field. --- # WebSockets URL: https://developers.cloudflare.com/workers/runtime-apis/websockets/ ## Background WebSockets allow you to communicate in real time with your Cloudflare Workers serverless functions. For a complete example, refer to [Using the WebSockets API](/workers/examples/websockets/). :::note If your application needs to coordinate among multiple WebSocket connections, such as a chat room or game match, you will need clients to send messages to a single point of coordination. Durable Objects provide a single point of coordination for Cloudflare Workers, and are often used in parallel with WebSockets to persist state over multiple clients and connections. In this case, refer to [Durable Objects](/durable-objects/) to get started, and prefer using the Durable Objects' extended [WebSockets API](/durable-objects/best-practices/websockets/). ::: ## Constructor

```js
// { 0: WebSocket, 1: WebSocket }
let websocketPair = new WebSocketPair();
```

The WebSocketPair returned from this constructor is an Object, with two WebSockets at keys `0` and `1`. These WebSockets are commonly referred to as `client` and `server`. The below example combines `Object.values` and ES6 destructuring to retrieve the WebSockets as `client` and `server`:

```js
let [client, server] = Object.values(new WebSocketPair());
```

## Methods ### accept * accept() * Accepts the WebSocket connection and begins terminating requests for the WebSocket on Cloudflare's global network. This effectively enables the Workers runtime to begin responding to and handling WebSocket requests.
### addEventListener * addEventListener(eventWebSocketEvent, callbackFunctionFunction) * Add callback functions to be executed when an event has occurred on the WebSocket. #### Parameters * `event` WebSocketEvent * The WebSocket event (refer to [Events](/workers/runtime-apis/websockets/#events)) to listen to. * callbackFunction(messageMessage) Function * A function to be called when the WebSocket responds to a specific event. ### close * close(codenumber, reasonstring) * Close the WebSocket connection. #### Parameters * codeinteger optional * An integer indicating the close code sent by the server. This should match an option from the [list of status codes](https://developer.mozilla.org/en-US/docs/Web/API/CloseEvent#status_codes) provided by the WebSocket spec. * reasonstring optional * A human-readable string indicating why the WebSocket connection was closed. ### send * send(messagestring | ArrayBuffer | ArrayBufferView) * Send a message to the other WebSocket in this WebSocket pair. #### Parameters * messagestring * The message to send down the WebSocket connection to the corresponding client. This should be a string or something coercible into a string; for example, strings and numbers will be simply cast into strings, but objects and arrays should be cast to JSON strings using JSON.stringify, and parsed in the client. *** ## Events * close * An event indicating the WebSocket has closed. * error * An event indicating there was an error with the WebSocket. * message * An event indicating a new message received from the client, including the data passed by the client. :::note WebSocket messages received by a Worker have a size limit of 1 MiB (1048576). If a larger message is sent, the WebSocket will be automatically closed with a `1009` "Message is too large" response. ::: ## Types ### Message * `data` any - The data passed back from the other WebSocket in your pair. * `type` string - Defaults to `message`. 
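As described under `send()`, structured data should be serialized with `JSON.stringify` before sending and parsed on the receiving side. The following is a minimal sketch of this convention (the `encodeMessage`/`decodeMessage` helpers are hypothetical, not part of the WebSocket API):

```javascript
// Hypothetical helpers for framing WebSocket messages as JSON strings.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// In a Worker you might call: server.send(encodeMessage('chat', { text: 'hi' }));
// and on the client: const msg = decodeMessage(event.data);
```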
*** ## Related resources * [Mozilla Developer Network's (MDN) documentation on the WebSocket class](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) * [Our WebSocket template for building applications on Workers using WebSockets](https://github.com/cloudflare/websocket-template) --- # Configuration and Bindings URL: https://developers.cloudflare.com/workers/static-assets/binding/ import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs, WranglerConfig, } from "~/components"; Configuring a Worker with assets requires specifying a [directory](/workers/static-assets/binding/#directory) and, optionally, an [assets binding](/workers/static-assets/binding/), in your Worker's Wrangler file. The [assets binding](/workers/static-assets/binding/) allows you to dynamically fetch assets from within your Worker script (e.g. `env.ASSETS.fetch()`), similarly to how you might make a `fetch()` call with a [Service binding](/workers/runtime-apis/bindings/service-bindings/http/). Only one collection of static assets can be configured in each Worker. ## `directory` The folder of static assets to be served. For many frameworks, this is the `./public/`, `./dist/`, or `./build/` folder.

```toml title="wrangler.toml"
name = "my-worker"
compatibility_date = "2024-09-19"
assets = { directory = "./public/" }
```

### Ignoring assets Sometimes there are files in the asset directory that should not be uploaded. In this case, create a `.assetsignore` file in the root of the assets directory. This file takes the same format as `.gitignore`. Wrangler will not upload asset files that match lines in this file. **Example** You are migrating from a Pages project where the assets directory is `dist`. You do not want to upload the server-side Worker code nor Pages configuration files as public client-side assets.
Add the following `.assetsignore` file:

```txt
_worker.js
_redirects
_headers
```

Now Wrangler will not upload these files as client-side assets when deploying the Worker. ## `run_worker_first` Controls whether the Worker script is invoked even for requests that would otherwise have matched a static asset. `run_worker_first = false` (default) will serve any static asset matching a request, while `run_worker_first = true` will unconditionally [invoke your Worker script](/workers/static-assets/routing/worker-script/#run-your-worker-script-first).

```toml title="wrangler.toml"
name = "my-worker"
compatibility_date = "2024-09-19"
main = "src/index.ts"

# The following configuration unconditionally invokes the Worker script at
# `src/index.ts`, which can programmatically fetch assets via the ASSETS binding
[assets]
directory = "./public/"
binding = "ASSETS"
run_worker_first = true
```

## `binding` Configuring the optional [binding](/workers/runtime-apis/bindings) gives you access to the collection of assets from within your Worker script.

```toml title="wrangler.toml"
name = "my-worker"
main = "./src/index.js"
compatibility_date = "2024-09-19"

[assets]
directory = "./public/"
binding = "ASSETS"
```

In the example above, assets would be available through `env.ASSETS`. ### Runtime API Reference #### `fetch()` **Parameters** - `request: Request | URL | string` Pass a [Request object](/workers/runtime-apis/request/), URL object, or URL string. Requests made through this method have `html_handling` and `not_found_handling` configuration applied to them. **Response** - `Promise<Response>` Returns a static asset response for the given request. **Example** Your dynamic code can make new requests, or forward incoming requests, to your project's static assets using the assets binding. For example, `env.ASSETS.fetch(request)`, `env.ASSETS.fetch(new URL('https://assets.local/my-file'))` or `env.ASSETS.fetch('https://assets.local/my-file')`.
Take the following example that configures a Worker script to return a response under all requests headed for `/api/`. Otherwise, the Worker script will pass the incoming request through to the asset binding. In this case, because a Worker script is only invoked when the requested route has not matched any static assets, this will always evaluate [`not_found_handling`](/workers/static-assets/routing/) behavior.

```js
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    if (url.pathname.startsWith("/api/")) {
      // TODO: Add your custom /api/* logic here.
      return new Response("Ok");
    }

    // Passes the incoming request through to the assets binding.
    // No asset matched this request, so this will evaluate `not_found_handling` behavior.
    return env.ASSETS.fetch(request);
  },
};
```

```ts
interface Env {
  ASSETS: Fetcher;
}

export default {
  async fetch(request, env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname.startsWith("/api/")) {
      // TODO: Add your custom /api/* logic here.
      return new Response("Ok");
    }

    // Passes the incoming request through to the assets binding.
    // No asset matched this request, so this will evaluate `not_found_handling` behavior.
    return env.ASSETS.fetch(request);
  },
} satisfies ExportedHandler<Env>;
```

## Routing configuration For the various static asset routing configuration options, refer to [Routing](/workers/static-assets/routing/). ## Smart Placement [Smart Placement](/workers/configuration/smart-placement/) can be used to place a Worker's code close to your back-end infrastructure. Smart Placement will only have an effect if you specified a `main`, pointing to your Worker code. ### Smart Placement with Worker Code First If you want to run your [Worker code ahead of assets](/workers/static-assets/routing/worker-script/#run-your-worker-script-first) by setting `run_worker_first=true`, all requests must first travel to your Smart-Placed Worker. As a result, you may experience increased latency for asset requests.
Use Smart Placement with `run_worker_first=true` when you need to integrate with other backend services, authenticate requests before serving any assets, or modify your assets before serving them. If you want some assets served as quickly as possible to the user, but others served behind a smart-placed Worker, consider splitting your app into multiple Workers and [using service bindings to connect them](/workers/configuration/smart-placement/#best-practices). ### Smart Placement with Assets First Enabling Smart Placement with `run_worker_first=false` (or not specifying it) lets you serve assets from as close as possible to your users, while moving your Worker logic to run where it is most efficient (such as near a database). Use Smart Placement with `run_worker_first=false` (or not specifying it) when prioritizing fast asset delivery. This will not impact the [default routing behavior](/workers/static-assets/routing/). --- # Direct Uploads URL: https://developers.cloudflare.com/workers/static-assets/direct-upload/ import { Badge, Description, FileTree, InlineBadge, Render, TabItem, Tabs, TypeScriptExample, } from "~/components"; import { Icon } from "astro-icon/components"; :::note Directly uploading assets via APIs is an advanced approach which, unless you are building a programmatic integration, most users will not need. Instead, we encourage you to deploy your Worker with [Wrangler](/workers/static-assets/get-started/#1-create-a-new-worker-project-using-the-cli). ::: Our API empowers users to upload and include static assets as part of a Worker. These static assets can be served for free, and users can also fetch assets through an optional [assets binding](/workers/static-assets/binding/) to power more advanced applications. This guide describes the process for attaching assets to your Worker directly with the API.
```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest
POST /client/v4/accounts/:accountId/workers/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files
POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version
PUT /client/v4/accounts/:accountId/workers/scripts/:scriptName ```
```mermaid sequenceDiagram participant User participant Workers API User<<->>Workers API: Submit manifest
POST /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName/assets-upload-session User<<->>Workers API: Upload files
POST /client/v4/accounts/:accountId/workers/assets/upload?base64=true User<<->>Workers API: Upload script version
PUT /client/v4/accounts/:accountId/workers/dispatch/namespaces/:dispatchNamespace/scripts/:scriptName ```
The asset upload flow can be distilled into three distinct phases: 1. Registration of a manifest 2. Upload of the assets 3. Deployment of the Worker ## Upload manifest The asset manifest is a ledger which keeps track of files we want to use in our Worker. This manifest is used to track assets associated with each Worker version, and eliminate the need to upload unchanged files prior to a new upload. The [manifest upload request](/api/resources/workers/subresources/scripts/subresources/assets/subresources/upload/methods/create/) describes each file which we intend to upload. Each file is its own key representing the file path and name, and is an object which contains metadata about the file. `hash` represents a 32 hexadecimal character hash of the file, while `size` is the size (in bytes) of the file. ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer ' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` ```bash curl -X POST https://api.cloudflare.com/client/v4/accounts/{account_id}/workers/dispatch/namespaces/{dispatch_namespace}/scripts/{script_name}/assets-upload-session \ --header 'content-type: application/json' \ --header 'Authorization: Bearer ' \ --data '{ "manifest": { "/filea.html": { "hash": "08f1dfda4574284ab3c21666d1", "size": 12 }, "/fileb.html": { "hash": "4f1c1af44620d531446ceef93f", "size": 23 }, "/filec.html": { "hash": "54995e302614e0523757a04ec1", "size": 23 } } }' ``` The resulting response will contain a JWT, which provides authentication during file upload. The JWT is valid for one hour. 
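The manifest hashes shown above can be produced with Node's `crypto` module. A truncated SHA-256 is one workable scheme (it is also what the programmatic example later on this page uses); the API only requires a stable 32-hexadecimal-character fingerprint per file. `manifestEntry` is a hypothetical helper name:

```javascript
import { createHash } from "node:crypto";

// Build one manifest entry: a 32-hex-character hash of the file
// contents plus the file size in bytes.
function manifestEntry(contents) {
  const buffer = Buffer.from(contents);
  const hash = createHash("sha256").update(buffer).digest("hex").slice(0, 32);
  return { hash, size: buffer.length };
}
```

Collecting one such entry per file path yields the `manifest` object sent to the `assets-upload-session` endpoint.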
In addition to the JWT, the response instructs users how to optimally batch upload their files. These instructions are encoded in the `buckets` field. Each array in `buckets` contains a list of file hashes which should be uploaded together. Unmodified files will not be returned in the `buckets` field (as they do not need to be re-uploaded) if they have recently been uploaded in previous versions of your Worker. ```json { "result": { "jwt": "", "buckets": [ ["08f1dfda4574284ab3c21666d1", "4f1c1af44620d531446ceef93f"], ["54995e302614e0523757a04ec1"] ] }, "success": true, "errors": null, "messages": null } ``` :::note If all assets have been previously uploaded, `buckets` will be empty, and `jwt` will contain a completion token. Uploading files is not necessary, and you can skip directly to [uploading a new script or version](/workers/static-assets/direct-upload/#createdeploy-new-version). ::: ### Limitations - Each file must be under 25 MiB - The overall manifest must not contain more than 20,000 file entries ## Upload Static Assets The [file upload API](/api/resources/workers/subresources/assets/subresources/upload/methods/create/) requires files be uploaded using `multipart/form-data`. The contents of each file must be base64 encoded, and the `base64` query parameter in the URL must be set to `true`. The provided `Content-Type` header of each file part will be attached when eventually serving the file. If you wish to avoid sending a `Content-Type` header in your deployment, `application/null` may be sent at upload time. The `Authorization` header must be provided as a bearer token, using the JWT (upload token) from the aforementioned manifest upload call. Once every file in the manifest has been uploaded, a status code of 201 will be returned, with the `jwt` field present. This JWT is a final "completion" token which can be used to create a deployment of a Worker with this set of assets. This completion token is valid for 1 hour. 
## Create/Deploy New Version [Script](/api/resources/workers/subresources/scripts/methods/update/), [Version](/api/resources/workers/subresources/scripts/subresources/versions/methods/create/), and [Workers for Platforms script](/api/resources/workers_for_platforms/subresources/dispatch/subresources/namespaces/subresources/scripts/methods/update/) upload endpoints require specifying a metadata part in the form data. Here, we can provide the completion token from the previous (upload assets) step. ```bash title="Example Worker Metadata Specifying Completion Token" { "main_module": "main.js", "assets": { "jwt": "" }, "compatibility_date": "2021-09-14" } ``` If this is a Worker which already has assets, and you wish to re-use the existing set of assets, you do not have to specify the completion token again. Instead, you can pass the boolean `keep_assets` option. ```bash title="Example Worker Metadata Specifying keep_assets" { "main_module": "main.js", "keep_assets": true, "compatibility_date": "2021-09-14" } ``` Asset [routing configuration](/workers/wrangler/configuration/#assets) can be provided in the `assets` object, such as `html_handling` and `not_found_handling`. ```bash title="Example Worker Metadata Specifying Asset Configuration" { "main_module": "main.js", "assets": { "jwt": "", "config": { "html_handling": "auto-trailing-slash" } }, "compatibility_date": "2021-09-14" } ``` Optionally, an assets binding can be provided if you wish to fetch and serve assets from within your Worker code. ```bash title="Example Worker Metadata Specifying Asset Binding" { "main_module": "main.js", "assets": { ... }, "bindings": [ ... { "name": "ASSETS", "type": "assets" } ... 
] "compatibility_date": "2021-09-14" } ``` ## Programmatic Example ```ts import * as fs from "fs"; import * as path from "path"; import * as crypto from "crypto"; import { FormData, fetch } from "undici"; import "node:process"; const accountId: string = ""; // Replace with your actual account ID const filesDirectory: string = "assets"; // Adjust to your assets directory const scriptName: string = "my-new-script"; // Replace with desired script name const dispatchNamespace: string = ""; // Replace with a dispatch namespace if using Workers for Platforms interface FileMetadata { hash: string; size: number; } interface UploadSessionData { uploadToken: string; buckets: string[][]; fileMetadata: Record; } interface UploadResponse { result: { jwt: string; buckets: string[][]; }; success: boolean; errors: any; messages: any; } // Function to calculate the SHA-256 hash of a file and truncate to 32 characters function calculateFileHash(filePath: string): { fileHash: string; fileSize: number; } { const hash = crypto.createHash("sha256"); const fileBuffer = fs.readFileSync(filePath); hash.update(fileBuffer); const fileHash = hash.digest("hex").slice(0, 32); // Grab the first 32 characters const fileSize = fileBuffer.length; return { fileHash, fileSize }; } // Function to gather file metadata for all files in the directory function gatherFileMetadata(directory: string): Record { const files = fs.readdirSync(directory); const fileMetadata: Record = {}; files.forEach((file) => { const filePath = path.join(directory, file); const { fileHash, fileSize } = calculateFileHash(filePath); fileMetadata["/" + file] = { hash: fileHash, size: fileSize, }; }); return fileMetadata; } function findMatch( fileHash: string, fileMetadata: Record, ): string { for (let prop in fileMetadata) { const file = fileMetadata[prop] as FileMetadata; if (file.hash === fileHash) { return prop; } } throw new Error("unknown fileHash"); } // Function to upload a batch of files using the JWT from the first 
response async function uploadFilesBatch( jwt: string, fileHashes: string[][], fileMetadata: Record<string, FileMetadata>, ): Promise<string> { const form = new FormData(); for (const bucket of fileHashes) { bucket.forEach((fileHash) => { const fullPath = findMatch(fileHash, fileMetadata); const relPath = filesDirectory + "/" + path.basename(fullPath); const fileBuffer = fs.readFileSync(relPath); const base64Data = fileBuffer.toString("base64"); // Convert file to Base64 form.append( fileHash, new File([base64Data], fileHash, { type: "text/html", // Modify Content-Type header based on type of file }), fileHash, ); }); const response = await fetch( `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/assets/upload?base64=true`, { method: "POST", headers: { Authorization: `Bearer ${jwt}`, }, body: form, }, ); const data = (await response.json()) as UploadResponse; if (data && data.result.jwt) { return data.result.jwt; } } throw new Error("Should have received completion token"); } async function scriptUpload(completionToken: string): Promise<void> { const form = new FormData(); // Configure metadata form.append( "metadata", JSON.stringify({ main_module: "index.mjs", compatibility_date: "2022-03-11", assets: { jwt: completionToken, // Provide the completion token from file uploads }, bindings: [{ name: "ASSETS", type: "assets" }], // Optional assets binding to fetch from user worker }), ); // Configure (optional) user worker form.append( "index.js", new File( [ "export default {async fetch(request, env) { return new Response('Hello world from user worker!'); }}", ], "index.mjs", { type: "application/javascript+module", }, ), ); const url = dispatchNamespace ? 
`https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}`; const response = await fetch(url, { method: "PUT", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, }, body: form, }); if (response.status != 200) { throw new Error("unexpected status code"); } } // Function to make the POST request to start the assets upload session async function startUploadSession(): Promise { const fileMetadata = gatherFileMetadata(filesDirectory); const requestBody = JSON.stringify({ manifest: fileMetadata, }); const url = dispatchNamespace ? `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/dispatch/namespaces/${dispatchNamespace}/scripts/${scriptName}/assets-upload-session` : `https://api.cloudflare.com/client/v4/accounts/${accountId}/workers/scripts/${scriptName}/assets-upload-session`; const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${process.env.CLOUDFLARE_API_TOKEN}`, "Content-Type": "application/json", }, body: requestBody, }); const data = (await response.json()) as UploadResponse; const jwt = data.result.jwt; return { uploadToken: jwt, buckets: data.result.buckets, fileMetadata, }; } // Begin the upload session by uploading a new manifest const { uploadToken, buckets, fileMetadata } = await startUploadSession(); // If all files are already uploaded, a completion token will be immediately returned. 
Otherwise, // we should upload the missing files let completionToken = uploadToken; if (buckets.length > 0) { completionToken = await uploadFilesBatch(uploadToken, buckets, fileMetadata); } // Once we have uploaded all of our files, we can upload a new script, and assets, with the completion token await scriptUpload(completionToken); ``` --- # Headers URL: https://developers.cloudflare.com/workers/static-assets/headers/ import { Render } from "~/components"; ## Default headers When serving static assets, Workers will attach some headers to the response by default. These are: - **`Content-Type`** A `Content-Type` header is attached to the response if one is provided during [the asset upload process](/workers/static-assets/direct-upload/). [Wrangler](/workers/wrangler/commands/#deploy) automatically determines the MIME type of the file, based on its extension. - **`Cache-Control: public, max-age=0, must-revalidate`** Sent when the request does not have an `Authorization` or `Range` header, this response header tells the browser that the asset can be cached, but that the browser should revalidate the freshness of the content every time before using it. This default behavior ensures good website performance for static pages, while still guaranteeing that stale content will never be served. - **`ETag`** This header complements the default `Cache-Control` header. Its value is a hash of the static asset file, and browsers can use this in subsequent requests with an `If-None-Match` header to check for freshness, without needing to re-download the entire file in the case of a match. - **`CF-Cache-Status`** This header indicates whether the asset was served from the cache (`HIT`) or not (`MISS`).[^1] Cloudflare reserves the right to attach new headers to static asset responses at any time in order to improve performance or harden the security of your Worker application. 
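The `ETag`/`If-None-Match` handshake these defaults enable can be sketched as a pure function. This is a conceptual model of HTTP revalidation, not Cloudflare's implementation:

```javascript
// The browser replays the ETag it cached in an If-None-Match header;
// if it still matches the asset's current hash, a 304 response lets
// the browser reuse its cached copy without re-downloading the body.
function revalidate(ifNoneMatch, currentEtag) {
  if (ifNoneMatch === currentEtag) {
    return { status: 304 }; // Not Modified
  }
  return { status: 200, etag: currentEtag }; // full response, new ETag
}
```

Because `max-age=0, must-revalidate` forces this check on every use, a changed asset is picked up immediately while an unchanged one costs only a small conditional request.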
[^1]: Due to a technical limitation that we hope to address in the future, the `CF-Cache-Status` header is not always entirely accurate. It is possible for false positives and false negatives to occur. This should be rare. In the meantime, this header should be considered as returning a "probabilistic" result. --- # Get Started URL: https://developers.cloudflare.com/workers/static-assets/get-started/ import { Badge, Description, InlineBadge, Render, PackageManagers, } from "~/components"; For most front-end applications, you'll want to use a framework. Workers supports a number of popular [frameworks](/workers/frameworks/) that come with ready-to-use components, a pre-defined and structured architecture, and community support. View [framework specific guides](/workers/frameworks/) to get started using a framework. Alternatively, you may prefer to build your website from scratch if: - You're interested in learning by implementing core functionalities on your own. - You're working on a simple project where you might not need a framework. - You want to optimize for performance by minimizing external dependencies. - You require complete control over every aspect of the application. - You want to build your own framework. This guide will walk you through setting up and deploying a static site or a full-stack application without a framework on Workers. ## Deploy a static site This guide will walk you through setting up and deploying a static site on Workers. ### 1. Create a new Worker project using the CLI [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project: After setting up your project, change your directory by running the following command: ```sh cd my-static-site ``` ### 2. 
Develop locally After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server. This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The [`wrangler deploy`](/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy ``` :::note Learn about how assets are configured and how routing works from [Routing configuration](/workers/static-assets/routing/). ::: ## Deploy a full-stack application This guide will walk you through setting up and deploying dynamic and interactive server-side rendered (SSR) applications on Cloudflare Workers. When building a full-stack application, you can use any [Workers bindings](/workers/runtime-apis/bindings/), [including assets' own](/workers/static-assets/binding/), to interact with resources on the Cloudflare Developer Platform. ### 1. Create a new Worker project [C3 (`create-cloudflare-cli`)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare. Open a terminal window and run C3 to create your Worker project: After setting up your project, change your directory by running the following command: ```sh cd my-dynamic-site ``` ### 2. Develop locally After you have created your Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server. 
This will allow you to preview your project locally during development. ```sh npx wrangler dev ``` ### 3. Modify your project With your new project generated and running, you can begin to write and edit your project: - The `src/index.ts` file is populated with sample code. Modify its content to change the server-side behavior of your Worker. - The `public/index.html` file is populated with sample code. Modify its content, or anything else in `public/`, to change the static assets of your Worker. Then, save the files and reload the page. Your project's output will have changed based on your modifications. ### 4. Deploy your project Your project can be deployed to a `*.workers.dev` subdomain or a [Custom Domain](/workers/configuration/routing/custom-domains/), from your own machine or from any CI/CD system, including [Cloudflare's own](/workers/ci-cd/builds/). The [`wrangler deploy`](/workers/wrangler/commands/#deploy) command will build and deploy your project. If you're using CI, ensure you update your ["deploy command"](/workers/ci-cd/builds/configuration/#build-settings) configuration appropriately. ```sh npx wrangler deploy ``` :::note Learn about how assets are configured and how routing works from [Routing configuration](/workers/static-assets/routing/). ::: --- # Static Assets URL: https://developers.cloudflare.com/workers/static-assets/ import { Aside, Badge, Card, CardGrid, Details, Description, InlineBadge, Icon, DirectoryListing, FileTree, Render, TabItem, Tabs, Feature, LinkButton, LinkCard, Stream, Flex, WranglerConfig, Steps, } from "~/components"; You can upload static assets (HTML, CSS, images and other files) as part of your Worker, and Cloudflare will handle caching and serving them to web browsers. ### How it works When you deploy your project, Cloudflare deploys both your Worker code and your static assets in a single operation. 
This deployment operates as a tightly integrated "unit" running across Cloudflare's network, combining static file hosting, custom logic, and global caching. The **assets directory** specified in your [Wrangler configuration file](/workers/wrangler/configuration/#assets) is central to this design. During deployment, Wrangler automatically uploads the files from this directory to Cloudflare's infrastructure. Once deployed, requests for these assets are routed efficiently to locations closest to your users. ```toml {3-4} name = "my-spa" main = "src/index.js" compatibility_date = "2025-01-01" [assets] directory = "./dist" binding = "ASSETS" ``` :::note If you are using the [Cloudflare Vite plugin](/workers/vite-plugin/), you do not need to specify `assets.directory`. For more information about using static assets with the Vite plugin, refer to the [plugin documentation](/workers/vite-plugin/reference/static-assets/). ::: By adding an [**assets binding**](/workers/static-assets/binding/#binding), you can directly fetch and serve assets within your Worker code. ```js {13} // index.js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return new Response(JSON.stringify({ name: "Cloudflare" }), { headers: { "Content-Type": "application/json" }, }); } return env.ASSETS.fetch(request); }, }; ``` ### Routing behavior By default, if a requested URL matches a file in the static assets directory, that file will always be served — without running Worker code. If no matching asset is found and a Worker is configured, the request will be processed by the Worker instead. - If no Worker is set up, the [`not_found_handling`](/workers/static-assets/routing/) setting in your Wrangler configuration determines what happens next. By default, a `404 Not Found` response is returned. - If a Worker is configured and a request does not match a static asset, the Worker will handle the request. 
The Worker can choose to pass the request to the asset binding (through `env.ASSETS.fetch()`), following the `not_found_handling` rules. You can configure and override this default routing behavior. For example, if you have a Single Page Application and want to serve `index.html` for all unmatched routes, you can set `not_found_handling = "single-page-application"`: ```toml [assets] directory = "./dist" not_found_handling = "single-page-application" ``` If you want the Worker code to execute before serving an asset (for example, to protect an asset behind authentication), you can set `run_worker_first = true`. ```toml [assets] directory = "./dist" run_worker_first = true ``` ### Caching behavior Cloudflare provides automatic caching for static assets across its network, ensuring fast delivery to users worldwide. When a static asset is requested, it is automatically cached for future requests. - **First Request:** When an asset is requested for the first time, it is fetched from storage and cached at the nearest Cloudflare location. - **Subsequent Requests:** If a request for the same asset reaches a data center that does not have it cached, Cloudflare's [tiered caching system](/cache/how-to/tiered-cache/) allows it to be retrieved from a nearby cache rather than going back to storage. This improves cache hit ratio, reduces latency, and reduces unnecessary origin fetches. ## Try it out #### 1. Create a new Worker project ```sh npm create cloudflare@latest -- my-dynamic-site ``` **For setup, select the following options**: - For _What would you like to start with?_, choose `Framework`. - For _Which framework would you like to use?_, choose `React`. - For _Which language do you want to use?_, choose `TypeScript`. - For _Do you want to use git for version control_?, choose `Yes`. - For _Do you want to deploy your application_?, choose `No` (we will be making some changes before deploying). 
After setting up the project, change the directory by running the following command: ```sh cd my-dynamic-site ``` #### 2. Build project Run the following command to build the project: ```sh npm run build ``` We should now see a new directory `/dist` in our project, which contains the compiled assets: - package.json - index.html - ... - dist Asset directory - ... Compiled assets - src - ... - ... In the next step, we use a Wrangler configuration file to allow Cloudflare to locate our compiled assets. #### 3. Add a Wrangler configuration file (`wrangler.toml` or `wrangler.json`) ```toml name = "my-spa" compatibility_date = "2025-01-01" [assets] directory = "./dist" ``` **Notice the `[assets]` block**: here we have specified our directory where Cloudflare can find our compiled assets (`./dist`). Our project structure should now look like this: - package.json - index.html - **wrangler.toml** Wrangler configuration - ... - dist Asset directory - ... Compiled assets - src - ... - ... #### 4. Deploy with Wrangler ```sh npx wrangler deploy ``` Our project is now deployed on Workers! But we can take this even further, by adding an **API Worker**. #### 5. Adjust our Wrangler configuration Replace the file contents of our Wrangler configuration with the following: ```toml name = "my-spa" main = "src/api/index.js" compatibility_date = "2025-01-01" [assets] directory = "./dist" binding = "ASSETS" not_found_handling = "single-page-application" ``` We have edited the Wrangler file in the following ways: - Added `main = "src/api/index.js"` to tell Cloudflare where to find our Worker code. - Added an `ASSETS` binding, which our Worker code can use to fetch and serve assets. - Enabled routing for Single Page Applications, which ensures that unmatched routes (such as `/dashboard`) serve our `index.html`. :::note By default, Cloudflare serves a `404 Not Found` to unmatched routes. 
To have the frontend handle routing instead of the server, you must enable `not_found_handling = "single-page-application"`. ::: #### 6. Create a new directory `/api`, and add an `index.js` file Copy the contents below into the `index.js` file. ```js {13} // api/index.js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname.startsWith("/api/")) { return new Response(JSON.stringify({ name: "Cloudflare" }), { headers: { "Content-Type": "application/json" }, }); } return env.ASSETS.fetch(request); }, }; ``` **Consider what this Worker does:** - Our Worker receives an HTTP request and extracts the URL. - If the request is for an API route (`/api/...`), it returns a JSON response. - Otherwise, it serves static assets from our directory (`env.ASSETS`). #### 7. Call the API from the client Edit `src/App.tsx` so that it includes an additional button that calls the API, and sets some state. Replace the file contents with the following code: ```js {9,25, 33-47} // src/App.tsx import { useState } from "react"; import reactLogo from "./assets/react.svg"; import viteLogo from "/vite.svg"; import "./App.css"; function App() { const [count, setCount] = useState(0); const [name, setName] = useState("unknown"); return ( <>
      <div>
        <a href="https://vite.dev" target="_blank">
          <img src={viteLogo} className="logo" alt="Vite logo" />
        </a>
        <a href="https://react.dev" target="_blank">
          <img src={reactLogo} className="logo react" alt="React logo" />
        </a>
      </div>
      <h1>Vite + React</h1>
      <div className="card">
        <button onClick={() => setCount((count) => count + 1)}>
          count is {count}
        </button>
        <p>
          Edit <code>src/App.tsx</code> and save to test HMR
        </p>
      </div>
      <div className="card">
        <button
          onClick={() => {
            fetch("/api/")
              .then((res) => res.json() as Promise<{ name: string }>)
              .then((data) => setName(data.name));
          }}
        >
          Name from API is: {name}
        </button>
        <p>
          Edit <code>api/index.ts</code> to change the name
        </p>
      </div>
      <p className="read-the-docs">
        Click on the Vite and React logos to learn more
      </p>
    </>
); } export default App; ``` Before deploying again, we need to rebuild our project: ```sh npm run build ``` #### 8. Deploy with Wrangler ```sh npx wrangler deploy ``` Now we can see a new button, **Name from API**, and clicking it shows our API response: **Cloudflare**! ## Learn more --- # Redirects URL: https://developers.cloudflare.com/workers/static-assets/redirects/ import { Render } from "~/components"; --- # Testing URL: https://developers.cloudflare.com/workers/testing/ import { Render, LinkButton } from "~/components"; The Workers platform has a variety of ways to test your applications, depending on your requirements. We recommend using the [Vitest integration](/workers/testing/vitest-integration), which allows you to run tests _inside_ the Workers runtime and unit test individual functions within your Worker. Get started with Vitest ## Testing comparison matrix However, if you don't use Vitest, both [Miniflare's API](/workers/testing/miniflare/writing-tests) and the [`unstable_startWorker()`](/workers/wrangler/api/#unstable_startworker) API provide options for testing your Worker in any testing framework. 
| Feature | [Vitest integration](/workers/testing/vitest-integration) | [`unstable_startWorker()`](/workers/testing/unstable_startworker/) | [Miniflare's API](/workers/testing/miniflare/writing-tests/) | | ------------------------------------- | --------------------------------------------------------- | ------------------------------------------------------------------ | ------------------------------------------------------------ | | Unit testing | ✅ | ❌ | ❌ | | Integration testing | ✅ | ✅ | ✅ | | Loading Wrangler configuration files | ✅ | ✅ | ❌ | | Use bindings directly in tests | ✅ | ❌ | ✅ | | Isolated per-test storage | ✅ | ❌ | ❌ | | Outbound request mocking | ✅ | ❌ | ✅ | | Multiple Worker support | ✅ | ✅ | ✅ | | Direct access to Durable Objects | ✅ | ❌ | ❌ | | Run Durable Object alarms immediately | ✅ | ❌ | ❌ | | List Durable Objects | ✅ | ❌ | ❌ | | Testing service Workers | ❌ | ✅ | ✅ | --- # Wrangler's unstable_startWorker() URL: https://developers.cloudflare.com/workers/testing/unstable_startworker/ import { Render } from "~/components"; import { LinkButton } from "@astrojs/starlight/components"; :::note For most users, Cloudflare recommends using the Workers Vitest integration. If you have been using `unstable_dev()`, refer to the [Migrate from `unstable_dev()` guide](/workers/testing/vitest-integration/migration-guides/migrate-from-unstable-dev/). ::: :::caution `unstable_startWorker()` is an experimental API subject to breaking changes. ::: If you do not want to use Vitest, consider using [Wrangler's `unstable_startWorker()` API](/workers/wrangler/api/#unstable_startworker). This API exposes the internals of Wrangler's dev server, and allows you to customise how it runs. Compared to using [Miniflare directly for testing](/workers/testing/miniflare/writing-tests/), you can pass in a Wrangler configuration file, and it will automatically load the configuration for you. 
This example uses `node:test`, but should apply to any testing framework: ```ts import assert from "node:assert"; import test, { after, before, describe } from "node:test"; import { unstable_startWorker } from "wrangler"; describe("worker", () => { let worker; before(async () => { worker = await unstable_startWorker({ config: "wrangler.json" }); }); test("hello world", async () => { assert.strictEqual( await (await worker.fetch("http://example.com")).text(), "Hello world", ); }); after(async () => { await worker.dispose(); }); }); ``` --- # Get started URL: https://developers.cloudflare.com/workers/vite-plugin/get-started/ import { PackageManagers, WranglerConfig } from "~/components"; :::note This guide demonstrates creating a standalone Worker from scratch. If you would instead like to create a new application from a ready-to-go template, refer to the [React Router](/workers/frameworks/framework-guides/react-router/), [React](/workers/frameworks/framework-guides/react/) or [Vue](/workers/frameworks/framework-guides/vue/) framework guides. ::: ## Start with a basic `package.json` ```json title="package.json" { "name": "cloudflare-vite-get-started", "private": true, "version": "0.0.0", "type": "module", "scripts": { "dev": "vite dev", "build": "vite build", "preview": "npm run build && vite preview", "deploy": "npm run build && wrangler deploy" } } ``` :::note Ensure that you include `"type": "module"` in order to use ES modules by default. ::: ## Install the dependencies ## Create your Vite config file and include the Cloudflare plugin ```ts title="vite.config.ts" import { defineConfig } from "vite"; import { cloudflare } from "@cloudflare/vite-plugin"; export default defineConfig({ plugins: [cloudflare()], }); ``` The Cloudflare Vite plugin doesn't require any configuration by default and will look for a `wrangler.jsonc`, `wrangler.json` or `wrangler.toml` in the root of your application. 
Refer to the [API reference](/workers/vite-plugin/reference/api/) for configuration options. ## Create your Worker config file ```toml name = "cloudflare-vite-get-started" compatibility_date = "2025-04-03" main = "./src/index.ts" ``` The `name` field specifies the name of your Worker. By default, this is also used as the name of the Worker's Vite Environment (see [Vite Environments](/workers/vite-plugin/reference/vite-environments/) for more information). The `main` field specifies the entry file for your Worker code. For more information about the Worker configuration, see [Configuration](/workers/wrangler/configuration/). ## Create your Worker entry file ```ts title="src/index.ts" export default { fetch() { return new Response(`Running in ${navigator.userAgent}!`); }, }; ``` A request to this Worker will return **'Running in Cloudflare-Workers!'**, demonstrating that the code is running inside the Workers runtime. ## Dev, build, preview and deploy You can now start the Vite development server (`npm run dev`), build the application (`npm run build`), preview the built application (`npm run preview`), and deploy to Cloudflare (`npm run deploy`). --- # Vite plugin URL: https://developers.cloudflare.com/workers/vite-plugin/ The Cloudflare Vite plugin enables a full-featured integration between [Vite](https://vite.dev/) and the [Workers runtime](/workers/runtime-apis/). Your Worker code runs inside [workerd](https://github.com/cloudflare/workerd), matching the production behavior as closely as possible and providing confidence as you develop and deploy your applications. 
## Features - Uses the Vite [Environment API](https://vite.dev/guide/api-environment) to integrate Vite with the Workers runtime - Provides direct access to [Workers runtime APIs](/workers/runtime-apis/) and [bindings](/workers/runtime-apis/bindings/) - Builds your front-end assets for deployment to Cloudflare, enabling you to build static sites, SPAs, and full-stack applications - Official support for [React Router v7](https://reactrouter.com/) with server-side rendering - Leverages Vite's hot module replacement for consistently fast updates - Supports `vite preview` for previewing your build output in the Workers runtime prior to deployment ## Use cases - [React Router v7](https://reactrouter.com/) (support for more full-stack frameworks is coming soon) - Static sites, such as single-page applications, with or without an integrated backend API - Standalone Workers - Multi-Worker applications ## Get started To create a new application from a ready-to-go template, refer to the [React Router](/workers/frameworks/framework-guides/react-router/), [React](/workers/frameworks/framework-guides/react/) or [Vue](/workers/frameworks/framework-guides/vue/) framework guides. To create a standalone Worker from scratch, refer to [Get started](/workers/vite-plugin/get-started/). For a more in-depth look at adapting an existing Vite project and an introduction to key concepts, refer to the [Tutorial](/workers/vite-plugin/tutorial/). --- # Tutorial - React SPA with an API URL: https://developers.cloudflare.com/workers/vite-plugin/tutorial/ import { PackageManagers, WranglerConfig } from "~/components"; This tutorial takes you through the steps needed to adapt a Vite project to use the Cloudflare Vite plugin. Much of the content can also be applied to adapting existing Vite projects and to front-end frameworks other than React. 
:::note If you want to start a new app with a template already set up with Vite, React and the Cloudflare Vite plugin, refer to the [React framework guide](/workers/frameworks/framework-guides/react/). To create a standalone Worker, refer to [Get started](/workers/vite-plugin/get-started/). ::: ## Introduction In this tutorial, you will create a React SPA that can be deployed as a Worker with static assets. You will then add an API Worker that can be accessed from the front-end code. You will develop, build, and preview the application using Vite before finally deploying to Cloudflare. ## Set up and configure the React SPA ### Scaffold a Vite project Start by creating a React TypeScript project with Vite. Next, open the `cloudflare-vite-tutorial` directory in your editor of choice. ### Add the Cloudflare dependencies ### Add the plugin to your Vite config ```ts {3, 6} title="vite.config.ts" import { defineConfig } from "vite"; import react from "@vitejs/plugin-react"; import { cloudflare } from "@cloudflare/vite-plugin"; export default defineConfig({ plugins: [react(), cloudflare()], }); ``` The Cloudflare Vite plugin doesn't require any configuration by default and will look for a `wrangler.jsonc`, `wrangler.json` or `wrangler.toml` in the root of your application. Refer to the [API reference](/workers/vite-plugin/reference/api/) for configuration options. ### Create your Worker config file ```toml name = "cloudflare-vite-tutorial" compatibility_date = "2025-04-03" assets = { not_found_handling = "single-page-application" } ``` The [`not_found_handling`](/workers/static-assets/routing/single-page-application/) value has been set to `single-page-application`. This means that all not found requests will serve the `index.html` file. With the Cloudflare plugin, the `assets` routing configuration is used in place of Vite's default behavior. 
This ensures that your application's [routing configuration](/workers/static-assets/routing/) works the same way while developing as it does when deployed to production. Note that the [`directory`](/workers/static-assets/binding/#directory) field is not used when configuring assets with Vite. The `directory` in the output configuration will automatically point to the client build output. See [Static Assets](/workers/vite-plugin/reference/static-assets/) for more information. :::note When using the Cloudflare Vite plugin, the Worker config (for example, `wrangler.jsonc`) that you provide is the input configuration file. A separate output `wrangler.json` file is created when you run `vite build`. This output file is a snapshot of your configuration at the time of the build and is modified to reference your build artifacts. It is the configuration that is used for preview and deployment. ::: ### Update the .gitignore file When developing Workers, additional files are used and/or generated that should not be stored in git. Add the following lines to your `.gitignore` file: ```txt title=".gitignore" .wrangler .dev.vars* ``` ### Run the development server Run `npm run dev` to start the Vite development server and verify that your application is working as expected. For a purely front-end application, you could now build (`npm run build`), preview (`npm run preview`), and deploy (`npm exec wrangler deploy`) your application. This tutorial, however, will show you how to go a step further and add an API Worker. 
## Add an API Worker

### Configure TypeScript for your Worker code

```jsonc title="tsconfig.worker.json"
{
	"extends": "./tsconfig.node.json",
	"compilerOptions": {
		"tsBuildInfoFile": "./node_modules/.tmp/tsconfig.worker.tsbuildinfo",
		"types": ["@cloudflare/workers-types/2023-07-01", "vite/client"],
	},
	"include": ["worker"],
}
```

```jsonc {6} title="tsconfig.json"
{
	"files": [],
	"references": [
		{ "path": "./tsconfig.app.json" },
		{ "path": "./tsconfig.node.json" },
		{ "path": "./tsconfig.worker.json" },
	],
}
```

### Add to your Worker configuration

```toml
name = "cloudflare-vite-tutorial"
compatibility_date = "2025-04-03"
assets = { not_found_handling = "single-page-application" }
main = "./worker/index.ts"
```

The `main` field specifies the entry file for your Worker code.

### Add your API Worker

```ts title="worker/index.ts"
interface Env {
	ASSETS: Fetcher;
}

export default {
	fetch(request, env) {
		const url = new URL(request.url);

		if (url.pathname.startsWith("/api/")) {
			return Response.json({
				name: "Cloudflare",
			});
		}

		return new Response(null, { status: 404 });
	},
} satisfies ExportedHandler<Env>;
```

The Worker above will be invoked for any non-navigation request that does not match a static asset. It returns a JSON response if the `pathname` starts with `/api/` and otherwise returns a `404` response.

:::note
For top-level navigation requests, browsers send a `Sec-Fetch-Mode: navigate` header. If this is present and the URL does not match a static asset, the `not_found_handling` behavior will be invoked rather than the Worker.
:::

### Call the API from the client

Edit `src/App.tsx` so that it includes an additional button that calls the API and sets some state:

```tsx {8, 32-46} collapse={12-27} title="src/App.tsx"
import { useState } from "react";
import reactLogo from "./assets/react.svg";
import viteLogo from "/vite.svg";
import "./App.css";

function App() {
	const [count, setCount] = useState(0);
	const [name, setName] = useState("unknown");

	return (
		<>
			<div>
				<a href="https://vite.dev" target="_blank">
					<img src={viteLogo} className="logo" alt="Vite logo" />
				</a>
				<a href="https://react.dev" target="_blank">
					<img src={reactLogo} className="logo react" alt="React logo" />
				</a>
			</div>
			<h1>Vite + React</h1>
			<div className="card">
				<button onClick={() => setCount((count) => count + 1)}>
					count is {count}
				</button>
				<p>
					Edit <code>src/App.tsx</code> and save to test HMR
				</p>
			</div>
			<div className="card">
				<button
					onClick={() => {
						fetch("/api/")
							.then((res) => res.json() as Promise<{ name: string }>)
							.then((data) => setName(data.name));
					}}
					aria-label="get name"
				>
					Name from API is: {name}
				</button>
				<p>
					Edit <code>worker/index.ts</code> to change the name
				</p>
			</div>
			<p className="read-the-docs">
				Click on the Vite and React logos to learn more
			</p>
		</>
	);
}

export default App;
```

Now, if you click the button, it will display 'Name from API is: Cloudflare'. Increment the counter to update the application state in the browser. Next, edit `worker/index.ts` by changing the `name` it returns to `'Cloudflare Workers'`. If you click the button again, it will display the new `name` while preserving the previously set counter value. With Vite and the Cloudflare plugin, you can iterate on the client and server parts of your app together, without losing UI state between edits.

### Build your application

Run `npm run build` to build your application.

```sh
npm run build
```

If you inspect the `dist` directory, you will see that it contains two subdirectories:

- `client` - the client code that runs in the browser
- `cloudflare-vite-tutorial` - the Worker code alongside the output `wrangler.json` configuration file

### Preview your application

Run `npm run preview` to validate that your application runs as expected.

```sh
npm run preview
```

This command will run your build output locally in the Workers runtime, closely matching its behavior in production.

### Deploy to Cloudflare

Run `npm exec wrangler deploy` to deploy your application to Cloudflare.

```sh
npm exec wrangler deploy
```

This command will automatically use the output `wrangler.json` that was included in the build output.

## Next steps

In this tutorial, we created an SPA that could be deployed as a Worker with static assets. We then added an API Worker that could be accessed from the front-end code. Finally, we deployed both the client and server-side parts of the application to Cloudflare.
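The asset-routing behavior this tutorial relies on can be sketched as a plain function. This is a simplified, illustrative model only — the `routeRequest` name and the `isStaticAsset` flag are inventions for this sketch, standing in for Cloudflare's real asset matching, not the actual runtime implementation:

```js
// Simplified model of Workers static-asset routing (illustrative only).
// `isStaticAsset` stands in for Cloudflare's real asset-manifest lookup.
function routeRequest(request, isStaticAsset) {
	if (isStaticAsset) {
		return "asset"; // matching static assets are always served directly
	}
	if (request.headers.get("Sec-Fetch-Mode") === "navigate") {
		// Top-level navigations fall back to not_found_handling
		// (single-page-application => index.html) instead of the Worker.
		return "not_found_handling";
	}
	return "worker"; // e.g. a client-side fetch("/api/") invokes the Worker
}
```

This is why the client's `fetch("/api/")` call reaches the Worker while typing an unknown URL into the address bar serves `index.html`.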
Possible next steps include: - Adding a binding to another Cloudflare service such as a [KV namespace](/kv/) or [D1 database](/d1/) - Expanding the API to include additional routes - Using a library, such as [Hono](https://hono.dev/) or [tRPC](https://trpc.io/), in your API Worker --- # Tutorials URL: https://developers.cloudflare.com/workers/tutorials/ import { GlossaryTooltip, ListTutorials, YouTubeVideos } from "~/components"; View tutorials to help you get started with Workers. ## Docs ## Videos --- # API URL: https://developers.cloudflare.com/workers/wrangler/api/ import { Render, TabItem, Tabs, Type, MetaInfo, WranglerConfig, PackageManagers, } from "~/components"; Wrangler offers APIs to programmatically interact with your Cloudflare Workers. - [`unstable_startWorker`](#unstable_startworker) - Start a server for running integration tests against your Worker. - [`unstable_dev`](#unstable_dev) - Start a server for running either end-to-end (e2e) or integration tests against your Worker. - [`getPlatformProxy`](#getplatformproxy) - Get proxies and values for emulating the Cloudflare Workers platform in a Node.js process. ## `unstable_startWorker` This API exposes the internals of Wrangler's dev server, and allows you to customise how it runs. For example, you could use `unstable_startWorker()` to run integration tests against your Worker. 
This example uses `node:test`, but should apply to any testing framework: ```js import assert from "node:assert"; import test, { after, before, describe } from "node:test"; import { unstable_startWorker } from "wrangler"; describe("worker", () => { let worker; before(async () => { worker = await unstable_startWorker({ config: "wrangler.json" }); }); test("hello world", async () => { assert.strictEqual( await (await worker.fetch("http://example.com")).text(), "Hello world", ); }); after(async () => { await worker.dispose(); }); }); ``` ## `unstable_dev` Start an HTTP server for testing your Worker. Once called, `unstable_dev` will return a `fetch()` function for invoking your Worker without needing to know the address or port, as well as a `stop()` function to shut down the HTTP server. By default, `unstable_dev` will perform integration tests against a local server. If you wish to perform an e2e test against a preview Worker, pass `local: false` in the `options` object when calling the `unstable_dev()` function. Note that e2e tests can be significantly slower than integration tests. :::note The `unstable_dev()` function has an `unstable_` prefix because the API is experimental and may change in the future. We recommend migrating to the `unstable_startWorker()` API, documented above. If you have been using `unstable_dev()` for integration testing and want to migrate to Cloudflare's Vitest integration, refer to the [Migrate from `unstable_dev` migration guide](/workers/testing/vitest-integration/migration-guides/migrate-from-unstable-dev/) for more information. ::: ### Constructor ```js const worker = await unstable_dev(script, options); ``` ### Parameters - `script` - A string containing a path to your Worker script, relative to your Worker project's root directory. - `options` - Optional options object containing `wrangler dev` configuration settings. 
  - Include an `experimental` object inside `options` to access experimental features such as `disableExperimentalWarning`.
    - Set `disableExperimentalWarning` to `true` to disable Wrangler's warning about using `unstable_` prefixed APIs.

### Return Type

`unstable_dev()` returns an object containing the following methods:

- `fetch()` `Promise<Response>`
  - Send a request to your Worker. Returns a Promise that resolves with a [`Response`](/workers/runtime-apis/response) object.
  - Refer to [`Fetch`](/workers/runtime-apis/fetch/).
- `stop()` `Promise<void>`
  - Shuts down the dev server.

### Usage

When initiating each test suite, use a `beforeAll()` function to start `unstable_dev()`. The `beforeAll()` function minimizes overhead: starting the dev server takes a few hundred milliseconds, and starting and stopping it for each individual test adds up quickly, slowing your tests down.

In each test case, call `await worker.fetch()` and check that the response is what you expect. To wrap up a test suite, call `await worker.stop()` in an `afterAll` function.
#### Single Worker example ```js const { unstable_dev } = require("wrangler"); describe("Worker", () => { let worker; beforeAll(async () => { worker = await unstable_dev("src/index.js", { experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await worker.stop(); }); it("should return Hello World", async () => { const resp = await worker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); }); ``` ```ts import { unstable_dev } from "wrangler"; import type { UnstableDevWorker } from "wrangler"; describe("Worker", () => { let worker: UnstableDevWorker; beforeAll(async () => { worker = await unstable_dev("src/index.ts", { experimental: { disableExperimentalWarning: true }, }); }); afterAll(async () => { await worker.stop(); }); it("should return Hello World", async () => { const resp = await worker.fetch(); const text = await resp.text(); expect(text).toMatchInlineSnapshot(`"Hello World!"`); }); }); ``` #### Multi-Worker example You can test Workers that call other Workers. In the below example, we refer to the Worker that calls other Workers as the parent Worker, and the Worker being called as a child Worker. If you shut down the child Worker prematurely, the parent Worker will not know the child Worker exists and your tests will fail. 
```js
import { unstable_dev } from "wrangler";

describe("multi-worker testing", () => {
	let childWorker;
	let parentWorker;

	beforeAll(async () => {
		childWorker = await unstable_dev("src/child-worker.js", {
			config: "src/child-wrangler.toml",
			experimental: { disableExperimentalWarning: true },
		});
		parentWorker = await unstable_dev("src/parent-worker.js", {
			config: "src/parent-wrangler.toml",
			experimental: { disableExperimentalWarning: true },
		});
	});

	afterAll(async () => {
		await childWorker.stop();
		await parentWorker.stop();
	});

	it("childWorker should return Hello World itself", async () => {
		const resp = await childWorker.fetch();
		const text = await resp.text();
		expect(text).toMatchInlineSnapshot(`"Hello World!"`);
	});

	it("parentWorker should return Hello World by invoking the child worker", async () => {
		const resp = await parentWorker.fetch();
		const parsedResp = await resp.text();
		expect(parsedResp).toEqual("Parent worker sees: Hello World!");
	});
});
```

```ts
import { unstable_dev } from "wrangler";
import type { UnstableDevWorker } from "wrangler";

describe("multi-worker testing", () => {
	let childWorker: UnstableDevWorker;
	let parentWorker: UnstableDevWorker;

	beforeAll(async () => {
		childWorker = await unstable_dev("src/child-worker.js", {
			config: "src/child-wrangler.toml",
			experimental: { disableExperimentalWarning: true },
		});
		parentWorker = await unstable_dev("src/parent-worker.js", {
			config: "src/parent-wrangler.toml",
			experimental: { disableExperimentalWarning: true },
		});
	});

	afterAll(async () => {
		await childWorker.stop();
		await parentWorker.stop();
	});

	it("childWorker should return Hello World itself", async () => {
		const resp = await childWorker.fetch();
		const text = await resp.text();
		expect(text).toMatchInlineSnapshot(`"Hello World!"`);
	});

	it("parentWorker should return Hello World by invoking the child worker", async () => {
		const resp = await parentWorker.fetch();
		const parsedResp = await resp.text();
		expect(parsedResp).toEqual("Parent worker sees: Hello World!");
	});
});
```

## `getPlatformProxy`

The `getPlatformProxy` function provides a way to obtain an object containing proxies (to **local** `workerd` bindings) and emulations of Cloudflare Workers specific values, allowing you to emulate these in a Node.js process.

:::caution
`getPlatformProxy` is designed to be used exclusively in Node.js applications. `getPlatformProxy` cannot be run inside the Workers runtime.
:::

One general use case for getting a platform proxy is emulating bindings in applications targeting Workers but running outside the Workers runtime (for example, framework local development servers running in Node.js), or for testing purposes (for example, ensuring code properly interacts with a type of binding).

:::note
Binding proxies provided by this function are a best-effort emulation of the real production bindings. Although they are designed to be as close as possible to the real thing, there might be slight differences and inconsistencies between the two.
:::

### Syntax

```js
const platform = await getPlatformProxy(options);
```

### Parameters

- `options` - Optional options object containing preferences for the bindings:
  - `environment` string The environment to use.
  - `configPath` string The path to the config file to use. If no path is specified, the default behavior is to search from the current directory up the filesystem for a [Wrangler configuration file](/workers/wrangler/configuration/) to use. **Note:** this field is optional but if a path is specified it must point to a valid file on the filesystem.
  - `persist` boolean | `{ path: string }` Indicates if and where to persist the bindings data. If `true` or `undefined`, defaults to the same location used by Wrangler, so data can be shared between it and the caller. If `false`, no data is persisted to or read from the filesystem.
**Note:** If you use `wrangler`'s `--persist-to` option, note that this option adds a subdirectory called `v3` under the hood while `getPlatformProxy`'s `persist` does not. For example, if you run `wrangler dev --persist-to ./my-directory`, to reuse the same location using `getPlatformProxy`, you will have to specify: `persist: { path: "./my-directory/v3" }`.

### Return Type

`getPlatformProxy()` returns a `Promise` resolving to an object containing the following fields.

- `env` `Record<string, unknown>`
  - Object containing proxies to bindings that can be used in the same way as production bindings. This matches the shape of the `env` object passed as the second argument to modules-format Workers. These proxy to binding implementations that run inside `workerd`.
  - TypeScript Tip: `getPlatformProxy()` is a generic function. You can pass the shape of the bindings record as a type argument to get proper types without `unknown` values.
- `cf` IncomingRequestCfProperties read-only
  - Mock of the `Request`'s `cf` property, containing data similar to what you would see in production.
- `ctx` object
  - Mock object containing implementations of the [`waitUntil`](/workers/runtime-apis/context/#waituntil) and [`passThroughOnException`](/workers/runtime-apis/context/#passthroughonexception) functions that do nothing.
- `caches` object
  - Emulation of the [Workers `caches` runtime API](/workers/runtime-apis/cache/).
  - For the time being, all cache operations do nothing. A more accurate emulation will be made available soon.
- `dispose()` `() => Promise<void>`
  - Terminates the underlying `workerd` process.
  - Call this after the platform proxy is no longer required by the program. If you are running a long-running process (such as a dev server) that can indefinitely make use of the proxy, you do not need to call this function.

### Usage

The `getPlatformProxy` function uses bindings found in the [Wrangler configuration file](/workers/wrangler/configuration/).
For example, if you have an [environment variable](/workers/configuration/environment-variables/#add-environment-variables-via-wrangler) configuration set up in the Wrangler configuration file:

```toml
[vars]
MY_VARIABLE = "test"
```

You can access the bindings by importing `getPlatformProxy` like this:

```js
import { getPlatformProxy } from "wrangler";

const { env } = await getPlatformProxy();
```

To access the value of the `MY_VARIABLE` binding, add the following to your code:

```js
console.log(`MY_VARIABLE = ${env.MY_VARIABLE}`);
```

This will print the following output: `MY_VARIABLE = test`.

### Supported bindings

All supported bindings found in your [Wrangler configuration file](/workers/wrangler/configuration/) are available to you via `env`.

The bindings supported by `getPlatformProxy` are:

- [Environment variables](/workers/configuration/environment-variables/)
- [Service bindings](/workers/runtime-apis/bindings/service-bindings/)
- [KV namespace bindings](/kv/api/)
- [R2 bucket bindings](/r2/api/workers/workers-api-reference/)
- [Queue bindings](/queues/configuration/javascript-apis/)
- [D1 database bindings](/d1/worker-api/)
- [Hyperdrive bindings](/hyperdrive)

  :::note[Hyperdrive values are simple passthrough ones]
  Values provided by Hyperdrive bindings, such as `connectionString` and `host`, do not have a valid meaning outside of a `workerd` process. Hyperdrive proxies therefore return passthrough values that correspond to the database connection provided by the user, rather than values that would be unusable from within Node.js.
  :::

- [Workers AI bindings](/workers-ai/get-started/workers-wrangler/#2-connect-your-worker-to-workers-ai)
- [Durable Object bindings](/durable-objects/api/)
  - To use a Durable Object binding with `getPlatformProxy`, always specify a [`script_name`](/workers/wrangler/configuration/#durable-objects).
For example, you might have the following binding in a Wrangler configuration file read by `getPlatformProxy`. ```toml [[durable_objects.bindings]] name = "MyDurableObject" class_name = "MyDurableObject" script_name = "external-do-worker" ``` You will need to declare your Durable Object `"MyDurableObject"` in another Worker, called `external-do-worker` in this example. ```ts title="./external-do-worker/src/index.ts" export class MyDurableObject extends DurableObject { // Your DO code goes here } export default { fetch() { // Doesn't have to do anything, but a DO cannot be the default export return new Response("Hello, world!"); }, }; ``` That Worker also needs a Wrangler configuration file that looks like this: ```json { "name": "external-do-worker", "main": "src/index.ts", "compatibility_date": "XXXX-XX-XX" } ``` If you are not using RPC with your Durable Object, you can run a separate Wrangler dev session alongside your framework development server. Otherwise, you can build your application and run both Workers in the same Wrangler dev session. If you are using Pages run: If you are using Workers with Assets run: --- # Bundling URL: https://developers.cloudflare.com/workers/wrangler/bundling/ By default, Wrangler bundles your Worker code using [`esbuild`](https://esbuild.github.io/). This means that Wrangler has built-in support for importing modules from [npm](https://www.npmjs.com/) defined in your `package.json`. To review the exact code that Wrangler will upload to Cloudflare, run `npx wrangler deploy --dry-run --outdir dist`, which will show your Worker code after Wrangler's bundling.
## `esbuild` version

Wrangler uses `esbuild`. We periodically update the `esbuild` version included with Wrangler, and since `esbuild` is a pre-1.0.0 tool, this may sometimes include breaking changes to how bundling works. In particular, we may bump the `esbuild` version in a Wrangler minor version.
:::note Wrangler's inbuilt bundling usually provides the best experience, but we understand there are cases where you will need more flexibility. You can provide `rules` and set `find_additional_modules` in your configuration to control which files are included in the deployed Worker but not bundled into the entry-point file. Furthermore, we have an escape hatch in the form of [Custom Builds](/workers/wrangler/custom-builds/), which lets you run your own build before Wrangler's built-in one. ::: ## Including non-JavaScript modules Bundling your Worker code takes multiple modules and bundles them into one file. Sometimes, you might have modules that cannot be inlined directly into the bundle. For example, instead of bundling a Wasm file into your JavaScript Worker, you would want to upload the Wasm file as a separate module that can be imported at runtime. Wrangler supports this for the following file types: - `.txt` - `.html` - `.bin` - `.wasm` and `.wasm?module` Refer to [Bundling configuration](/workers/wrangler/configuration/#bundling) to customize these file types. For example, with the following import, the variable `data` will be a string containing the contents of `example.html`: ```js import data from "./example.html"; // Where `example.html` is a file in your local directory ``` This is also the basis of Wasm support with Wrangler. To use a Wasm module in a Worker developed with Wrangler, add the following to your Worker: ```js import wasm from "./example.wasm"; // Where `example.wasm` is a file in your local directory const instance = await WebAssembly.instantiate(wasm); // Instantiate Wasm modules in global scope, not within the fetch() handler export default { fetch(request) { const result = instance.exports.exported_func(); }, }; ``` :::caution Cloudflare Workers does not support `WebAssembly.instantiateStreaming()`. 
:::

## Find additional modules

By setting `find_additional_modules` to `true` in your configuration file, Wrangler will traverse the file tree below `base_dir`. Any files that match the `rules` you define will also be included as unbundled, external modules in the deployed Worker. This approach is useful for supporting lazy loading of large or dynamically imported JavaScript files:

- Normally, a large lazy-imported file (for example, `await import("./large-dep.mjs")`) would be bundled directly into your entrypoint, reducing the effectiveness of the lazy loading. If a matching rule is added to `rules`, then this file would only be loaded and executed at runtime when it is actually imported.
- Previously, variable-based dynamic imports (for example, ``await import(`./lang/${language}.mjs`)``) would always fail at runtime because Wrangler had no way of knowing which modules to include in the upload. Providing a rule that matches all these files, such as `{ type = "EsModule", globs = ["./lang/**/*.mjs"], fallthrough = true }`, will ensure these modules are available at runtime.
- "Partial bundling" is supported when `find_additional_modules` is `true` and a source file matches one of the configured `rules`, since Wrangler will then treat it as "external" and not try to bundle it into the entry-point file.

## Conditional exports

Wrangler respects the [conditional `exports` field](https://nodejs.org/api/packages.html#conditional-exports) in `package.json`. This allows developers to implement isomorphic libraries that have different implementations depending on the JavaScript runtime they are running in. When bundling, Wrangler will try to load the [`workerd` key](https://runtime-keys.proposal.wintercg.org/#workerd). Refer to the Wrangler repository for [an example isomorphic package](https://github.com/cloudflare/workers-sdk/tree/main/fixtures/isomorphic-random-example).
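As a sketch, a package taking advantage of conditional exports might declare a `workerd`-specific entry point like this (the package name and file paths here are hypothetical, for illustration only):

```json
{
	"name": "my-isomorphic-package",
	"exports": {
		".": {
			"workerd": "./dist/workerd.mjs",
			"node": "./dist/node.mjs",
			"default": "./dist/index.mjs"
		}
	}
}
```

When Wrangler bundles a Worker that imports this package, it would resolve to `./dist/workerd.mjs`; the same import running under Node.js would resolve to `./dist/node.mjs`.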
## Disable bundling

:::caution
Disabling bundling is not recommended in most scenarios. Use this option only when deploying code pre-processed by other tooling.
:::

If your build tooling already produces build artifacts suitable for direct deployment to Cloudflare, you can opt out of bundling by using the `--no-bundle` command line flag: `npx wrangler deploy --no-bundle`. If you opt out of bundling, Wrangler will not process your code and some features introduced by Wrangler bundling (for example, minification and polyfill injection) will not be available.

Use [Custom Builds](/workers/wrangler/custom-builds/) to customize what Wrangler will bundle and upload to the Cloudflare global network when you use [`wrangler dev`](/workers/wrangler/commands/#dev) and [`wrangler deploy`](/workers/wrangler/commands/#deploy).

## Generated Wrangler configuration

Some framework tools, or custom pre-build processes, generate a modified Wrangler configuration to be used to deploy the Worker code. It is possible for Wrangler to automatically use this generated configuration rather than the original user configuration. See [Generated Wrangler configuration](/workers/wrangler/configuration/#generated-wrangler-configuration) for more information.

---

# Commands

URL: https://developers.cloudflare.com/workers/wrangler/commands/

import {
	TabItem,
	Tabs,
	Render,
	Type,
	MetaInfo,
	WranglerConfig,
} from "~/components";

Wrangler offers a number of commands to manage your Cloudflare Workers.

- [`docs`](#docs) - Open this page in your default browser.
- [`init`](#init) - Create a new project from a variety of web frameworks and templates.
- [`d1`](#d1) - Interact with D1.
- [`vectorize`](#vectorize) - Interact with Vectorize indexes.
- [`hyperdrive`](#hyperdrive) - Manage your Hyperdrives.
- [`deploy`](#deploy) - Deploy your Worker to Cloudflare.
- [`dev`](#dev) - Start a local server for developing your Worker.
- [`delete`](#delete-1) - Delete your Worker from Cloudflare.
- [`kv namespace`](#kv-namespace) - Manage Workers KV namespaces.
- [`kv key`](#kv-key) - Manage key-value pairs within a Workers KV namespace.
- [`kv bulk`](#kv-bulk) - Manage multiple key-value pairs within a Workers KV namespace in batches.
- [`r2 bucket`](#r2-bucket) - Manage Workers R2 buckets.
- [`r2 object`](#r2-object) - Manage Workers R2 objects.
- [`secret`](#secret) - Manage the secret variables for a Worker.
- [`secret bulk`](#secret-bulk) - Manage multiple secret variables for a Worker.
- [`secrets-store secret`](#secrets-store-secret) - Manage account secrets within a secrets store.
- [`secrets-store store`](#secrets-store-store) - Manage your store within secrets store.
- [`workflows`](#workflows) - Manage and configure Workflows.
- [`tail`](#tail) - Start a session to livestream logs from a deployed Worker.
- [`pages`](#pages) - Configure Cloudflare Pages.
- [`pipelines`](#pipelines) - Configure Cloudflare Pipelines.
- [`queues`](#queues) - Configure Workers Queues.
- [`login`](#login) - Authorize Wrangler with your Cloudflare account using OAuth.
- [`logout`](#logout) - Remove Wrangler’s authorization for accessing your account.
- [`whoami`](#whoami) - Retrieve your user information and test your authentication configuration.
- [`versions`](#versions) - Retrieve details for recent versions.
- [`deployments`](#deployments) - Retrieve details for recent deployments.
- [`rollback`](#rollback) - Roll back to a recent deployment.
- [`dispatch-namespace`](#dispatch-namespace) - Interact with a [dispatch namespace](/cloudflare-for-platforms/workers-for-platforms/reference/how-workers-for-platforms-works/#dispatch-namespace).
- [`mtls-certificate`](#mtls-certificate) - Manage certificates used for mTLS connections.
- [`cert`](#cert) - Manage certificates used for mTLS and Certificate Authority (CA) chain connections.
- [`types`](#types) - Generate types from bindings and module rules in configuration.
- [`telemetry`](#telemetry) - Configure whether Wrangler can collect anonymous usage data.
- [`check`](#check) - Validate your Worker.

---

## How to run Wrangler commands

This page provides a reference for Wrangler commands.

```txt
wrangler <COMMAND> [PARAMETERS] [OPTIONS]
```

Since Cloudflare recommends [installing Wrangler locally](/workers/wrangler/install-and-update/) in your project (rather than globally), the way to run Wrangler will depend on your specific setup and package manager.

```sh
npx wrangler <COMMAND> [PARAMETERS] [OPTIONS]
```

```sh
yarn wrangler <COMMAND> [PARAMETERS] [OPTIONS]
```

```sh
pnpm wrangler <COMMAND> [PARAMETERS] [OPTIONS]
```

You can add Wrangler commands that you use often as scripts in your project's `package.json` file:

```json
{
	...
	"scripts": {
		"deploy": "wrangler deploy",
		"dev": "wrangler dev"
	}
	...
}
```

You can then run them using your package manager of choice:

```sh
npm run deploy
```

```sh
yarn run deploy
```

```sh
pnpm run deploy
```

---

## `docs`

Open the Cloudflare developer documentation in your default browser.

```txt
wrangler docs [<COMMAND>]
```

- `COMMAND` - The Wrangler command you want to learn more about. This opens your default browser to the section of the documentation that describes the command.

## `init`

Create a new project via the [create-cloudflare-cli (C3) tool](/workers/get-started/guide/#1-create-a-new-worker-project). A variety of web frameworks are available to choose from, as well as templates. Dependencies are installed by default, with the option to deploy your project immediately.

```txt
wrangler init [<NAME>] [OPTIONS]
```

- `NAME` - The name of the Workers project. This is both the directory name and `name` property in the generated [Wrangler configuration](/workers/wrangler/configuration/).
- `--yes` - Answer yes to any prompts for new projects.
- `--from-dash` - Fetch a Worker initialized from the dashboard.
This is done by passing the flag and the Worker name: `wrangler init --from-dash <name>`.

- The `--from-dash` command will not automatically sync changes made to the dashboard after the command is used. Therefore, it is recommended that you continue using the CLI.

---

## `d1`

Interact with Cloudflare's D1 service.

---

## `hyperdrive`

Manage [Hyperdrive](/hyperdrive/) database configurations.

---

## `vectorize`

Interact with a [Vectorize](/vectorize/) vector database.

---

## `dev`

Start a local server for developing your Worker.

```txt
wrangler dev [<SCRIPT>] [OPTIONS]
```

The `landing` variable, which is a static HTML string, sets up an `input` tag and a corresponding `button`, which calls the `generateQRCode` function. This function will make an HTTP `POST` request back to your Worker, allowing you to see the corresponding QR code image returned on the page.

With the above steps complete, your Worker is ready. The full version of the code looks like this:

```js
const QRCode = require("qrcode-svg");

export default {
	async fetch(request, env, ctx) {
		if (request.method === "POST") {
			return generateQRCode(request);
		}

		return new Response(landing, {
			headers: {
				"Content-Type": "text/html",
			},
		});
	},
};

async function generateQRCode(request) {
	const { text } = await request.json();
	const qr = new QRCode({ content: text || "https://workers.dev" });
	return new Response(qr.svg(), {
		headers: { "Content-Type": "image/svg+xml" },
	});
}

const landing = `
<h1>QR Generator</h1>
<p>Click the below button to generate a new QR code. This will make a request to your Worker.</p>
<input type="text" id="text" value="https://workers.dev" />
<button onclick="generateQRCode()">Generate QR Code</button>
<p>Generated QR Code Image</p>
<img id="qr" src="#" />
<script>
	function generateQRCode() {
		const text = document.querySelector("#text").value;
		fetch(window.location.pathname, {
			method: "POST",
			headers: { "Content-Type": "application/json" },
			body: JSON.stringify({ text }),
		})
			.then((response) => response.text())
			.then((svg) => {
				document.querySelector("#qr").src =
					"data:image/svg+xml;base64," + btoa(svg);
			});
	}
</script>
`;
```

## 5. Deploy your Worker

With all the above steps complete, you have written the code for a QR code generator on Cloudflare Workers. Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run `npx wrangler deploy`, which will build and deploy your code.

```sh title="Deploy your Worker project"
npx wrangler deploy
```

## Related resources

In this tutorial, you built and deployed a Worker application for generating QR codes. If you would like to see the full source code for this application, you can find it [on GitHub](https://github.com/kristianfreeman/workers-qr-code-generator).

If you want to get started building your own projects, review the existing list of [Quickstart templates](/workers/get-started/quickstarts/).

---

# Build a Slackbot

URL: https://developers.cloudflare.com/workers/tutorials/build-a-slackbot/

import { Render, TabItem, Tabs, PackageManagers } from "~/components";

In this tutorial, you will build a [Slack](https://slack.com) bot using [Cloudflare Workers](/workers/). Your bot will make use of GitHub webhooks to send messages to a Slack channel when issues are updated or created, and allow users to write a command to look up GitHub issues from inside Slack.

![After following this tutorial, you will be able to create a Slackbot like the one in this example. Continue reading to build your Slackbot.](~/assets/images/workers/tutorials/slackbot/issue-command.png)

This tutorial is recommended for people who are familiar with writing web applications. You will use TypeScript as the programming language and [Hono](https://hono.dev/) as the web framework. If you have built an application with tools like [Node](https://nodejs.org) and [Express](https://expressjs.com), this project will feel very familiar to you.
If you are new to writing web applications, or have wanted to build something like a Slack bot in the past but were intimidated by deployment or configuration, Workers will let you focus on writing code and shipping projects.

If you would like to review the code, or see how the bot works in an actual Slack channel before proceeding with this tutorial, you can access the final version of the codebase [on GitHub](https://github.com/yusukebe/workers-slack-bot). From GitHub, you can add your own Slack API keys and deploy it to your own Slack channels for testing.

---

## Set up Slack

This tutorial assumes that you already have a Slack account, and the ability to create and manage Slack applications.

### Configure a Slack application

To post messages from your Cloudflare Worker into a Slack channel, you will need to create an application in Slack’s UI. To do this, go to Slack’s API section, at [api.slack.com/apps](https://api.slack.com/apps), and select **Create New App**.

![To create a Slackbot, first create a Slack App](~/assets/images/workers/tutorials/slackbot/create-a-slack-app.png)

Slack applications have many features. You will make use of two of them, Incoming Webhooks and Slash Commands, to build your Worker-powered Slack bot.

#### Incoming Webhook

Incoming Webhooks are URLs that you can use to send messages to your Slack channels. Your incoming webhook will be paired with GitHub’s webhook support to send messages to a Slack channel whenever there are updates to issues in a given repository. You will see the code in more detail as you build your application. First, create a Slack webhook:

1. On the sidebar of Slack's UI, select **Incoming Webhooks**.
2. In **Webhook URLs for your Workspace**, select **Add New Webhook to Workspace**.
3. On the following screen, select the channel that you want your webhook to send messages to (you can select a room, like #general or #code, or be messaged directly by your Slack bot when the webhook is called).
4. Authorize the new webhook URL.

After authorizing your webhook URL, you will be returned to the **Incoming Webhooks** page and can view your new webhook URL. You will add this into your Workers code later. Next, you will add the second component to your Slack bot: a Slash Command.

![Select Add New Webhook to Workspace to add a new Webhook URL in Slack's dashboard](~/assets/images/workers/tutorials/slackbot/slack-incoming-webhook.png)

#### Slash Command

A Slash Command in Slack is a custom-configured command that can be attached to a URL request. For example, if you configured `/weather <zip>`, Slack would make an HTTP POST request to a configured URL, passing the text `<zip>` to get the weather for a specified zip code.

In your application, you will use the `/issue` command to look up GitHub issues using the [GitHub API](https://developer.github.com). Typing `/issue cloudflare/wrangler#1` will send the text `cloudflare/wrangler#1` in an HTTP POST request to your application, which the application will use to find the [relevant GitHub issue](https://github.com/cloudflare/wrangler-legacy/issues/1).

1. On the Slack sidebar, select **Slash Commands**.
2. Create your first slash command. For this tutorial, you will use the command `/issue`. The request URL should be the `/lookup` path on your application URL: for example, if your application will be hosted at `https://myworkerurl.com`, the Request URL should be `https://myworkerurl.com/lookup`.

![You must create a Slash Command in Slack's dashboard and attach it to a Request URL](~/assets/images/workers/tutorials/slackbot/create-slack-command.png)

### Configure your GitHub Webhooks

Your Cloudflare Workers application will be able to handle incoming requests from Slack.
It should also be able to receive events directly from GitHub. If a GitHub issue is created or updated, you can make use of GitHub webhooks to send that event to your Workers application and post a corresponding message in Slack.

To configure a webhook:

1. Go to your GitHub repository's **Settings** > **Webhooks** > **Add webhook**. If you have a repository like `https://github.com/user/repo`, you can access the **Webhooks** page directly at `https://github.com/user/repo/settings/hooks`.
2. Set the Payload URL to the `/webhook` path on your Worker URL. For example, if your Worker will be hosted at `https://myworkerurl.com`, the Payload URL should be `https://myworkerurl.com/webhook`.
3. In the **Content type** dropdown, select **application/json**. The **Content type** for your payload can either be a URL-encoded payload (`application/x-www-form-urlencoded`) or JSON (`application/json`). For the purpose of this tutorial, and to make parsing the payload sent to your application easier, select JSON.
4. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**. GitHub webhooks allow you to specify which events you would like to have sent to your webhook. By default, the webhook will send `push` events from your repository. For the purpose of this tutorial, you will choose **Let me select individual events**.
5. Select the **Issues** event type. There are many different event types that can be enabled for your webhook. Selecting **Issues** will send every issue-related event to your webhook, including when issues are opened, edited, deleted, and more. If you would like to expand your Slack bot application in the future, you can select more of these events after the tutorial.
6. Select **Add webhook**.
![Create a GitHub Webhook in the GitHub dashboard](~/assets/images/workers/tutorials/slackbot/new-github-webhook.png)

When your webhook is created, it will attempt to send a test payload to your application. Since your application is not actually deployed yet, leave the configuration as it is. You will later return to your repository to create, edit, and close some issues to ensure that the webhook is working once your application is deployed.

## Init

To initialize the project, use the command line interface [C3 (create-cloudflare-cli)](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare). Follow these steps to create a Hono project.

- For _What would you like to start with?_, select `Framework Starter`.
- For _Which development framework do you want to use?_, select `Hono`.
- For _Do you want to deploy your application?_, select `No`.

Go to the `slack-bot` directory:

```sh
cd slack-bot
```

Open `src/index.ts` in an editor to find the following code.

```ts
import { Hono } from "hono";

type Bindings = {
	[key in keyof CloudflareBindings]: CloudflareBindings[key];
};

const app = new Hono<{ Bindings: Bindings }>();

app.get("/", (c) => {
	return c.text("Hello Hono!");
});

export default app;
```

This is a minimal application using Hono. If a GET request comes in on the path `/`, it will return a response with the text `Hello Hono!`. It also returns a `404 Not Found` message with status code 404 if any other path or method is accessed.

To run the application on your local machine, execute the following command.

```sh title="Run your application locally"
npm run dev
```

```sh title="Run your application locally"
yarn dev
```

Access `http://localhost:8787` in your browser after the server has started, and you can see the message. Hono helps you create your Workers application easily and quickly.

## Build

Now, let's create a Slack bot on Cloudflare Workers.
### Separating files

You can create your application in several files instead of writing all endpoints and functions in one file. With Hono, you can add the routing of child applications to a parent application using the `app.route()` function. For example, imagine the following Web API application.

```ts
import { Hono } from "hono";

const app = new Hono();

app.get("/posts", (c) => c.text("Posts!"));
app.post("/posts", (c) => c.text("Created!", 201));

export default app;
```

You can add the routes under `/api/v1`.

```ts null {2,6}
import { Hono } from "hono";
import api from "./api";

const app = new Hono();

app.route("/api/v1", api);

export default app;
```

It will return `Posts!` when accessing `GET /api/v1/posts`.

The Slack bot will have two child applications, each called a "route":

1. The `lookup` route will take requests from Slack (sent when a user uses the `/issue` command), and look up the corresponding issue using the GitHub API. This application will be added to `/lookup` in the main application.
2. The `webhook` route will be called when an issue changes on GitHub, via a configured webhook. This application will be added to `/webhook` in the main application.

Create the route files in a directory named `routes`.

```sh title="Create new folders and files"
mkdir -p src/routes
touch src/routes/lookup.ts
touch src/routes/webhook.ts
```

Then update the main application.

```ts null {2,3,7,8}
import { Hono } from "hono";
import lookup from "./routes/lookup";
import webhook from "./routes/webhook";

const app = new Hono();

app.route("/lookup", lookup);
app.route("/webhook", webhook);

export default app;
```

### Defining TypeScript types

Before implementing the actual functions, you need to define the TypeScript types you will use in this project. Create a new file in the application at `src/types.ts` and write the code. `Bindings` is a type that describes the Cloudflare Workers environment variables.
`Issue` is a type for a GitHub issue and `User` is a type for a GitHub user. You will need these later.

```ts
export type Bindings = {
	SLACK_WEBHOOK_URL: string;
};

export type Issue = {
	html_url: string;
	title: string;
	body: string;
	state: string;
	created_at: string;
	number: number;
	user: User;
};

type User = {
	html_url: string;
	login: string;
	avatar_url: string;
};
```

### Creating the lookup route

Start creating the lookup route in `src/routes/lookup.ts`.

```ts
import { Hono } from "hono";

const app = new Hono();

export default app;
```

To understand how you should design this function, you need to understand how Slack slash commands send data to URLs. According to the [documentation for Slack slash commands](https://api.slack.com/interactivity/slash-commands), Slack sends an HTTP POST request to your specified URL, with an `application/x-www-form-urlencoded` content type. For example, if someone were to type `/issue cloudflare/wrangler#1`, you could expect a data payload in the format:

```txt
token=gIkuvaNzQIHg97ATvDxqgjtO
&team_id=T0001
&team_domain=example
&enterprise_id=E0001
&enterprise_name=Globular%20Construct%20Inc
&channel_id=C2147483705
&channel_name=test
&user_id=U2147483697
&user_name=Steve
&command=/issue
&text=cloudflare/wrangler#1
&response_url=https://hooks.slack.com/commands/1234/5678
&trigger_id=13345224609.738474920.8088930838d88f008e0
```

Given this payload body, you need to parse it, and get the value of the `text` key. With that `text`, for example, `cloudflare/wrangler#1`, you can parse that string into known pieces of data (`owner`, `repo`, and `issue_number`), and use it to make a request to GitHub’s API, to retrieve the issue data.

With Slack slash commands, you can respond to a slash command by returning structured data as the response to the incoming slash command.
In this case, you should use the response from GitHub’s API to present a formatted version of the GitHub issue, including pieces of data like the title of the issue, who created it, and the date it was created. Slack’s [Block Kit](https://api.slack.com/block-kit) framework will allow you to return a detailed message response, by constructing text and image blocks with the data from GitHub’s API.

#### Parsing slash commands

To begin, the `lookup` route should parse the messages coming from Slack. As previously mentioned, the Slack API sends an HTTP POST in URL-encoded format. You can get the `text` variable by parsing the body with `c.req.parseBody()`.

```ts null {5,6,7,8,9,10}
import { Hono } from "hono";

const app = new Hono();

app.post("/", async (c) => {
	const { text } = await c.req.parseBody();
	if (typeof text !== "string") {
		return c.notFound();
	}
});

export default app;
```

Given a `text` variable that contains text like `cloudflare/wrangler#1`, you should parse that text, and get the individual parts from it for use with GitHub’s API: `owner`, `repo`, and `issue_number`.

To do this, create a new file in your application, at `src/utils/github.ts`. This file will contain a number of “utility” functions for working with GitHub’s API. The first of these will be a string parser, called `parseGhIssueString`:

```ts
const ghIssueRegex = /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/;

export const parseGhIssueString = (text: string) => {
	const match = text.match(ghIssueRegex);
	return match ? (match.groups ?? {}) : {};
};
```

`parseGhIssueString` takes in a `text` input, matches it against `ghIssueRegex`, and if a match is found, returns the `groups` object from that match, making use of the `owner`, `repo`, and `issue_number` capture groups defined in the regex.
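As a quick check (the inputs here are illustrative, not part of the tutorial files), calling the parser on the text Slack would send for `/issue cloudflare/wrangler#1` yields the three named groups:

```ts
// The same parser as above, shown with sample inputs.
const ghIssueRegex = /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)#(?<issue_number>\d*)/;

const parseGhIssueString = (text: string) => {
	const match = text.match(ghIssueRegex);
	return match ? (match.groups ?? {}) : {};
};

const parsed = parseGhIssueString("cloudflare/wrangler#1");
// → { owner: "cloudflare", repo: "wrangler", issue_number: "1" }

const empty = parseGhIssueString("not an issue reference");
// → {} (no match, so the parser returns an empty object)
```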
By exporting this function from `src/utils/github.ts`, you can make use of it back in `src/routes/lookup.ts`:

```ts null {2,12}
import { Hono } from "hono";
import { parseGhIssueString } from "../utils/github";

const app = new Hono();

app.post("/", async (c) => {
	const { text } = await c.req.parseBody();
	if (typeof text !== "string") {
		return c.notFound();
	}

	const { owner, repo, issue_number } = parseGhIssueString(text);
});

export default app;
```

#### Making requests to GitHub’s API

With this data, you can make your first API lookup to GitHub. Again, make a new function in `src/utils/github.ts`, to make a `fetch` request to the GitHub API for the issue data:

```ts null {8,9,10,11,12}
const ghIssueRegex = /(?<owner>[\w.-]*)\/(?<repo>[\w.-]*)\#(?<issue_number>\d*)/;

export const parseGhIssueString = (text: string) => {
	const match = text.match(ghIssueRegex);
	return match ? (match.groups ?? {}) : {};
};

export const fetchGithubIssue = (
	owner: string,
	repo: string,
	issue_number: string,
) => {
	const url = `https://api.github.com/repos/${owner}/${repo}/issues/${issue_number}`;
	const headers = { "User-Agent": "simple-worker-slack-bot" };
	return fetch(url, { headers });
};
```

Back in `src/routes/lookup.ts`, use `fetchGithubIssue` to make a request to GitHub’s API, and parse the response:

```ts null {2,3,14,15}
import { Hono } from "hono";
import { fetchGithubIssue, parseGhIssueString } from "../utils/github";
import { Issue } from "../types";

const app = new Hono();

app.post("/", async (c) => {
	const { text } = await c.req.parseBody();
	if (typeof text !== "string") {
		return c.notFound();
	}

	const { owner, repo, issue_number } = parseGhIssueString(text);
	const response = await fetchGithubIssue(owner, repo, issue_number);
	const issue = (await response.json()) as Issue;
});

export default app;
```

#### Constructing a Slack message

After you have received a response back from GitHub’s API, the final step is to construct a Slack message with the issue data, and return it to the user.
The final result will look something like this:

![A successful Slack Message will have the components listed below](~/assets/images/workers/tutorials/slackbot/issue-slack-message.png)

You can see four different pieces in the above screenshot:

1. The first line (bolded) links to the issue, and shows the issue title.
2. The following lines (including code snippets) are the issue body.
3. The last line of text shows the issue status, the issue creator (with a link to the user’s GitHub profile), and the creation date for the issue.
4. The profile picture of the issue creator, on the right-hand side.

The previously mentioned [Block Kit](https://api.slack.com/block-kit) framework will help take the issue data (in the structure outlined in [GitHub’s REST API documentation](https://developer.github.com/v3/issues/)) and format it into something like the above screenshot.

Create another file, `src/utils/slack.ts`, to contain the function `constructGhIssueSlackMessage`, a function for taking issue data, and turning it into a collection of blocks. Blocks are JavaScript objects that Slack will use to format the message:

```ts
import { Issue } from "../types";

export const constructGhIssueSlackMessage = (
	issue: Issue,
	issue_string: string,
	prefix_text?: string,
) => {
	const issue_link = `<${issue.html_url}|${issue_string}>`;
	const user_link = `<${issue.user.html_url}|${issue.user.login}>`;
	const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();

	const text_lines = [
		prefix_text,
		`*${issue.title} - ${issue_link}*`,
		issue.body,
		`*${issue.state}* - Created by ${user_link} on ${date}`,
	];
};
```

Slack messages accept a variant of Markdown, which supports bold text via asterisks (`*bolded text*`), and links in the format `<https://example.com|Link text>`.
Given that format, construct `issue_link`, which takes the `html_url` property from the GitHub API `issue` data (in the format `https://github.com/cloudflare/wrangler-legacy/issues/1`), and the `issue_string` sent from the Slack slash command, and combines them into a clickable link in the Slack message.

`user_link` is similar, using `issue.user.html_url` (in the format `https://github.com/signalnerve`, a GitHub user) and the user’s GitHub username (`issue.user.login`), to construct a clickable link to the GitHub user.

Finally, parse `issue.created_at`, an ISO 8601 string, convert it into an instance of a JavaScript `Date`, and turn it into a formatted string, in the format `MM/DD/YY`.

With those variables in place, `text_lines` is an array of each line of text for the Slack message. The first line is the **issue title** and the **issue link**, the second is the **issue body**, and the final line is the **issue state** (for example, open or closed), the **user link**, and the **creation date**.

With the text constructed, you can finally construct your Slack message, returning an array of blocks for Slack’s [Block Kit](https://api.slack.com/block-kit). In this case, there is only one block: a [section](https://api.slack.com/reference/messaging/blocks#section) block with Markdown text, and an accessory image of the user who created the issue.
Return that single block inside of an array, to complete the `constructGhIssueSlackMessage` function:

```ts null {15,16,17,18,19,20,21,22,23,24,25,26,27,28}
import { Issue } from "../types";

export const constructGhIssueSlackMessage = (
	issue: Issue,
	issue_string: string,
	prefix_text?: string,
) => {
	const issue_link = `<${issue.html_url}|${issue_string}>`;
	const user_link = `<${issue.user.html_url}|${issue.user.login}>`;
	const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();

	const text_lines = [
		prefix_text,
		`*${issue.title} - ${issue_link}*`,
		issue.body,
		`*${issue.state}* - Created by ${user_link} on ${date}`,
	];

	return [
		{
			type: "section",
			text: {
				type: "mrkdwn",
				text: text_lines.join("\n"),
			},
			accessory: {
				type: "image",
				image_url: issue.user.avatar_url,
				alt_text: issue.user.login,
			},
		},
	];
};
```

#### Finishing the lookup route

In `src/routes/lookup.ts`, use `constructGhIssueSlackMessage` to construct `blocks`, and return them as a new response with `c.json()` when the slash command is called:

```ts null {3,17,18,19,20,21,22}
import { Hono } from "hono";
import { fetchGithubIssue, parseGhIssueString } from "../utils/github";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Issue } from "../types";

const app = new Hono();

app.post("/", async (c) => {
	const { text } = await c.req.parseBody();
	if (typeof text !== "string") {
		return c.notFound();
	}

	const { owner, repo, issue_number } = parseGhIssueString(text);
	const response = await fetchGithubIssue(owner, repo, issue_number);
	const issue = (await response.json()) as Issue;

	const blocks = constructGhIssueSlackMessage(issue, text);

	return c.json({
		blocks,
		response_type: "in_channel",
	});
});

export default app;
```

One additional parameter passed into the response is `response_type`. By default, responses to slash commands are ephemeral, meaning that they are only seen by the user who writes the slash command.
Passing a `response_type` of `in_channel`, as seen above, will cause the response to appear for all users in the channel. If you would like the messages to remain private, remove the `response_type` line. This will cause `response_type` to default to `ephemeral`.

#### Handling errors

The `lookup` route is almost complete, but there are a number of errors that can occur in the route, such as parsing the body from Slack, getting the issue from GitHub, or constructing the Slack message itself. Although Hono applications can handle errors without any additional code, you can customize the response returned in the following way.

```ts null {25,26,27,28,29,30}
import { Hono } from "hono";
import { fetchGithubIssue, parseGhIssueString } from "../utils/github";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Issue } from "../types";

const app = new Hono();

app.post("/", async (c) => {
	const { text } = await c.req.parseBody();
	if (typeof text !== "string") {
		return c.notFound();
	}

	const { owner, repo, issue_number } = parseGhIssueString(text);
	const response = await fetchGithubIssue(owner, repo, issue_number);
	const issue = (await response.json()) as Issue;

	const blocks = constructGhIssueSlackMessage(issue, text);

	return c.json({
		blocks,
		response_type: "in_channel",
	});
});

app.onError((_e, c) => {
	return c.text(
		"Uh-oh! We couldn't find the issue you provided. " +
			"We can only find public issues in the following format: `owner/repo#issue_number`.",
	);
});

export default app;
```

### Creating the webhook route

You are now halfway through implementing the routes for your Workers application. In implementing the next route, `src/routes/webhook.ts`, you will re-use a lot of the code that you have already written for the lookup route. At the beginning of this tutorial, you configured a GitHub webhook to track any events related to issues in your repository.
When an issue is opened, for example, the function corresponding to the path `/webhook` on your Workers application should take the data sent to it from GitHub, and post a new message in the configured Slack channel.

In `src/routes/webhook.ts`, define a blank Hono application. The difference from the `lookup` route is that the `Bindings` type is passed as a generic to `new Hono()`. This is necessary to give the appropriate TypeScript type to `SLACK_WEBHOOK_URL`, which will be used later.

```ts
import { Hono } from "hono";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

export default app;
```

Much like with the `lookup` route, you will need to parse the incoming payload inside of `request`, get the relevant issue data from it (refer to [the GitHub API documentation on `IssueEvent`](https://developer.github.com/v3/activity/events/types/#issuesevent) for the full payload schema), and send a formatted message to Slack to indicate what has changed. The final version will look something like this:

![A successful Webhook Message example](~/assets/images/workers/tutorials/slackbot/webhook_example.png)

Compare this message format to the format returned when a user uses the `/issue` slash command. You will see that there is only one actual difference between the two: the addition of an action text on the first line, in the format `An issue was $action:`. This action, which is sent as part of the `IssueEvent` from GitHub, will be used as you construct a very familiar looking collection of blocks using Slack’s Block Kit.
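To make the parsing step that follows concrete, here is a trimmed, illustrative `IssueEvent` payload (the values are made up, and GitHub's real payload carries many more fields) showing only the properties this tutorial reads:

```json
{
	"action": "opened",
	"issue": {
		"html_url": "https://github.com/cloudflare/wrangler-legacy/issues/1",
		"title": "Example issue title",
		"body": "Example issue body",
		"state": "open",
		"created_at": "2019-01-01T00:00:00Z",
		"number": 1,
		"user": {
			"html_url": "https://github.com/signalnerve",
			"login": "signalnerve",
			"avatar_url": "https://avatar.example.com/signalnerve.png"
		}
	},
	"repository": {
		"name": "wrangler-legacy",
		"owner": { "login": "cloudflare" }
	}
}
```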
#### Parsing event data

To start filling out the route, parse the JSON-formatted request body into an object and construct some helper variables:

```ts null {2,6,7,8,9,10}
import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";

const app = new Hono();

app.post("/", async (c) => {
	const { action, issue, repository } = await c.req.json();
	const prefix_text = `An issue was ${action}:`;
	const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
});

export default app;
```

An `IssueEvent`, the payload sent from GitHub as part of your webhook configuration, includes an `action` (what happened to the issue: for example, it was opened, closed, locked, etc.), the `issue` itself, and the `repository`, among other things. Use `c.req.json()` to convert the payload body of the request from JSON into a plain JS object. Use ES6 destructuring to set `action`, `issue` and `repository` as variables you can use in your code.

`prefix_text` is a string indicating what happened to the issue, and `issue_string` is the familiar string `owner/repo#issue_number` that you have seen before: while the `lookup` route directly used the text sent from Slack to fill in `issue_string`, here you will construct it from the data passed in the JSON payload.

#### Constructing and sending a Slack message

The messages your Slack bot sends back to your Slack channel from the `lookup` and `webhook` routes are incredibly similar. Because of this, you can re-use the existing `constructGhIssueSlackMessage` to continue populating `src/routes/webhook.ts`.
Import the function from `src/utils/slack.ts`, and pass the issue data into it:

```ts null {10}
import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";

const app = new Hono();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);
});

export default app;
```

Importantly, the usage of `constructGhIssueSlackMessage` in this handler adds one additional argument to the function, `prefix_text`. Update the corresponding function inside of `src/utils/slack.ts`, adding `prefix_text` to the collection of `text_lines` in the message block, if it has been passed in to the function.

Add a utility function, `compact`, which takes an array, and filters out any `null` or `undefined` values from it. This function will be used to remove `prefix_text` from `text_lines` if it has not actually been passed in to the function, such as when called from `src/routes/lookup.ts`.
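Before wiring `compact` in, you can check its behavior in isolation (a quick standalone sketch in plain JavaScript):

```javascript
// Same one-liner as the tutorial's compact helper: keeps only truthy values.
const compact = (array) => array.filter((el) => el);

// With prefix_text undefined (the lookup route's case), the prefix line
// simply disappears from the joined message text:
const text_lines = [undefined, "*Title - link*", "Issue body"];
console.log(compact(text_lines).join("\n"));

// Note: because filter((el) => el) tests truthiness, empty strings and 0 are
// dropped as well. That is fine here, since every real line is non-empty text.
```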
The full (and final) version of `src/utils/slack.ts` looks like this:

```ts null {3,26}
import { Issue } from "../types";

const compact = (array: unknown[]) => array.filter((el) => el);

export const constructGhIssueSlackMessage = (
  issue: Issue,
  issue_string: string,
  prefix_text?: string,
) => {
  const issue_link = `<${issue.html_url}|${issue_string}>`;
  const user_link = `<${issue.user.html_url}|${issue.user.login}>`;
  const date = new Date(Date.parse(issue.created_at)).toLocaleDateString();

  const text_lines = [
    prefix_text,
    `*${issue.title} - ${issue_link}*`,
    issue.body,
    `*${issue.state}* - Created by ${user_link} on ${date}`,
  ];

  return [
    {
      type: "section",
      text: {
        type: "mrkdwn",
        text: compact(text_lines).join("\n"),
      },
      accessory: {
        type: "image",
        image_url: issue.user.avatar_url,
        alt_text: issue.user.login,
      },
    },
  ];
};
```

Back in `src/routes/webhook.ts`, the `blocks` that are returned from `constructGhIssueSlackMessage` become the body in a new `fetch` request, an HTTP POST request to a Slack webhook URL.
Once that request completes, return a response with status code `200`, and the body text `"OK"`:

```ts null {13,14,15,16,17,18,19}
import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);

  const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, {
    body: JSON.stringify({ blocks }),
    method: "POST",
    headers: { "Content-Type": "application/json" },
  });

  return c.text("OK");
});

export default app;
```

The constant `SLACK_WEBHOOK_URL` represents the Slack Webhook URL that you created all the way back in the [Incoming Webhook](/workers/tutorials/build-a-slackbot/#incoming-webhook) section of this tutorial.

:::caution
Since this webhook allows developers to post directly to your Slack channel, keep it secret.
:::

To use this constant inside of your codebase, use the [`wrangler secret`](/workers/wrangler/commands/#secret) command:

```sh title="Set the SLACK_WEBHOOK_URL secret"
npx wrangler secret put SLACK_WEBHOOK_URL
```

```sh output
Enter a secret value: https://hooks.slack.com/services/abc123
```

#### Handling errors

Similarly to the `lookup` route, the `webhook` route should include some basic error handling. Unlike `lookup`, which sends responses directly back into Slack, if something goes wrong with your webhook, it may be useful to return an error response to GitHub. To do this, write a custom error handler with `app.onError()` and return a new response with a status code of `500`.
The final version of `src/routes/webhook.ts` looks like this:

```ts null {24,25,26,27,28,29,30,31}
import { Hono } from "hono";
import { constructGhIssueSlackMessage } from "../utils/slack";
import { Bindings } from "../types";

const app = new Hono<{ Bindings: Bindings }>();

app.post("/", async (c) => {
  const { action, issue, repository } = await c.req.json();
  const prefix_text = `An issue was ${action}:`;
  const issue_string = `${repository.owner.login}/${repository.name}#${issue.number}`;
  const blocks = constructGhIssueSlackMessage(issue, issue_string, prefix_text);

  const fetchResponse = await fetch(c.env.SLACK_WEBHOOK_URL, {
    body: JSON.stringify({ blocks }),
    method: "POST",
    headers: { "Content-Type": "application/json" },
  });

  if (!fetchResponse.ok) throw new Error();

  return c.text("OK");
});

app.onError((_e, c) => {
  return c.json(
    {
      message: "Unable to handle webhook",
    },
    500,
  );
});

export default app;
```

## Deploy

By completing the preceding steps, you have finished writing the code for your Slack bot. You can now deploy your application.

Wrangler has built-in support for bundling, uploading, and releasing your Cloudflare Workers application. To do this, run the following command, which will build and deploy your code:

```sh title="Deploy your application"
npm run deploy
```

```sh title="Deploy your application"
yarn deploy
```

Deploying your Workers application should now cause issue updates to start appearing in your Slack channel, as the GitHub webhook can now successfully reach your Workers webhook route:

![When you create a new issue, a Slackbot will now appear in your Slack channel](/images/workers/tutorials/slackbot/create-new-issue.gif)

## Related resources

In this tutorial, you built and deployed a Cloudflare Workers application that can respond to GitHub webhook events, and allow GitHub API lookups within Slack.
If you would like to review the full source code for this application, you can find the repository [on GitHub](https://github.com/yusukebe/workers-slack-bot).

If you want to get started building your own projects, review the existing list of [Quickstart templates](/workers/get-started/quickstarts/).

---

# Build a todo list Jamstack application

URL: https://developers.cloudflare.com/workers/tutorials/build-a-jamstack-app/

import { Render, PackageManagers, WranglerConfig } from "~/components";

In this tutorial, you will build a todo list application using HTML, CSS, and JavaScript. The application data will be stored in [Workers KV](/kv/api/).

![Preview of a finished todo list. Continue reading for instructions on how to set up a todo list.](~/assets/images/workers/tutorials/jamstack/finished.png)

Before starting this project, you should have some experience with HTML, CSS, and JavaScript. You will learn:

1. How building with Workers allows you to focus on writing code and shipping finished products.
2. How the addition of Workers KV makes this tutorial a great introduction to building full, data-driven applications.

If you would like to see the finished code for this project, find the [project on GitHub](https://github.com/lauragift21/cloudflare-workers-todos) and refer to the [live demo](https://todos.examples.workers.dev/) to review what you will be building.

## 1. Create a new Workers project

First, use the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) CLI tool to create a new Cloudflare Workers project named `todos`. In this tutorial, you will use the default `Hello World` template to create a Workers project.

Move into your newly created directory:

```sh
cd todos
```

Inside of your new `todos` Worker project directory, `index.js` represents the entry point to your Cloudflare Workers application.
All incoming HTTP requests to a Worker are passed to the [`fetch()` handler](/workers/runtime-apis/handlers/fetch/) as a [request](/workers/runtime-apis/request/) object. After a request is received by the Worker, the response your application constructs will be returned to the user. This tutorial will guide you through understanding how the request/response pattern works and how you can use it to build fully featured applications.

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

In your default `index.js` file, you can see that request/response pattern in action. The `fetch` handler constructs a new `Response` with the body text `'Hello World!'`. When a Worker receives a `request`, the Worker returns the newly constructed response to the client. Your Worker will serve new responses directly from [Cloudflare's global network](https://www.cloudflare.com/network) instead of continuing to your origin server. A standard server would accept requests and return responses. Cloudflare Workers allows you to respond by constructing responses directly on the Cloudflare global network.

## 2. Review project details

Any project you deploy to Cloudflare Workers can make use of modern JavaScript tooling like [ES modules](/workers/reference/migrate-to-module-workers/), `npm` packages, and [`async`/`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function) functions to build your application. In addition to writing Workers, you can use Workers to [build full applications](/workers/tutorials/build-a-slackbot/) using the same tooling and process as in this tutorial.

In this tutorial, you will build a todo list application running on Workers that allows reading data from a [KV](/kv/) store and using the data to populate an HTML response to send to the client.

The work needed to create this application is split into three tasks:

1. Write data to KV.
2. Render data from KV.
3. Add todos from the application UI.

For the remainder of this tutorial you will complete each task, iterating on your application, and then publish it to your own domain.

## 3. Write data to KV

To begin, you need to understand how to populate your todo list with actual data. To do this, use [Cloudflare Workers KV](/kv/) — a key-value store that you can access inside of your Worker to read and write data.

To get started with KV, set up a namespace. All of your cached data will be stored inside that namespace and, with configuration, you can access that namespace inside the Worker with a predefined variable.

Use Wrangler to create a new namespace called `TODOS` with the [`kv namespace create` command](/workers/wrangler/commands/#kv-namespace-create) and get the associated namespace ID by running the following command in your terminal:

```sh title="Create a new KV namespace"
npx wrangler kv namespace create "TODOS" --preview
```

The associated namespace can be combined with a `--preview` flag to interact with a preview namespace instead of a production namespace. Namespaces can be added to your application by defining them inside your Wrangler configuration. Copy your newly created namespace ID, and in your [Wrangler configuration file](/workers/wrangler/configuration/), define a `kv_namespaces` key to set up your namespace:

```toml
kv_namespaces = [
  {binding = "TODOS", id = "", preview_id = ""}
]
```

The defined namespace, `TODOS`, will now be available inside of your codebase. With that, it is time to understand the [KV API](/kv/api/). A KV namespace has three primary methods you can use to interface with your cache: `get`, `put`, and `delete`.

Start storing data by defining an initial set of data, which you will put inside of the cache using the `put` method. The following example defines a `defaultData` object instead of an array of todo items.
You may want to store metadata and other information inside of this cache object later on. Given that data object, use `JSON.stringify` to add a string into the cache:

```js
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    await env.TODOS.put("data", JSON.stringify(defaultData));
    return new Response("Hello World!");
  },
};
```

Workers KV is an eventually consistent, global datastore. Any writes within a region are immediately reflected within that same region but will not be immediately available in other regions. However, those writes will eventually be available everywhere and, at that point, Workers KV guarantees that data within each region will be consistent.

Given the presence of data in the cache and the assumption that your cache is eventually consistent, this code needs a slight adjustment: the application should check the cache and use its value, if the key exists. If it does not, you will use `defaultData` as the data source for now (it should be set in the future) and write it to the cache for future use. After breaking out the code into a few functions for simplicity, the result looks like this:

```js
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    const setCache = (data) => env.TODOS.put("data", data);
    const getCache = () => env.TODOS.get("data");

    let data;
    const cache = await getCache();
    if (!cache) {
      await setCache(JSON.stringify(defaultData));
      data = defaultData;
    } else {
      data = JSON.parse(cache);
    }
    return new Response(JSON.stringify(data));
  },
};
```

## 4. Render data from KV

Given the presence of data in your code, which is the cached data object for your application, you should take this data and render it in a user interface.
To do this, make a new `html` variable in your Workers script and use it to build up a static HTML template that you can serve to the client:

```js
const html = `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
  </body>
</html>
`;
```

In `fetch`, construct a new `Response` with a `Content-Type: text/html` header and serve it to the client:

```js
async fetch(request, env, ctx) {
  // previous code

  return new Response(html, {
    headers: { "Content-Type": "text/html" },
  });
}
```

You have a static HTML site being rendered and you can begin populating it with data. In the body, add a `div` tag with an `id` of `todos`:

```js null {8}
const html = `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <div id="todos"></div>
  </body>
</html>
`;
```

Add a `<script>` tag at the end of the body. The script reads `window.todos` and, for each todo in the array, creates a `div` element and appends it to the `#todos` container:

```js null {9,10,11,12,13,14,15,16,17}
const html = `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <div id="todos"></div>
    <script>
      window.todos = [];
      var todoContainer = document.querySelector("#todos");
      window.todos.forEach((todo) => {
        var el = document.createElement("div");
        el.textContent = todo.name;
        todoContainer.appendChild(el);
      });
    </script>
  </body>
</html>
`;
```

Your static page can take in `window.todos` and render HTML based on it, but you have not actually passed in any data from KV. To do this, you will need to make a few changes. First, your `html` variable will change to a function. The function will take in a `todos` argument, which will populate the `window.todos` variable in the above code sample:

```js null {1,10}
const html = (todos) => `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <div id="todos"></div>
    <script>
      window.todos = ${todos || []};
      var todoContainer = document.querySelector("#todos");
      window.todos.forEach((todo) => {
        var el = document.createElement("div");
        el.textContent = todo.name;
        todoContainer.appendChild(el);
      });
    </script>
  </body>
</html>
`;
```

To create todos from the page, add an input and a button, along with a `createTodo` function that appends a new todo to `window.todos` and persists the updated list by sending it back to your Worker in a `PUT` request:

```js
const html = (todos) => `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <input type="text" name="name" placeholder="A new todo" />
    <button id="create">Create</button>
    <div id="todos"></div>
    <script>
      window.todos = ${todos || []};
      var todoContainer = document.querySelector("#todos");
      window.todos.forEach((todo) => {
        var el = document.createElement("div");
        el.textContent = todo.name;
        todoContainer.appendChild(el);
      });
      var updateTodos = function () {
        fetch("/", {
          method: "PUT",
          body: JSON.stringify({ todos: window.todos }),
        });
      };
      var createTodo = function () {
        var input = document.querySelector("input[name=name]");
        if (input.value.length) {
          window.todos = [].concat(window.todos, {
            id: window.todos.length + 1,
            name: input.value,
            completed: false,
          });
          input.value = "";
          updateTodos();
        }
      };
      document.querySelector("#create").addEventListener("click", createTodo);
    </script>
  </body>
</html>
`;
```

This code updates the cache. Remember that the KV cache is eventually consistent — even if you were to update your Worker to read from the cache and return it, you have no guarantees it will actually be up to date. Instead, update the list of todos locally, by taking your original code for rendering the todo list, making it a reusable function called `populateTodos`, and calling it when the page loads and when the cache request has finished:

```js
const html = (todos) => `<!DOCTYPE html>
<html>
  <head>
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
    <input type="text" name="name" placeholder="A new todo" />
    <button id="create">Create</button>
    <div id="todos"></div>
    <script>
      window.todos = ${todos || []};
      var populateTodos = function () {
        var todoContainer = document.querySelector("#todos");
        todoContainer.innerHTML = null;
        window.todos.forEach((todo) => {
          var el = document.createElement("div");
          el.textContent = todo.name;
          todoContainer.appendChild(el);
        });
      };
      populateTodos();
      var updateTodos = function () {
        fetch("/", {
          method: "PUT",
          body: JSON.stringify({ todos: window.todos }),
        }).then(populateTodos);
      };
      var createTodo = function () {
        var input = document.querySelector("input[name=name]");
        if (input.value.length) {
          window.todos = [].concat(window.todos, {
            id: window.todos.length + 1,
            name: input.value,
            completed: false,
          });
          input.value = "";
          updateTodos();
        }
      };
      document.querySelector("#create").addEventListener("click", createTodo);
    </script>
  </body>
</html>
`;
```

With the client-side code in place, deploying the new version of the function should put all these pieces together. The result is an actual dynamic todo list.

## 5. Update todos from the application UI

For the final piece of your todo list, you need to be able to update todos — specifically, marking them as completed. Luckily, a great deal of the infrastructure for this work is already in place. You can update the todo list data in the cache, as evidenced by your `createTodo` function. Performing updates on a todo is more of a client-side task than a Worker-side one. To start, the `populateTodos` function can be updated to generate a `div` for each todo. In addition, move the name of the todo into a child element of that `div`:

```js
var populateTodos = function () {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach((todo) => {
    var el = document.createElement("div");
    var name = document.createElement("span");
    name.textContent = todo.name;
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
};
```

You have designed the client-side part of this code to handle an array of todos and render a list of HTML elements.
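The reusable-renderer idea above can be exercised outside of a browser with a throwaway DOM stub (the stub below is invented just for this check; the real code runs against `document` in the page):

```javascript
// A minimal stand-in for the browser DOM: just enough surface area to run
// the rendering logic. querySelector always returns the same root node.
const makeStubDocument = () => {
  const createElement = () => ({
    children: [],
    textContent: "",
    appendChild(child) {
      this.children.push(child);
    },
  });
  const root = createElement();
  return { root, querySelector: () => root, createElement };
};

const document = makeStubDocument();
const window = { todos: [{ id: 1, name: "Finish the blog post", completed: false }] };

// The reusable renderer: clear the container, then append one div per todo.
const populateTodos = () => {
  const todoContainer = document.querySelector("#todos");
  todoContainer.children.length = 0;
  window.todos.forEach((todo) => {
    const el = document.createElement("div");
    el.textContent = todo.name;
    todoContainer.appendChild(el);
  });
};

populateTodos();
console.log(document.root.children.map((el) => el.textContent));
// → [ 'Finish the blog post' ]
```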
There are a number of things that you have been doing that you have not quite had a use for yet — specifically, the inclusion of IDs and updating the todo's completed state. These things work well together to actually support updating todos in the application UI.

To start, it would be useful to attach the ID of each todo in the HTML. By doing this, you can then refer to the element later in order to correspond it to the todo in the JavaScript part of your code. Data attributes and the corresponding `dataset` method in JavaScript are a perfect way to implement this. When you generate your `div` element for each todo, you can attach a data attribute called `todo` to each `div`:

```js
var populateTodos = function () {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach((todo) => {
    var el = document.createElement("div");
    el.dataset.todo = todo.id;
    var name = document.createElement("span");
    name.textContent = todo.name;
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
};
```

Inside your HTML, each `div` for a todo now has an attached data attribute, which looks like:

```html
<div data-todo="1">
  <span>Finish the Cloudflare Workers blog post</span>
</div>
```

You can now generate a checkbox for each todo element. This checkbox will default to unchecked for new todos but you can mark it as checked as the element is rendered in the window:

```js
var populateTodos = function () {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach((todo) => {
    var el = document.createElement("div");
    el.dataset.todo = todo.id;
    var name = document.createElement("span");
    name.textContent = todo.name;
    var checkbox = document.createElement("input");
    checkbox.type = "checkbox";
    checkbox.checked = todo.completed;
    el.appendChild(checkbox);
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
};
```

The checkbox is set up to correctly reflect the value of completed on each todo but it does not yet update when you actually check the box. To do this, attach the `completeTodo` function as an event listener on the `click` event. Inside the function, inspect the checkbox element, find its parent (the todo `div`), and use its `todo` data attribute to find the corresponding todo in the data array. You can toggle the completed status, update its properties, and rerender the UI:

```js
var completeTodo = function (evt) {
  var checkbox = evt.target;
  var todoElement = checkbox.parentNode;
  var newTodoSet = [].concat(window.todos);
  var todo = newTodoSet.find((t) => t.id == todoElement.dataset.todo);
  todo.completed = !todo.completed;
  window.todos = newTodoSet;
  updateTodos();
};

var populateTodos = function () {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach((todo) => {
    var el = document.createElement("div");
    el.dataset.todo = todo.id;
    var name = document.createElement("span");
    name.textContent = todo.name;
    var checkbox = document.createElement("input");
    checkbox.type = "checkbox";
    checkbox.checked = todo.completed;
    checkbox.addEventListener("click", completeTodo);
    el.appendChild(checkbox);
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
};
```

The final result of your code is a system that checks the `todos` variable, updates your Cloudflare KV cache with that value, and then does a re-render of the UI based on the data it has locally.

## 6. Conclusion and next steps

By completing this tutorial, you have built a static HTML, CSS, and JavaScript application that is transparently powered by Workers and Workers KV, which take full advantage of Cloudflare's global network.

If you would like to keep improving on your project, you can implement a better design (you can refer to a live version available at [todos.signalnerve.workers.dev](https://todos.signalnerve.workers.dev/)), or make additional improvements to security and speed.

You may also want to add user-specific caching. Right now, the cache key is always `data` — this means that any visitor to the site will share the same todo list with other visitors. Within your Worker, you could use values from the client request to create and maintain user-specific lists.
For example, you may generate a cache key based on the requesting IP:

```js
export default {
  async fetch(request, env, ctx) {
    const defaultData = {
      todos: [
        {
          id: 1,
          name: "Finish the Cloudflare Workers blog post",
          completed: false,
        },
      ],
    };
    const setCache = (key, data) => env.TODOS.put(key, data);
    const getCache = (key) => env.TODOS.get(key);

    const ip = request.headers.get("CF-Connecting-IP");
    const myKey = `data-${ip}`;

    if (request.method === "PUT") {
      const body = await request.text();
      try {
        JSON.parse(body);
        await setCache(myKey, body);
        return new Response(body, { status: 200 });
      } catch (err) {
        return new Response(err, { status: 500 });
      }
    }

    let data;
    const cache = await getCache(myKey);
    if (!cache) {
      await setCache(myKey, JSON.stringify(defaultData));
      data = defaultData;
    } else {
      data = JSON.parse(cache);
    }

    const body = html(JSON.stringify(data.todos).replace(/</g, "\\u003c"));
    return new Response(body, {
      headers: { "Content-Type": "text/html" },
    });
  },
};
```

The `replace` call escapes `<` characters in the serialized todos so that stored content cannot break out of the inline `<script>` tag when it is interpolated into the page.

---

The above options will create the "Hello World" TypeScript project.

Move into your newly created directory:

```sh
cd finetune-chatgpt-model
```

## 2. Upload a fine-tune document to R2

Next, upload the fine-tune document to R2. R2 is a key-value store that allows you to store and retrieve files from within your Workers application.

You will use [Wrangler](/workers/wrangler) to create a new R2 bucket. To create a new R2 bucket, use the [`wrangler r2 bucket create`](/workers/wrangler/commands/#r2-bucket-create) command. Note that you must be logged in with your Cloudflare account. If not logged in via Wrangler, use the [`wrangler login`](/workers/wrangler/commands/#login) command.

```sh
npx wrangler r2 bucket create <BUCKET_NAME>
```

Replace `<BUCKET_NAME>` with your desired bucket name. Note that bucket names must be lowercase and can only contain dashes.

Next, upload a file using the [`wrangler r2 object put`](/workers/wrangler/commands/#r2-object-put) command.

```sh
npx wrangler r2 object put <PATH> -f <FILE_NAME>
```

`<PATH>` is the combined bucket and file path of the file you want to upload, for example, `fine-tune-ai/finetune.jsonl`, where `fine-tune-ai` is the bucket name.
Replace `<FILE_NAME>` with the local filename of your fine-tune document.

## 3. Bind your bucket to the Worker

A binding is how your Worker interacts with external resources such as the R2 bucket. To bind the R2 bucket to your Worker, add the following to your Wrangler file. Update the binding property to a valid JavaScript variable identifier. Replace `<BUCKET_NAME>` with the name of the bucket you created in [step 2](#2-upload-a-fine-tune-document-to-r2):

```toml
[[r2_buckets]]
binding = 'MY_BUCKET' # <~ valid JavaScript variable name
bucket_name = '<BUCKET_NAME>'
```

## 4. Initialize your Worker application

You will use [Hono](https://hono.dev/), a lightweight framework for building Cloudflare Workers applications. Hono provides an interface for defining routes and middleware functions.

Inside your project directory, run the following command to install Hono:

```sh
npm install hono
```

You also need to install the [OpenAI Node API library](https://www.npmjs.com/package/openai). This library provides convenient access to the OpenAI REST API in a Node.js project. To install the library, execute the following command:

```sh
npm install openai
```

Next, open the `src/index.ts` file and replace the default code with the below code. Replace `MY_BUCKET` with the binding name you set in the Wrangler file.

```typescript
import { Context, Hono } from "hono";
import OpenAI from "openai";

type Bindings = {
  MY_BUCKET: R2Bucket;
  OPENAI_API_KEY: string;
};

type Variables = {
  openai: OpenAI;
};

const app = new Hono<{ Bindings: Bindings; Variables: Variables }>();

app.use("*", async (c, next) => {
  const openai = new OpenAI({
    apiKey: c.env.OPENAI_API_KEY,
  });
  c.set("openai", openai);
  await next();
});

app.onError((err, c) => {
  return c.text(err.message, 500);
});

export default app;
```

In the above code, you first import the required packages and define the types. Then, you initialize `app` as a new Hono instance.
Using the `use` middleware function, you add the OpenAI API client to the context of all routes. This middleware function allows you to access the client from within any route handler. `onError()` defines an error handler that returns any errors as a plain-text response.

## 5. Read R2 files and upload them to OpenAI

In this section, you will define the route and function responsible for handling file uploads. In `createFile`, your Worker reads the file from R2 and converts it to a `File` object. Your Worker then uses the OpenAI API to upload the file and return the response.

The `GET /files` route listens for `GET` requests with a query parameter `file`, representing a filename of an uploaded fine-tune document in R2. The function uses the `createFile` function to manage the file upload process. Replace `MY_BUCKET` with the binding name you set in the Wrangler file.

```typescript
// New import added at beginning of file
import { toFile } from "openai/uploads";

const createFile = async (c: Context, r2Object: R2ObjectBody) => {
  const openai: OpenAI = c.get("openai");

  const blob = await r2Object.blob();
  const file = await toFile(blob, r2Object.key);

  const uploadedFile = await openai.files.create({
    file,
    purpose: "fine-tune",
  });

  return uploadedFile;
};

app.get("/files", async (c) => {
  const fileQueryParam = c.req.query("file");
  if (!fileQueryParam) return c.text("Missing file query param", 400);

  const file = await c.env.MY_BUCKET.get(fileQueryParam);
  if (!file) return c.text("Couldn't find file", 400);

  const uploadedFile = await createFile(c, file);
  return c.json(uploadedFile);
});
```

## 6. Create fine-tuned models

This section includes the `GET /models` route and the `createModel` function. The function `createModel` takes care of specifying the details and initiating the fine-tuning process with OpenAI. The route handles incoming requests for creating a new fine-tuned model.
```typescript
const createModel = async (c: Context, fileId: string) => {
  const openai: OpenAI = c.get("openai");

  const body = {
    training_file: fileId,
    model: "gpt-4o-mini",
  };

  return openai.fineTuning.jobs.create(body);
};

app.get("/models", async (c) => {
  const fileId = c.req.query("file_id");
  if (!fileId) return c.text("Missing file ID query param", 400);

  const model = await createModel(c, fileId);
  return c.json(model);
});
```

## 7. List all fine-tune jobs

This section describes the `GET /jobs` route and the corresponding `getJobs` function. The function interacts with OpenAI's API to fetch a list of all fine-tuning jobs. The route provides an interface for retrieving this information.

```typescript
const getJobs = async (c: Context) => {
  const openai: OpenAI = c.get("openai");
  const resp = await openai.fineTuning.jobs.list();
  return resp.data;
};

app.get("/jobs", async (c) => {
  const jobs = await getJobs(c);
  return c.json(jobs);
});
```

## 8. Deploy your application

After you have created your Worker application and added the required functions, deploy the application. Before you deploy, you must set the `OPENAI_API_KEY` [secret](/workers/configuration/secrets/) for your application. Do this by running the [`wrangler secret put`](/workers/wrangler/commands/#put) command:

```sh
npx wrangler secret put OPENAI_API_KEY
```

To deploy your Worker application to the Cloudflare global network:

1. Make sure you are in your Worker project's directory, then run the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command:

   ```sh
   npx wrangler deploy
   ```

2. Wrangler will package and upload your code.
3. After your application is deployed, Wrangler will provide you with your Worker's URL.

## 9. View the fine-tune job status and use the model

To use your application, create a new fine-tune job by making a request to the `/files` route with a `file` query param matching the filename you uploaded earlier:

```sh
curl https://your-worker-url.com/files?file=finetune.jsonl
```

When the file is uploaded, issue another request to `/models`, passing the `file_id` query parameter. This should match the `id` returned as JSON from the `/files` route:

```sh
curl https://your-worker-url.com/models?file_id=file-abc123
```

Finally, visit `/jobs` to see the status of your fine-tune jobs in OpenAI. Once the fine-tune job has completed, you can see the `fine_tuned_model` value, indicating a fine-tuned model has been created.

![Jobs](~/assets/images/workers/tutorials/finetune/finetune-jobs.png)

Visit the [OpenAI Playground](https://platform.openai.com/playground) in order to use your fine-tuned model. Select your fine-tuned model from the top-left dropdown of the interface.

![Demo](~/assets/images/workers/tutorials/finetune/finetune-example.png)

Use it in any API requests you make to OpenAI's chat completions endpoints. For instance, in the below code example:

```javascript
openai.chat.completions.create({
  messages: [{ role: "system", content: "You are a helpful assistant." }],
  model: "ft:gpt-4o-mini:my-org:custom_suffix:id",
});
```

## Next steps

To build more with Workers, refer to [Tutorials](/workers/tutorials).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with other developers and the Cloudflare team.
---

# Connect to and query your Turso database using Workers

URL: https://developers.cloudflare.com/workers/tutorials/connect-to-turso-using-workers/

import { Render, PackageManagers, WranglerConfig } from "~/components";

This tutorial will guide you on how to build globally distributed applications with Cloudflare Workers, and [Turso](https://chiselstrike.com/), an edge-hosted distributed database based on libSQL. By using Workers and Turso, you can create applications that are close to your end users without having to maintain or operate infrastructure in tens or hundreds of regions.

:::note
For a more seamless experience, refer to the [Turso Database Integration guide](/workers/databases/native-integrations/turso/).

The Turso Database Integration will connect your Worker to a Turso database by getting the right configuration from Turso and adding it as [secrets](/workers/configuration/secrets/) to your Worker.
:::

## Prerequisites

Before continuing with this tutorial, you should have:

- Successfully [created your first Cloudflare Worker](/workers/get-started/guide/) and/or have deployed a Cloudflare Worker before.
- Installed [Wrangler](/workers/wrangler/install-and-update/), a command-line tool for building Cloudflare Workers.
- A [GitHub account](https://github.com/), required for authenticating to Turso.
- A basic familiarity with installing and using command-line interface (CLI) applications.

## Install the Turso CLI

You will need the Turso CLI to create and populate a database.
Run either of the following two commands in your terminal to install the Turso CLI:

```sh
# On macOS or Linux with Homebrew
brew install chiselstrike/tap/turso

# Manual scripted installation
curl -sSfL | bash
```

After you have installed the Turso CLI, verify that the CLI is in your shell path:

```sh
turso --version
```

```sh output
# This should output your current Turso CLI version (your installed version may be higher):
turso version v0.51.0
```

## Create and populate a database

Before you create your first Turso database, you need to log in to the CLI using your GitHub account by running:

```sh
turso auth login
```

```sh output
Waiting for authentication...
✔ Success! Logged in as
```

`turso auth login` will open a browser window and ask you to sign into your GitHub account, if you are not already logged in. The first time you do this, you will need to give the Turso application permission to use your account. Select **Approve** to grant Turso the permissions needed.

After you have authenticated, you can create a database by running `turso db create <DATABASE_NAME>`. Turso will automatically choose a location closest to you.

```sh
turso db create my-db
```

```sh output
# Example:
[===> ] Creating database my-db in Los Angeles, California (US) (lax)

# Once succeeded:
Created database my-db in Los Angeles, California (US) (lax) in 34 seconds.
```

With your first database created, you can now connect to it directly and execute SQL against it:

```sh
turso db shell my-db
```

To get started with your database, create and define a schema for your first table. In this example, you will create an `example_users` table with one column: `email` (of type `text`), and then populate it with one email address.

In the shell you just opened, paste in the following SQL:

```sql
create table example_users (email text);
insert into example_users values ("foo@bar.com");
```

If the SQL statements succeeded, there will be no output.
Note that the trailing semi-colons (`;`) are necessary to terminate each SQL statement. Type `.quit` to exit the shell. ## Use Wrangler to create a Workers project The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to create, locally develop, and deploy your Workers projects. To create a new Workers project (named `worker-turso-ts`), run the following: To start developing your Worker, `cd` into your new project directory: ```sh cd worker-turso-ts ``` In your project directory, you now have the following files: - `wrangler.json` / `wrangler.toml`: [Wrangler configuration file](/workers/wrangler/configuration/) - `src/index.ts`: A minimal Hello World Worker written in TypeScript - `package.json`: A minimal Node dependencies configuration file. - `tsconfig.json`: TypeScript configuration that includes Workers types. Only generated if indicated. For this tutorial, only the [Wrangler configuration file](/workers/wrangler/configuration/) and `src/index.ts` file are relevant. You will not need to edit the other files, and they should be left as is. ## Configure your Worker for your Turso database The Turso client library requires two pieces of information to make a connection: 1. `LIBSQL_DB_URL` - The connection string for your Turso database. 2. `LIBSQL_DB_AUTH_TOKEN` - The authentication token for your Turso database. This should be kept a secret, and not committed to source code. 
To get the URL for your database, run the following Turso CLI command, and copy the result: ```sh turso db show my-db --url ``` ```sh output libsql://my-db-.turso.io ``` Open the [Wrangler configuration file](/workers/wrangler/configuration/) in your editor and at the bottom of the file, create a new `[vars]` section representing the [environment variables](/workers/configuration/environment-variables/) for your project: ```toml [vars] LIBSQL_DB_URL = "paste-your-url-here" ``` Save the changes to the [Wrangler configuration file](/workers/wrangler/configuration/). Next, create a long-lived authentication token for your Worker to use when connecting to your database. Run the following Turso CLI command, and copy the output to your clipboard: ```sh turso db tokens create my-db -e none # Will output a long text string (an encoded JSON Web Token) ``` To keep this token secret: 1. You will create a `.dev.vars` file for local development. Do not commit this file to source control. You should add `.dev.vars` to your `.gitignore` file if you are using Git. 2. You will also create a [secret](/workers/configuration/secrets/) to keep your authentication token confidential. First, create a new file called `.dev.vars` with the following structure. Paste your authentication token in the quotation marks: ``` LIBSQL_DB_AUTH_TOKEN="" ``` Save your changes to `.dev.vars`. Next, store the authentication token as a secret for your production Worker to reference. Run the following `wrangler secret` command to create a Secret with your token: ```sh # Ensure you specify the secret name exactly: your Worker will need to reference it later. npx wrangler secret put LIBSQL_DB_AUTH_TOKEN ``` ```sh output ? Enter a secret value: › ``` Press **Enter** to save the token as a secret. Both `LIBSQL_DB_URL` and `LIBSQL_DB_AUTH_TOKEN` will be available in your Worker's environment at runtime.
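Both values arrive on the `env` object passed to your Worker's `fetch` handler. As a quick sanity check of that flow, here is a minimal sketch in plain JavaScript; the `env` object and its values below are placeholders simulating what Wrangler injects from `[vars]`, `.dev.vars`, and secrets:

```javascript
// Simulate the env object Wrangler builds from [vars], .dev.vars, and
// secrets (placeholder values), and fail fast if anything is missing —
// the same guard the Worker code in the next section performs.
function readLibsqlConfig(env) {
  const url = env.LIBSQL_DB_URL?.trim();
  const authToken = env.LIBSQL_DB_AUTH_TOKEN?.trim();
  if (!url) throw new Error("LIBSQL_DB_URL env var is not defined");
  if (!authToken) throw new Error("LIBSQL_DB_AUTH_TOKEN env var is not defined");
  return { url, authToken };
}

const config = readLibsqlConfig({
  LIBSQL_DB_URL: " libsql://my-db-example.turso.io ",
  LIBSQL_DB_AUTH_TOKEN: "example-token",
});
// config.url is trimmed and ready to hand to the libSQL client.
```

Failing fast on missing configuration surfaces a misconfigured secret immediately instead of as an opaque connection error.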
## Install extra libraries Install the Turso client library and a router: ```sh npm install @libsql/client itty-router ``` The `@libsql/client` library allows you to query a Turso database. The `itty-router` library is a lightweight router you will use to help handle incoming requests to the Worker. ## Write your Worker You will now write a Worker that will: 1. Handle an HTTP request. 2. Route it to a specific handler to either list all users in your database or add a new user. 3. Return the results and/or success. Open `src/index.ts` and delete the existing template. Copy the below code exactly as is and paste it into the file: ```ts import { Client as LibsqlClient, createClient } from "@libsql/client/web"; import { Router, RouterType } from "itty-router"; export interface Env { // The environment variable containing the URL for your Turso database. LIBSQL_DB_URL?: string; // The Secret that contains the authentication token for your Turso database. LIBSQL_DB_AUTH_TOKEN?: string; // These objects are created before first use, then stashed here // for future use router?: RouterType; } export default { async fetch(request, env): Promise<Response> { if (env.router === undefined) { env.router = buildRouter(env); } return env.router.fetch(request); }, } satisfies ExportedHandler<Env>; function buildLibsqlClient(env: Env): LibsqlClient { const url = env.LIBSQL_DB_URL?.trim(); if (url === undefined) { throw new Error("LIBSQL_DB_URL env var is not defined"); } const authToken = env.LIBSQL_DB_AUTH_TOKEN?.trim(); if (authToken === undefined) { throw new Error("LIBSQL_DB_AUTH_TOKEN env var is not defined"); } return createClient({ url, authToken }); } function buildRouter(env: Env): RouterType { const router = Router(); router.get("/users", async () => { const client = buildLibsqlClient(env); const rs = await client.execute("select * from example_users"); return Response.json(rs); }); router.get("/add-user", async (request) => { const client = buildLibsqlClient(env); const email =
request.query.email; if (email === undefined) { return new Response("Missing email", { status: 400 }); } if (typeof email !== "string") { return new Response("email must be a single string", { status: 400 }); } if (email.length === 0) { return new Response("email length must be > 0", { status: 400 }); } try { await client.execute({ sql: "insert into example_users values (?)", args: [email], }); } catch (e) { console.error(e); return new Response("database insert failed", { status: 500 }); } return new Response("Added"); }); router.all("*", () => new Response("Not Found.", { status: 404 })); return router; } ``` Save your `src/index.ts` file with your changes. Note: - The libSQL client library must be imported from `@libsql/client/web` exactly as shown when working with Cloudflare Workers. The non-web import will not work in the Workers environment. - The `Env` interface contains the environment variable and secret you defined earlier. - The `Env` interface also caches the libSQL client object and router, which are created on the first request to the Worker. - The `/users` route fetches all rows from the `example_users` table you created in the Turso shell. It serializes the `ResultSet` object as JSON directly to the caller. - The `/add-user` route inserts a new row using a value provided in the query string. With your environment configured and your code ready, you will now test your Worker locally before you deploy. ## Run the Worker locally with Wrangler To run a local instance of your Worker (entirely on your machine), run the following command: ```sh npx wrangler dev ``` You should see output similar to the following: ```txt Your worker has access to the following bindings: - Vars: - LIBSQL_DB_URL: "your-url" ⎔ Starting a local server...
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ [b] open a browser, [d] open Devtools, [l] turn off local mode, [c] clear console, [x] to exit │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ Debugger listening on ws://127.0.0.1:61918/1064babd-bc9d-4bed-b171-b35dab3b7680 For help, see: https://nodejs.org/en/docs/inspector Debugger attached. [mf:inf] Worker reloaded! (40.25KiB) [mf:inf] Listening on 0.0.0.0:8787 [mf:inf] - http://127.0.0.1:8787 [mf:inf] - http://192.168.1.136:8787 [mf:inf] Updated `Request.cf` object cache! ``` The localhost address — the one with `127.0.0.1` in it — is a web server running locally on your machine. Connect to it and validate your Worker returns the email address you inserted when you created your `example_users` table by visiting the `/users` route in your browser: [http://127.0.0.1:8787/users](http://127.0.0.1:8787/users). You should see JSON similar to the following containing the data from the `example_users` table: ```json { "columns": ["email"], "rows": [{ "email": "foo@bar.com" }], "rowsAffected": 0 } ``` :::caution If you see an error instead of a list of users, double check that: - You have entered the correct value for your `LIBSQL_DB_URL` in the [Wrangler configuration file](/workers/wrangler/configuration/). - You have set a secret called `LIBSQL_DB_AUTH_TOKEN` with your database authentication token. Both of these need to be present and match the variable names in your Worker's code. ::: Test the `/add-user` route and pass it an email address to insert: [http://127.0.0.1:8787/add-user?email=test@test.com](http://127.0.0.1:8787/add-user?email=test@test.com) You should see the text `Added`.
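The serialized `ResultSet` returned by the `/users` route is plain JSON, so callers can consume it directly. A minimal sketch using the shape shown above (the data mirrors the row inserted earlier in the Turso shell):

```javascript
// The JSON shape the /users route returns, as shown above.
const resultSet = {
  columns: ["email"],
  rows: [{ email: "foo@bar.com" }],
  rowsAffected: 0,
};

// Pull the email column out of each row.
const emails = resultSet.rows.map((row) => row.email);
// → ["foo@bar.com"]
```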
If you load the first URL with the `/users` route again ([http://127.0.0.1:8787/users](http://127.0.0.1:8787/users)), it will show the newly added row. You can repeat this as many times as you like. Note that due to its design, your application will not stop you from adding duplicate email addresses. Quit Wrangler by typing `x` into the shell where it was started. ## Deploy to Cloudflare After you have validated that your Worker can connect to your Turso database, deploy your Worker. Run the following Wrangler command to deploy your Worker to the Cloudflare global network: ```sh npx wrangler deploy ``` The first time you run this command, it will launch a browser, ask you to sign in with your Cloudflare account, and grant permissions to Wrangler. The `deploy` command will output the following: ```txt Your worker has access to the following bindings: - Vars: - LIBSQL_DB_URL: "your-url" ... Published worker-turso-ts (0.19 sec) https://worker-turso-ts.<YOUR_SUBDOMAIN>.workers.dev Current Deployment ID: f9e6b48f-5aac-40bd-8f44-8a40be2212ff ``` You have now deployed a Worker that can connect to your Turso database, query it, and insert new data. ## Optional: Clean up To clean up the resources you created as part of this tutorial: - If you do not want to keep this Worker, run `npx wrangler delete worker-turso-ts` to delete the deployed Worker. - You can also delete your Turso database via `turso db destroy my-db`. ## Related resources - Find the [complete project source code on GitHub](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-turso-ts/). - Understand how to [debug your Cloudflare Worker](/workers/observability/). - Join the [Cloudflare Developer Discord](https://discord.cloudflare.com). - Join the [ChiselStrike (Turso) Discord](https://discord.com/invite/4B5D7hYwub).
--- # Deploy a real-time chat application URL: https://developers.cloudflare.com/workers/tutorials/deploy-a-realtime-chat-app/ import { Render, WranglerConfig } from "~/components"; In this tutorial, you will deploy a serverless, real-time chat application that runs using [Durable Objects](/durable-objects/). This chat application uses a Durable Object to control each chat room. Users connect to the Object using WebSockets. Messages from one user are broadcast to all the other users. The chat history is also stored in durable storage. Real-time messages are relayed directly from one user to others without going through the storage layer. ## Clone the chat application repository Open your terminal and clone the [workers-chat-demo](https://github.com/cloudflare/workers-chat-demo) repository: ```sh git clone https://github.com/cloudflare/workers-chat-demo.git ``` ## Authenticate Wrangler After you have cloned the repository, authenticate Wrangler by running: ```sh npx wrangler login ``` ## Deploy your project When you are ready to deploy your application, run: ```sh npx wrangler deploy ``` Your application will be deployed to your `*.workers.dev` subdomain. To deploy your application to a custom domain within the Cloudflare dashboard, go to your Worker > **Triggers** > **Add Custom Domain**. To deploy your application to a custom domain using Wrangler, open your project's [Wrangler configuration file](/workers/wrangler/configuration/). To configure a route in your Wrangler configuration file, add the following to your environment: ```toml routes = [ { pattern = "example.com/about", zone_id = "<YOUR_ZONE_ID>" } ] ``` If you have specified your zone ID in the environment of your Wrangler configuration file, you will not need to write it again in object form.
To configure a subdomain in your Wrangler configuration file, add the following to your environment: ```toml routes = [ { pattern = "subdomain.example.com", custom_domain = true } ] ``` To test your live application: 1. Open your `edge-chat-demo.<YOUR_SUBDOMAIN>.workers.dev` subdomain. Your subdomain can be found in the [Cloudflare dashboard](https://dash.cloudflare.com) > **Workers & Pages** > your Worker > **Triggers** > **Routes** > select the `edge-chat-demo.<YOUR_SUBDOMAIN>.workers.dev` route. 2. Enter a name in the **your name** field. 3. Choose whether to enter a public room or create a private room. 4. Send the link to other participants. You will be able to view room participants on the right side of the screen. ## Uninstall your application To uninstall your chat application, modify your Wrangler configuration file to remove the `durable_objects` bindings and add a `deleted_classes` migration: ```toml [durable_objects] bindings = [ ] # Indicate that you want the ChatRoom and RateLimiter classes to be callable as Durable Objects. [[migrations]] tag = "v1" # Should be unique for each entry new_sqlite_classes = ["ChatRoom", "RateLimiter"] [[migrations]] tag = "v2" deleted_classes = ["ChatRoom", "RateLimiter"] ``` Then run `npx wrangler deploy`. To delete your Worker: 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account. 2. In Account Home, select **Workers & Pages**. 3. In **Overview**, select your Worker. 4. Select **Manage Service** > **Delete**. For complete instructions on setup and deletion, refer to the `README.md` in your cloned repository. By completing this tutorial, you have deployed a real-time chat application with Durable Objects and Cloudflare Workers. ## Related resources Continue building with other Cloudflare Workers tutorials below.
- [Build a Slackbot](/workers/tutorials/build-a-slackbot/) - [Create SMS notifications for your GitHub repository using Twilio](/workers/tutorials/github-sms-notifications-using-twilio/) - [Build a QR code generator](/workers/tutorials/build-a-qr-code-generator/) --- # GitHub SMS notifications using Twilio URL: https://developers.cloudflare.com/workers/tutorials/github-sms-notifications-using-twilio/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will learn to build an SMS notification system on Workers to receive updates on a GitHub repository. Your Worker will send you a text update using Twilio when there is new activity on your repository. You will learn how to: - Build webhooks using Workers. - Integrate Workers with GitHub and Twilio. - Use Worker secrets with Wrangler. ![Animated gif of receiving a text message on your phone after pushing changes to a repository](/images/workers/tutorials/github-sms/video-of-receiving-a-text-after-pushing-to-a-repo.gif) --- ## Create a Worker project Start by using `npm create cloudflare@latest` to create a Worker project in the command line: Make note of the URL that your application was deployed to. You will be using it when you configure your GitHub webhook. ```sh cd github-twilio-notifications ``` Inside of your new `github-twilio-notifications` directory, `src/index.js` represents the entry point to your Cloudflare Workers application. You will configure this file for most of the tutorial. You will also need a GitHub account and a repository for this tutorial. If you do not have these set up, [create a new GitHub account](https://github.com/join) and [create a new repository](https://docs.github.com/en/get-started/quickstart/create-a-repo) to continue with this tutorial. First, create a webhook for your repository to post updates to your Worker.
Inside of your Worker, you will then parse the updates. Finally, you will send a `POST` request to Twilio to send a text message to you. You can reference the finished code at this [GitHub repository](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio). --- ## Configure GitHub To start, configure a GitHub webhook to post to your Worker when there is an update to the repository: 1. Go to your GitHub repository's **Settings** > **Webhooks** > **Add webhook**. 2. Set the Payload URL to the `/webhook` path on the Worker URL that you made note of when your application was first deployed. 3. In the **Content type** dropdown, select _application/json_. 4. In the **Secret** field, input a secret key of your choice. 5. In **Which events would you like to trigger this webhook?**, select **Let me select individual events**. Select the events you want to get notifications for (such as **Pull requests**, **Pushes**, and **Branch or tag creation**). 6. Select **Add webhook** to finish configuration. ![Following instructions to set up your webhook in the GitHub webhooks settings dashboard](~/assets/images/workers/tutorials/github-sms/github-config-screenshot.png) --- ## Parsing the response With your local environment set up, parse the repository update with your Worker. Initially, your generated `index.js` should look like this: ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` Use the `request.method` property of [`Request`](/workers/runtime-apis/request/) to check if the request coming to your application is a `POST` request, and send an error response if the request is not a `POST` request. ```js export default { async fetch(request, env, ctx) { if (request.method !== "POST") { return new Response("Please send a POST request!"); } }, }; ``` Next, validate that the request is sent with the right secret key. 
GitHub attaches a hash signature for [each payload using the secret key](https://docs.github.com/en/developers/webhooks-and-events/webhooks/securing-your-webhooks). Use a helper function called `checkSignature` on the request to ensure the hash is correct. Then, you can access data from the webhook by parsing the request as JSON. ```js async fetch(request, env, ctx) { if(request.method !== 'POST') { return new Response('Please send a POST request!'); } try { const rawBody = await request.text(); if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) { return new Response("Wrong password, try again", {status: 403}); } } catch (e) { return new Response(`Error: ${e}`); } }, ``` The `checkSignature` function will use the Node.js crypto library to hash the received payload with your known secret key to ensure it matches the request hash. GitHub uses an HMAC hexdigest to compute the hash in the SHA-256 format. You will place this function at the top of your `index.js` file, before your export. ```js import { createHmac, timingSafeEqual } from "node:crypto"; import { Buffer } from "node:buffer"; function checkSignature(text, headers, githubSecretToken) { const hmac = createHmac("sha256", githubSecretToken); hmac.update(text); const expectedSignature = hmac.digest("hex"); const actualSignature = headers.get("x-hub-signature-256"); const trusted = Buffer.from(`sha256=${expectedSignature}`, "ascii"); const untrusted = Buffer.from(actualSignature, "ascii"); return ( trusted.byteLength == untrusted.byteLength && timingSafeEqual(trusted, untrusted) ); } ``` To make this work, you need to use [`wrangler secret put`](/workers/wrangler/commands/#put) to set your `GITHUB_SECRET_TOKEN`. 
This token is the secret you picked earlier when configuring your GitHub webhook: ```sh npx wrangler secret put GITHUB_SECRET_TOKEN ``` Add the `nodejs_compat` flag to your Wrangler file: ```toml compatibility_flags = ["nodejs_compat"] ``` --- ## Sending a text with Twilio You will now send yourself a text message about your repository activity using Twilio. You need a Twilio account and a phone number that can receive text messages. [Refer to the Twilio guide to get set up](https://www.twilio.com/messaging/sms). (If you are new to Twilio, they have [an interactive game](https://www.twilio.com/quest) where you can learn how to use their platform and get some free credits for beginners to the service.) You can then create a helper function to send text messages by sending a `POST` request to the Twilio API endpoint. [Refer to the Twilio reference](https://www.twilio.com/docs/sms/api/message-resource#create-a-message-resource) to learn more about this endpoint. Create a new function called `sendText()` that will handle making the request to Twilio: ```js async function sendText(accountSid, authToken, message) { const endpoint = `https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Messages.json`; const encoded = new URLSearchParams({ To: "%YOUR_PHONE_NUMBER%", From: "%YOUR_TWILIO_NUMBER%", Body: message, }); const token = btoa(`${accountSid}:${authToken}`); const request = { body: encoded, method: "POST", headers: { Authorization: `Basic ${token}`, "Content-Type": "application/x-www-form-urlencoded", }, }; const response = await fetch(endpoint, request); const result = await response.json(); return Response.json(result); } ``` To make this work, you need to set some secrets to hide your `ACCOUNT_SID` and `AUTH_TOKEN` from the source code. You can set secrets with [`wrangler secret put`](/workers/wrangler/commands/#put) in your command line.
```sh npx wrangler secret put TWILIO_ACCOUNT_SID npx wrangler secret put TWILIO_AUTH_TOKEN ``` Modify your `fetch` handler to send a text message using the `sendText` function you just made. ```js async fetch(request, env, ctx) { if(request.method !== 'POST') { return new Response('Please send a POST request!'); } try { const rawBody = await request.text(); if (!checkSignature(rawBody, request.headers, env.GITHUB_SECRET_TOKEN)) { return new Response('Wrong password, try again', {status: 403}); } const action = request.headers.get('X-GitHub-Event'); const json = JSON.parse(rawBody); const repoName = json.repository.full_name; const senderName = json.sender.login; return await sendText( env.TWILIO_ACCOUNT_SID, env.TWILIO_AUTH_TOKEN, `${senderName} completed ${action} onto your repo ${repoName}` ); } catch (e) { return new Response(`Error: ${e}`); } }; ``` Run the `npx wrangler deploy` command to redeploy your Worker project: ```sh npx wrangler deploy ``` ![Video of receiving a text after pushing to a repo](/images/workers/tutorials/github-sms/video-of-receiving-a-text-after-pushing-to-a-repo.gif) Now, when you make an update (that you configured in the GitHub **Webhooks** settings) to your repository, you will get a text soon after. If you have never used Git before, refer to the [Git Push and Pull Tutorial](https://www.datacamp.com/tutorial/git-push-pull) for pushing to your repository. Reference the finished code [on GitHub](https://github.com/rickyrobinett/workers-sdk/tree/main/templates/examples/github-sms-notifications-using-twilio). By completing this tutorial, you have learned how to build webhooks using Workers, integrate Workers with GitHub and Twilio, and use Worker secrets with Wrangler.
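To exercise the webhook without pushing to GitHub, you can compute the same `x-hub-signature-256` header GitHub would attach, using the HMAC-SHA256 scheme the `checkSignature` function above verifies. A minimal sketch (the secret, payload, and Worker URL are placeholders):

```javascript
import { createHmac } from "node:crypto";

// Compute the signature GitHub would send for a given payload and secret,
// matching the scheme checkSignature verifies in the Worker.
function signPayload(payload, secret) {
  const hmac = createHmac("sha256", secret);
  hmac.update(payload);
  return `sha256=${hmac.digest("hex")}`;
}

const body = JSON.stringify({ action: "opened" });
const signature = signPayload(body, "my-test-secret");
// Then send the request the same way GitHub would, for example:
// fetch("https://your-worker.example.workers.dev/", {
//   method: "POST",
//   headers: { "x-hub-signature-256": signature, "X-GitHub-Event": "ping" },
//   body,
// });
```

A request signed with the same secret you stored in `GITHUB_SECRET_TOKEN` should pass the signature check; any other secret should return the 403 response.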
## Related resources {/* */} - [Build a JAMStack app](/workers/tutorials/build-a-jamstack-app/) - [Build a QR code generator](/workers/tutorials/build-a-qr-code-generator/) --- # Generate YouTube thumbnails with Workers and Cloudflare Image Resizing URL: https://developers.cloudflare.com/workers/tutorials/generate-youtube-thumbnails-with-workers-and-images/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will learn how to programmatically generate a custom YouTube thumbnail using Cloudflare Workers and Cloudflare Image Resizing. You may want to generate a custom YouTube thumbnail to customize the thumbnail's design, calls to action, and images used to encourage more viewers to watch your video. This tutorial will help you understand how to work with [Images](/images/), [Image Resizing](/images/transform-images/), and [Cloudflare Workers](/workers/). To follow this tutorial, make sure you have Node, Cargo, and [Wrangler](/workers/wrangler/install-and-update/) installed on your machine. ## Learning goals In this tutorial, you will learn how to: - Upload images to Cloudflare with the Cloudflare dashboard or API. - Set up a Worker project with Wrangler. - Manipulate images with image transformations in your Worker. ## Upload your image To generate a custom thumbnail image, you first need to upload a background image to Cloudflare Images. This will serve as the image you use for transformations to generate the thumbnails. Cloudflare Images allows you to store, resize, optimize, and deliver images in a fast and secure manner. To get started, upload your images to the Cloudflare dashboard or use the Upload API. ### Upload with the dashboard To upload an image using the Cloudflare dashboard: 1. Log in to the [Cloudflare Dashboard](https://dash.cloudflare.com) and select your account. 2. Select **Images**. 3.
Use **Quick Upload** to either drag and drop an image or click to browse and choose a file from your local files. 4. After the image is uploaded, view it using the generated URL. ### Upload with the API To upload your image with the [Upload via URL](/images/upload-images/upload-url/) API, refer to the example below: ```sh curl --request POST \ --url https://api.cloudflare.com/client/v4/accounts/<ACCOUNT_ID>/images/v1 \ --header 'Authorization: Bearer <API_TOKEN>' \ --form 'url=<PATH_TO_IMAGE>' \ --form 'metadata={"key":"value"}' \ --form 'requireSignedURLs=false' ``` - `ACCOUNT_ID`: The current user's account ID, which can be found in your account settings. - `API_TOKEN`: Needs to be generated with the Images permission scope. - `PATH_TO_IMAGE`: Indicates the URL for the image you want to upload. You will then receive a response similar to this: ```json { "result": { "id": "2cdc28f0-017a-49c4-9ed7-87056c83901", "filename": "image.jpeg", "metadata": { "key": "value" }, "uploaded": "2022-01-31T16:39:28.458Z", "requireSignedURLs": false, "variants": [ "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/public", "https://imagedelivery.net/Vi7wi5KSItxGFsWRG2Us6Q/2cdc28f0-017a-49c4-9ed7-87056c83901/thumbnail" ] }, "success": true, "errors": [], "messages": [] } ``` Now that you have uploaded your image, you will use it as the background image for your video's thumbnail. ## Create a Worker to transform text to image After uploading your image, create a Worker that will enable you to transform text to image. This image can be used as an overlay on the background image you uploaded. Use the [rustwasm-worker-template](https://github.com/cloudflare/workers-sdk/tree/main/templates/worker-rust). You will need the following before you begin: - A recent version of [Rust](https://rustup.rs/).
- Access to the `cargo-generate` subcommand: ```sh cargo install cargo-generate ``` Create a new Worker project using the `worker-rust` template: ```sh cargo generate https://github.com/cloudflare/rustwasm-worker-template ``` You will now make a few changes to the files in your project directory. 1. In the `lib.rs` file, add the following code block: ```rs use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } ``` 2. Update the `Cargo.toml` file in your `worker-to-text` project directory to use [text-to-png](https://github.com/RookAndPawn/text-to-png), a Rust package for rendering text to PNG. Add the package as a dependency by running: ```sh cargo add text-to-png@0.2.0 ``` 3. Import the `text_to_png` library into your `worker-to-text` project's `lib.rs` file. ```rs null {1} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } ``` 4. Update `lib.rs` to create a `handle_slash` function that will activate the image transformation based on the text passed to the URL as a query parameter. ```rs null {17} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic.
utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> {} ``` 5. In the `handle_slash` function, call the `TextRenderer` by assigning it to a renderer value, specifying that you want to use a custom font. Then, use the `render_text_to_png_data` method to transform the text into image format. In this example, the custom font (`Inter-Bold.ttf`) is located in an `/assets` folder at the root of the project which will be used for generating the thumbnail. You must update this portion of the code to point to your custom font file. ```rs null {17,18,19,20,21,22,23,24} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get("/", |_, _| Response::ok("Hello from Workers!")) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); } ``` 6. Rewrite the `Router` function to call `handle_slash` when a query is passed in the URL, otherwise return `"Hello Worker!"` as the response. ```rs null {11,12,13,14,15,16,17,18,19,20} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic.
utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); } ``` 7. In your `lib.rs` file, set the headers to `content-type: image/png` so that the response is correctly rendered as a PNG image. ```rs null {29,30,31,32} use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result<Response> { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); let mut headers = Headers::new(); headers.set("content-type", "image/png")?; Ok(Response::from_bytes(text_png.data)?.with_headers(headers)) } ``` The final `lib.rs` file should look as follows. Find the full code as an example repository on [GitHub](https://github.com/cloudflare/workers-sdk/tree/main/templates/examples/worker-to-text).
```rs use text_to_png::{TextPng, TextRenderer}; use worker::*; mod utils; #[event(fetch)] pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result { // Optionally, get more helpful error messages written to the console in the case of a panic. utils::set_panic_hook(); let router = Router::new(); router .get_async("/", |req, _| async move { if let Some(text) = req.url()?.query() { handle_slash(text.into()).await } else { handle_slash("Hello Worker!".into()).await } }) .run(req, env) .await } async fn handle_slash(text: String) -> Result { let renderer = TextRenderer::try_new_with_ttf_font_data(include_bytes!("../assets/Inter-Bold.ttf")) .expect("Example font is definitely loadable"); let text = if text.len() > 128 { "Nope".into() } else { text }; let text = urlencoding::decode(&text).map_err(|_| worker::Error::BadEncoding)?; let text_png: TextPng = renderer.render_text_to_png_data(text.replace("+", " "), 60, "003682").unwrap(); let mut headers = Headers::new(); headers.set("content-type", "image/png")?; Ok(Response::from_bytes(text_png.data)?.with_headers(headers)) } ``` After you have finished updating your project, start a local server for developing your Worker by running: ```sh npx wrangler dev ``` This should spin up a `localhost` instance with the image displayed: ![Run wrangler dev to start a local server for your Worker](~/assets/images/workers/tutorials/youtube-thumbnails/hello-worker.png) Adding a query parameter with custom text, you should receive: ![Follow the instructions above to receive an output image](~/assets/images/workers/tutorials/youtube-thumbnails/build-serverles.png) To deploy your Worker, open your Wrangler file and update the `name` key with your project's name. Below is an example with this tutorial's project name: ```toml name = "worker-to-text" ``` Then run the `npx wrangler deploy` command to deploy your Worker. 
```sh npx wrangler deploy ``` A `.workers.dev` domain will be generated for your Worker after running `wrangler deploy`. You will use this domain in the main thumbnail image. ## Create a Worker to display the original image Create a Worker to serve the image you uploaded to Images by running: This will create a new Worker project named `thumbnail-image`. To start developing your Worker, `cd` into your new project directory: ```sh cd thumbnail-image ``` In the `src/index.js` file, add the following code block: ```js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${IMAGE_ID}/public`, ); return image; } return new Response("Image Resizing with a Worker"); }, }; ``` Update `env.CLOUDFLARE_ACCOUNT_HASH` with your [Cloudflare account ID](/fundamentals/setup/find-account-and-zone-ids/). Update `env.IMAGE_ID` with your [image ID](/images/get-started/). Run your Worker and go to the `/original-image` route to review your image. ## Add custom text on your image You will now use [Cloudflare image transformations](/images/transform-images/), with the `fetch` method, to add your dynamic text image as an overlay on top of your background image. Start by displaying the resulting image on a different route. Call the new route `/thumbnail`. ```js null {11} export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${IMAGE_ID}/public`, ); return image; } if (url.pathname === "/thumbnail") { } return new Response("Image Resizing with a Worker"); }, }; ``` Next, use the `fetch` method to apply the image transformation changes on top of the background image. The overlay options are nested in `options.cf.image`.
```js null {12,13,14,15,16,17,18} export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/original-image") { const image = await fetch( `https://imagedelivery.net/${env.CLOUDFLARE_ACCOUNT_HASH}/${IMAGE_ID}/public`, ); return image; } if (url.pathname === "/thumbnail") { fetch(imageURL, { cf: { image: {}, }, }); } return new Response("Image Resizing with a Worker"); }, }; ``` The `imageURL` is the URL of the image you want to use as a background image. In the `cf.image` object, specify the options you want to apply to the background image. :::note At the time of publication, Cloudflare image transformations do not allow a Worker to resize images that are stored in Cloudflare Images. Instead of using the image you served on the `/original-image` route, you will use the same image from a different source. ::: Add your background image to an assets directory in a GitHub repository and push your changes to GitHub. Copy the URL of the uploaded image by left-clicking the image and selecting the **Copy Remote File Url** option. Replace the `imageURL` value with the copied remote URL. ```js null {2,3} if (url.pathname === "/thumbnail") { const imageURL = "https://github.com/lauragift21/social-image-demo/blob/1ed9044463b891561b7438ecdecbdd9da48cdb03/assets/cover.png?raw=true"; fetch(imageURL, { cf: { image: {}, }, }); } ``` Next, add overlay options in the image object. Resize the image to the preferred width and height for YouTube thumbnails and use the [draw](/images/transform-images/draw-overlays/) option to add overlay text using the deployed URL of your `text-to-image` Worker. ```js null {3,4,5,6,7,8,9,10,11,12} fetch(imageURL, { cf: { image: { width: 1280, height: 720, draw: [ { url: "https://text-to-image.examples.workers.dev", left: 40, }, ], }, }, }); ``` Image transformations can only be tested when you deploy your Worker.
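Since the `cf.image` options are plain data, you can assemble them with a small helper and inspect the result before deploying. Below is a minimal sketch; the helper name and the overlay URL are illustrative, not part of the tutorial's code.

```javascript
// Illustrative helper (not part of the tutorial's code): assemble the
// `cf.image` transform options for a 1280x720 YouTube thumbnail with a
// text overlay drawn from a separate Worker URL.
function buildThumbnailOptions(overlayUrl, { width = 1280, height = 720, left = 40 } = {}) {
  return {
    image: {
      width,
      height,
      draw: [
        {
          // URL of the text-to-image Worker that renders the title text
          url: overlayUrl,
          // Horizontal offset of the overlay, in pixels
          left,
        },
      ],
    },
  };
}

const options = buildThumbnailOptions(
  "https://text-to-image.examples.workers.dev",
);
// Inside the Worker, you would pass this as: fetch(imageURL, { cf: options })
```

Keeping the overlay parameters in one place makes it easier to adjust the thumbnail dimensions or offsets later without touching the routing logic.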
To deploy your Worker, open your Wrangler file and update the `name` key with your project's name. Below is an example with this tutorial's project name: ```toml name = "thumbnail-image" ``` Deploy your Worker by running: ```sh npx wrangler deploy ``` The command deploys your Worker to a custom `workers.dev` subdomain. Go to your `.workers.dev` subdomain and open the `/thumbnail` route. You should see the resized image with the text `Hello Workers!`. ![Follow the steps above to generate your resized image.](~/assets/images/workers/tutorials/youtube-thumbnails/thumbnail.png) You will now make the applied text dynamic. Making your text dynamic will allow you to change the text and have it update on the image automatically. To add dynamic text, append any text attached to the `/thumbnail` URL using query parameters and pass it down to the `text-to-image` Worker URL as a parameter. ```js for (const title of url.searchParams.values()) { try { const editedImage = await fetch(imageURL, { cf: { image: { width: 1280, height: 720, draw: [ { url: `https://text-to-image.examples.workers.dev/?${title}`, left: 50, }, ], }, }, }); return editedImage; } catch (error) { console.log(error); } } ``` This will always return the text you pass as a query string in the generated image. This example URL, [https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images](https://socialcard.cdnuptime.com/thumbnail?Getting%20Started%20With%20Cloudflare%20Images), will generate the following image: ![An example thumbnail.](~/assets/images/workers/tutorials/youtube-thumbnails/thumbnail2.png) By completing this tutorial, you have successfully made a custom YouTube thumbnail generator. ## Related resources In this tutorial, you learned how to use Cloudflare Workers and Cloudflare image transformations to generate custom YouTube thumbnails.
To learn more about Cloudflare Workers and image transformations, refer to [Resize an image with a Worker](/images/transform-images/transform-via-workers/). --- # Handle form submissions with Airtable URL: https://developers.cloudflare.com/workers/tutorials/handle-form-submissions-with-airtable/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will use [Cloudflare Workers](/workers/) and [Airtable](https://airtable.com) to persist form submissions from a front-end user interface. Airtable is a free-to-use spreadsheet solution that has an approachable API for developers. Workers will handle incoming form submissions and use Airtable's [REST API](https://airtable.com/api) to asynchronously persist the data in an Airtable base (Airtable's term for a spreadsheet) for later reference. ![GIF of a complete Airtable and serverless function integration](/images/workers/tutorials/airtable/example.gif) ## 1. Create a form For this tutorial, you will be building a Workers function that handles input from a contact form. The form this tutorial references will collect a first name, last name, email address, phone number, message subject, and a message. :::note[Build a form] If this is your first time building a form and you would like to follow a tutorial to create a form with Cloudflare Pages, refer to the [HTML forms](/pages/tutorials/forms) tutorial. ::: Review a simplified example of the form used in this tutorial. Note that the `action` parameter of the `<form>` tag should point to the deployed Workers application that you will build in this tutorial. ```html title="Your front-end code" {1}
``` ## 2. Create a Worker project To handle the form submission, create and deploy a Worker that parses the incoming form data and prepares it for submission to Airtable. Create a new `airtable-form-handler` Worker project: Then, move into the newly created directory: ```sh cd airtable-form-handler ``` ## 3. Configure an Airtable base When your Worker is complete, it will send data up to an Airtable base via Airtable's REST API. If you do not have an Airtable account, create one (the free plan is sufficient to complete this tutorial). In Airtable's dashboard, create a new base by selecting **Start from scratch**. After you have created a new base, set it up for use with the front-end form. Delete the existing columns, and create six columns, with the following field types: | Field name | Airtable field type | | ------------ | ------------------- | | First Name | "Single line text" | | Last Name | "Single line text" | | Email | "Email" | | Phone Number | "Phone number" | | Subject | "Single line text" | | Message | "Long text" | Note that the field names are case-sensitive. If you change the field names, you will need to exactly match your new field names in the API request you make to Airtable later in the tutorial. Finally, you can optionally rename your table; by default, it will have a name like Table 1. In the code below, we assume the table has been renamed with a more descriptive name, like `Form Submissions`. Next, navigate to [Airtable's API page](https://airtable.com/api) and select your new base. Note that you must be logged into Airtable to see your base information. In the API documentation page, find your **Airtable base ID**. You will also need to create a **Personal access token** that you'll use to access your Airtable base. You can do so by visiting the [Personal access tokens](https://airtable.com/create/tokens) page on Airtable's website and creating a new token.
Make sure that you configure the token in the following way: - Scope: the `data.records:write` scope must be set on the token - Access: access should be granted to the base you have been working with in this tutorial The resulting access token should now be set in your application. To make the token available in your codebase, use the [`wrangler secret`](/workers/wrangler/commands/#secret) command. The `secret` command encrypts and stores environment variables for use in your function, without revealing them to users. Run `wrangler secret put`, passing `AIRTABLE_ACCESS_TOKEN` as the name of your secret: ```sh npx wrangler secret put AIRTABLE_ACCESS_TOKEN ``` ```sh output Enter the secret text you would like assigned to the variable AIRTABLE_ACCESS_TOKEN on the script named airtable-form-handler: ****** 🌀 Creating the secret for script name airtable-form-handler ✨ Success! Uploaded secret AIRTABLE_ACCESS_TOKEN. ``` Before you continue, review the keys that you should have from Airtable: 1. **Airtable Table Name**: The name for your table, like Form Submissions. 2. **Airtable Base ID**: The alphanumeric base ID found at the top of your base's API page. 3. **Airtable Access Token**: A Personal Access Token created by the user to access information about your new Airtable base. ## 4. Submit data to Airtable With your Airtable base set up, and the keys and IDs you need to communicate with the API ready, you will now set up your Worker to persist data from your form into Airtable. In your Worker project's `index.js` file, replace the default code with a Workers fetch handler that can respond to requests. When the URL requested has a pathname of `/submit`, you will handle a new form submission; otherwise, you will return a `404 Not Found` response.
```js export default { async fetch(request, env) { const url = new URL(request.url); if (url.pathname === "/submit") { return submitHandler(request, env); } return new Response("Not found", { status: 404 }); }, }; ``` The `submitHandler` function does two things. First, it parses the form data coming from your HTML5 form. Once the data is parsed, it uses the Airtable API to persist a new row (a new form submission) to your table: ```js async function submitHandler(request, env) { if (request.method !== "POST") { return new Response("Method Not Allowed", { status: 405, }); } const body = await request.formData(); const { first_name, last_name, email, phone, subject, message } = Object.fromEntries(body); // The keys in "fields" are case-sensitive, and // should exactly match the field names you set up // in your Airtable table, such as "First Name". const reqBody = { fields: { "First Name": first_name, "Last Name": last_name, Email: email, "Phone Number": phone, Subject: subject, Message: message, }, }; await createAirtableRecord(env, reqBody); // Redirect back to your front-end form after a successful submission. // FORM_URL is a public environment variable set in the `vars` section // of your Wrangler file. return Response.redirect(env.FORM_URL); } // Existing code // export default ... ``` While the majority of this function is concerned with parsing the request body (the data being sent as part of the request), there are two important things to note. First, if the HTTP method sent to this function is not `POST`, you will return a new response with the status code of [`405 Method Not Allowed`](https://httpstatuses.com/405). Second, the variable `reqBody` represents a collection of fields, which are key-value pairs for each column in your Airtable table. By formatting `reqBody` as an object with a collection of fields, you are creating a new record in your table with a value for each field. Then you call `createAirtableRecord` (the function you will define next).
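The mapping from parsed form entries to Airtable column names is a pure data transformation, so it can be factored out and checked on its own. Below is a sketch with an illustrative helper name (`toAirtableFields`) and sample values that are not part of the tutorial's code:

```javascript
// Illustrative helper: map parsed form entries to the Airtable "fields"
// payload. The keys are case-sensitive and must exactly match the column
// names created in the Airtable base (for example, "First Name").
function toAirtableFields(form) {
  return {
    fields: {
      "First Name": form.first_name,
      "Last Name": form.last_name,
      Email: form.email,
      "Phone Number": form.phone,
      Subject: form.subject,
      Message: form.message,
    },
  };
}

// In the Worker, Object.fromEntries(await request.formData()) yields the
// same shape as this sample object.
const reqBody = toAirtableFields({
  first_name: "Ada",
  last_name: "Lovelace",
  email: "ada@example.com",
  phone: "555-0100",
  subject: "Hello",
  message: "Testing the form handler",
});
```

Factoring the mapping out this way also makes it easy to keep the payload in sync if you rename columns in your Airtable base later.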
The `createAirtableRecord` function accepts a `body` parameter, which conforms to the Airtable API's required format — namely, a JavaScript object containing key-value pairs under `fields`, representing a single record to be created on your table: ```js async function createAirtableRecord(env, body) { try { const result = await fetch( `https://api.airtable.com/v0/${env.AIRTABLE_BASE_ID}/${encodeURIComponent(env.AIRTABLE_TABLE_NAME)}`, { method: "POST", body: JSON.stringify(body), headers: { Authorization: `Bearer ${env.AIRTABLE_ACCESS_TOKEN}`, "Content-Type": "application/json", }, }, ); return result; } catch (error) { console.error(error); } } // Existing code // async function submitHandler // export default ... ``` To make an authenticated request to Airtable, you need to provide four constants that represent data about your Airtable account, base, and table name. You have already set `AIRTABLE_ACCESS_TOKEN` using `wrangler secret`, since it is a value that should be encrypted. The **Airtable base ID**, **table name**, and `FORM_URL` are values that can be publicly shared in places like GitHub. Use Wrangler's [`vars`](/workers/wrangler/migration/v1-to-v2/wrangler-legacy/configuration/#vars) feature to pass public environment variables from your Wrangler file. Add a `vars` table at the end of your Wrangler file: ```toml null {7} name = "workers-airtable-form" main = "src/index.js" compatibility_date = "2023-06-13" [vars] AIRTABLE_BASE_ID = "exampleBaseId" AIRTABLE_TABLE_NAME = "Form Submissions" FORM_URL = "https://example.com/your-form" ``` With all of these values set, it is time to deploy your Workers serverless function and get your form communicating with it. First, publish your Worker: ```sh title="Deploy your Worker" npx wrangler deploy ``` Your Worker project will deploy to a unique URL — for example, `https://workers-airtable-form.cloudflare.workers.dev`.
This represents the first part of your front-end form's `action` attribute — the second part is the path for your form handler, which is `/submit`. In your front-end UI, configure your `form` tag as seen below: ```html
``` After you have deployed your new form (refer to the [HTML forms](/pages/tutorials/forms) tutorial if you need help creating a form), you should be able to submit a new form submission and see the value show up immediately in Airtable: ![Example GIF of complete Airtable and serverless function integration](/images/workers/tutorials/airtable/example.gif) ## Conclusion With this tutorial completed, you have created a Worker that can accept form submissions and persist them to Airtable. You have learned how to parse form data, set up environment variables, and use the `fetch` API to make requests to external services outside of your Worker. ## Related resources - [Build a Slackbot](/workers/tutorials/build-a-slackbot) - [Build a To-Do List Jamstack App](/workers/tutorials/build-a-jamstack-app) - [Build a blog using Nuxt.js and Sanity.io on Cloudflare Pages](/pages/tutorials/build-a-blog-using-nuxt-and-sanity) - [James Quick's video on building a Cloudflare Workers + Airtable integration](https://www.youtube.com/watch?v=tFQ2kbiu1K4) --- # Connect to a MySQL database with Cloudflare Workers URL: https://developers.cloudflare.com/workers/tutorials/mysql/ import { Render, PackageManagers, WranglerConfig } from "~/components"; In this tutorial, you will learn how to create a Cloudflare Workers application and connect it to a MySQL database using [TCP Sockets](/workers/runtime-apis/tcp-sockets/) and [Hyperdrive](/hyperdrive/). The Workers application you create in this tutorial will interact with a product database inside of MySQL. :::note[Note] We recommend using [Hyperdrive](/hyperdrive/) to connect to your MySQL database. Hyperdrive provides optimal performance and will ensure secure connectivity between your Worker and your MySQL database. 
When connecting directly to your MySQL database (without Hyperdrive), the MySQL drivers rely on unsupported Node.js APIs to create secure connections, which prevents the driver from establishing a connection. ::: ## Prerequisites To continue: 1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already. 2. Install [`npm`](https://docs.npmjs.com/getting-started). 3. Install [`Node.js`](https://nodejs.org/en/). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later. 4. Make sure you have access to a MySQL database. ## 1. Create a Worker application First, use the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker application. To do this, open a terminal window and run the following command: This will prompt you to install the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) package and lead you through a setup wizard. If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed. If you deploy, you can still modify your Worker code and deploy again at the end of this tutorial. Now, move into the newly created directory: ```sh cd mysql-tutorial ``` ## 2. Enable Node.js compatibility [Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, including mysql2, and needs to be configured for your Workers project. ## 3. Create a Hyperdrive configuration Create a Hyperdrive configuration using the connection string for your MySQL database.
```bash npx wrangler hyperdrive create --connection-string="mysql://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name" ``` This command outputs the Hyperdrive configuration `id` that will be used for your Hyperdrive [binding](/workers/runtime-apis/bindings/). Set up your binding by specifying the `id` in the Wrangler file. ```toml {7-9} name = "hyperdrive-example" main = "src/index.ts" compatibility_date = "2024-08-21" compatibility_flags = ["nodejs_compat"] # Pasted from the output of `wrangler hyperdrive create --connection-string=[...]` above. [[hyperdrive]] binding = "HYPERDRIVE" id = "" ``` ## 4. Query your database from your Worker ## 5. Deploy your Worker Run the following command to deploy your Worker: ```sh npx wrangler deploy ``` Your application is now live and accessible at `..workers.dev`. ## Next steps To build more with databases and Workers, refer to [Tutorials](/workers/tutorials) and explore the [Databases documentation](/workers/databases). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team. --- # Build Live Cursors with Next.js, RPC and Durable Objects URL: https://developers.cloudflare.com/workers/tutorials/live-cursors-with-nextjs-rpc-do/ import { Render, PackageManagers, Steps, TabItem, Tabs, Details } from "~/components"; In this tutorial, you will learn how to build a real-time [Next.js](https://nextjs.org/) app that displays the live cursor location of each connected user using [Durable Objects](/durable-objects/), the Workers' built-in [RPC (Remote Procedure Call)](/workers/runtime-apis/rpc/) system, and the [OpenNext](https://opennext.js.org/cloudflare) Cloudflare adapter. 
The application works like this: - An ID is generated for each user that navigates to the application, which is used for identifying the WebSocket connection in the Durable Object. - Once the WebSocket connection is established, the application sends a message to the WebSocket Durable Object to determine the current number of connected users. - A user can close all active WebSocket connections via a Next.js server action that uses an RPC method. - It handles WebSocket and mouse movement events to update the location of other users' cursors in the UI and to send updates about the user's own cursor, as well as join and leave WebSocket events. ![Animated gif of real-time Next.js app for visualizing live cursors](~/assets/images/workers/tutorials/live-cursors-nextjs/demo-live-cursors-nextjs-do.gif) --- ## 1. Create a Next.js Workers Project 1. Run the following command to create your Next.js Worker named `next-rpc`: 2. Change into your new directory: ```sh cd next-rpc ``` 3. Install [nanoid](https://www.npmjs.com/package/nanoid) so that string IDs can be generated for clients: 4. Install [perfect-cursors](https://www.npmjs.com/package/perfect-cursors) to interpolate cursor positions: 5. Define workspaces for each Worker: Update your `package.json` file. ```json title="package.json" ins={14-17} { "name": "next-rpc", "version": "0.1.0", "private": true, "scripts": { "dev": "next dev", "build": "next build", "start": "next start", "lint": "next lint", "deploy": "cloudflare && wrangler deploy", "preview": "cloudflare && wrangler dev", "cf-typegen": "wrangler types --env-interface CloudflareEnv env.d.ts" }, "workspaces": [ ".", "worker" ], // ... } ``` Create a new file `pnpm-workspace.yaml`. ```yaml title="pnpm-workspace.yaml" packages: - "worker" - "." ``` ## 2. 
Create a Durable Object Worker This Worker will manage the Durable Object and also have internal APIs that will be made available to the Next.js Worker using a [`WorkerEntrypoint`](/workers/runtime-apis/bindings/service-bindings/rpc/) class. 1. Create another Worker named `worker` inside the Next.js directory: ## 3. Build Durable Object Worker functionality 1. In your `worker/wrangler.toml` file, update the Durable Object binding: ```toml {4,5,9} title="worker/wrangler.toml" # ... Other wrangler configuration settings [[durable_objects.bindings]] name = "CURSOR_SESSIONS" class_name = "CursorSessions" [[migrations]] tag = "v1" new_sqlite_classes = ["CursorSessions"] ``` 2. Initialize the main methods for the Durable Object and define types for WebSocket messages and cursor sessions in your `worker/src/index.ts` to support type-safe interaction: - `WsMessage`. Specifies the structure of WebSocket messages handled by the Durable Object. - `Session`. Represents the connected user's ID and current cursor coordinates. ```ts title="worker/src/index.ts" import { DurableObject } from 'cloudflare:workers'; export type WsMessage = | { type: "message"; data: string } | { type: "quit"; id: string } | { type: "join"; id: string } | { type: "move"; id: string; x: number; y: number } | { type: "get-cursors" } | { type: "get-cursors-response"; sessions: Session[] }; export type Session = { id: string; x: number; y: number }; export class CursorSessions extends DurableObject { constructor(ctx: DurableObjectState, env: Env) {} broadcast(message: WsMessage, self?: string) {} async webSocketMessage(ws: WebSocket, message: string) {} async webSocketClose(ws: WebSocket, code: number) {} closeSessions() {} async fetch(request: Request) { return new Response("Hello"); } } export default { async fetch(request, env, ctx) { return new Response("Ok"); }, } satisfies ExportedHandler; ``` Now update `worker-configuration.d.ts` by running: ```sh cd worker && npm run cf-typegen ``` 3. 
Update the Durable Object to manage WebSockets: ```ts title="worker/src/index.ts" {29-34,36-43,56,79,89-100} // Rest of the code export class CursorSessions extends DurableObject { sessions: Map; constructor(ctx: DurableObjectState, env: Env) { super(ctx, env); this.sessions = new Map(); this.ctx.getWebSockets().forEach((ws) => { const meta = ws.deserializeAttachment(); this.sessions.set(ws, { ...meta }); }); } broadcast(message: WsMessage, self?: string) { this.ctx.getWebSockets().forEach((ws) => { const { id } = ws.deserializeAttachment(); if (id !== self) ws.send(JSON.stringify(message)); }); } async webSocketMessage(ws: WebSocket, message: string) { if (typeof message !== "string") return; const parsedMsg: WsMessage = JSON.parse(message); const session = this.sessions.get(ws); if (!session) return; switch (parsedMsg.type) { case "move": session.x = parsedMsg.x; session.y = parsedMsg.y; ws.serializeAttachment(session); this.broadcast(parsedMsg, session.id); break; case "get-cursors": const sessions: Session[] = []; this.sessions.forEach((session) => { sessions.push(session); }); const wsMessage: WsMessage = { type: "get-cursors-response", sessions }; ws.send(JSON.stringify(wsMessage)); break; case "message": this.broadcast(parsedMsg); break; default: break; } } async webSocketClose(ws: WebSocket, code: number) { const id = this.sessions.get(ws)?.id; id && this.broadcast({ type: 'quit', id }); this.sessions.delete(ws); ws.close(); } closeSessions() { this.ctx.getWebSockets().forEach((ws) => ws.close()); } async fetch(request: Request) { const url = new URL(request.url); const webSocketPair = new WebSocketPair(); const [client, server] = Object.values(webSocketPair); this.ctx.acceptWebSocket(server); const id = url.searchParams.get("id"); if (!id) { return new Response("Missing id", { status: 400 }); } // Set Id and Default Position const sessionInitialData: Session = { id, x: -1, y: -1 }; server.serializeAttachment(sessionInitialData); this.sessions.set(server, 
sessionInitialData); this.broadcast({ type: "join", id }, id); return new Response(null, { status: 101, webSocket: client, }); } } export default { async fetch(request, env, ctx) { if (request.url.match("/ws")) { const upgradeHeader = request.headers.get("Upgrade"); if (!upgradeHeader || upgradeHeader !== "websocket") { return new Response("Durable Object expected Upgrade: websocket", { status: 426, }); } const id = env.CURSOR_SESSIONS.idFromName("globalRoom"); const stub = env.CURSOR_SESSIONS.get(id); return stub.fetch(request); } return new Response(null, { status: 400, statusText: "Bad Request", headers: { "Content-Type": "text/plain", }, }); }, } satisfies ExportedHandler; ``` - The main `fetch` handler routes requests with a `/ws` URL to the `CursorSessions` Durable Object where a WebSocket connection is established. - The `CursorSessions` class manages WebSocket connections, session states, and broadcasts messages to other connected clients. - When a new WebSocket connection is established, the Durable Object broadcasts a `join` message to all connected clients; similarly, a `quit` message is broadcast when a client disconnects. - It tracks each WebSocket client's last cursor position under the `move` message, which is broadcasted to all active clients. - When a `get-cursors` message is received, it sends the number of currently active clients to the specific client that requested it. 4. Extend the `WorkerEntrypoint` class for RPC: :::note[Note] A service binding to `SessionsRPC` is used here because Durable Object RPC is not yet supported in multiple `wrangler dev` sessions. In this case, two `wrangler dev` sessions are used: one for the Next.js Worker and one for the Durable Object Worker. In production, however, Durable Object RPC is not an issue. For convenience in local development, a service binding is used instead of directly invoking the Durable Object RPC method. 
::: ```ts title="worker/src/index.ts" ins={2,5-12} del={1} import { DurableObject } from 'cloudflare:workers'; import { DurableObject, WorkerEntrypoint } from 'cloudflare:workers'; // ... rest of the code export class SessionsRPC extends WorkerEntrypoint { async closeSessions() { const id = this.env.CURSOR_SESSIONS.idFromName("globalRoom"); const stub = this.env.CURSOR_SESSIONS.get(id); // Invoking Durable Object RPC method. Same `wrangler dev` session. await stub.closeSessions(); } } export default { async fetch(request, env, ctx) { if (request.url.match("/ws")) { // ... ``` 5. Leave the Durable Object Worker running. It's used for RPC and serves as a local WebSocket server: 6. Use the resulting address from the previous step to set the Worker host as a public environment variable in your Next.js project: ```text title="next-rpc/.env.local" ins={1} NEXT_PUBLIC_WS_HOST=localhost:8787 ``` ## 4. Build Next.js Worker functionality 1. In your Next.js Wrangler file, declare the external Durable Object binding and the Service binding to `SessionsRPC`: ```toml title="next-rpc/wrangler.toml" ins={10-18} # ... rest of the configuration compatibility_flags = ["nodejs_compat"] # Minification helps to keep the Worker bundle size down and improve start up time. minify = true # Use the new Workers + Assets to host the static frontend files assets = { directory = ".worker-next/assets", binding = "ASSETS" } [[durable_objects.bindings]] name = "CURSOR_SESSIONS" class_name = "CursorSessions" script_name = "worker" [[services]] binding = "RPC_SERVICE" service = "worker" entrypoint = "SessionsRPC" ``` 2. Update your `env.d.ts` file for type-safety: ```ts title="next-rpc/env.d.ts" {2-5} interface CloudflareEnv { CURSOR_SESSIONS: DurableObjectNamespace< import("./worker/src/index").CursorSessions >; RPC_SERVICE: Service; ASSETS: Fetcher; } ``` 3. Include Next.js server side logic: - Add a server action to close all active WebSocket connections. 
- Use the RPC method `closeSessions` from the `RPC_SERVICE` Service binding instead of invoking the Durable Object RPC method directly, because of the limitation mentioned in the note above.
- The server component generates a unique ID using `nanoid` to identify the WebSocket connection within the Durable Object.
- Set the [`dynamic`](https://nextjs.org/docs/app/api-reference/file-conventions/route-segment-config) value to `force-dynamic` to ensure unique ID generation and avoid static rendering.

```tsx title="src/app/page.tsx"
import { getCloudflareContext } from "@opennextjs/cloudflare";
import { Cursors } from "./cursor";
import { nanoid } from "nanoid";

export const dynamic = "force-dynamic";

async function closeSessions() {
  "use server";
  const cf = await getCloudflareContext();
  await cf.env.RPC_SERVICE.closeSessions();
  // Note: Not supported in `wrangler dev`
  // const id = cf.env.CURSOR_SESSIONS.idFromName("globalRoom");
  // const stub = cf.env.CURSOR_SESSIONS.get(id);
  // await stub.closeSessions();
}

export default function Home() {
  const id = `ws_${nanoid(50)}`;
  return (
    <main>
      {/* Markup reconstructed and simplified for this guide; style it as you like */}
      <section>
        <p>Server Actions</p>
        <form action={closeSessions}>
          <button type="submit">Close WebSocket connections</button>
        </form>
      </section>
      <section>
        <p>Live Cursors</p>
        <Cursors id={id} />
      </section>
    </main>
  );
}
```

4. Create a client component to manage the WebSocket connection and mouse movement events:
```tsx title="src/app/cursor.tsx"
"use client";
import {
  useCallback,
  useEffect,
  useLayoutEffect,
  useReducer,
  useRef,
  useState,
} from "react";
import type { Session, WsMessage } from "../../worker/src/index";
import { PerfectCursor } from "perfect-cursors";

const INTERVAL = 55;

export function Cursors(props: { id: string }) {
  const wsRef = useRef<WebSocket | null>(null);
  const [cursors, setCursors] = useState<Map<string, Session>>(new Map());
  const lastSentTimestamp = useRef(0);
  const [messageState, dispatchMessage] = useReducer(messageReducer, {
    in: "",
    out: "",
  });
  const [highlightedIn, highlightIn] = useHighlight();
  const [highlightedOut, highlightOut] = useHighlight();

  function startWebSocket() {
    const wsProtocol = window.location.protocol === "https:" ? "wss" : "ws";
    const ws = new WebSocket(
      `${wsProtocol}://${process.env.NEXT_PUBLIC_WS_HOST}/ws?id=${props.id}`,
    );
    ws.onopen = () => {
      highlightOut();
      dispatchMessage({ type: "out", message: "get-cursors" });
      const message: WsMessage = { type: "get-cursors" };
      ws.send(JSON.stringify(message));
    };
    ws.onmessage = (message) => {
      const messageData: WsMessage = JSON.parse(message.data);
      highlightIn();
      dispatchMessage({ type: "in", message: messageData.type });
      switch (messageData.type) {
        case "quit":
          setCursors((prev) => {
            const updated = new Map(prev);
            updated.delete(messageData.id);
            return updated;
          });
          break;
        case "join":
          setCursors((prev) => {
            const updated = new Map(prev);
            if (!updated.has(messageData.id)) {
              updated.set(messageData.id, { id: messageData.id, x: -1, y: -1 });
            }
            return updated;
          });
          break;
        case "move":
          setCursors((prev) => {
            const updated = new Map(prev);
            const session = updated.get(messageData.id);
            if (session) {
              session.x = messageData.x;
              session.y = messageData.y;
            } else {
              updated.set(messageData.id, messageData);
            }
            return updated;
          });
          break;
        case "get-cursors-response":
          setCursors(
            new Map(
              messageData.sessions.map((session) => [session.id, session]),
            ),
          );
          break;
        default:
          break;
      }
    };
    ws.onclose = () => setCursors(new Map());
    return ws;
  }

  useEffect(() => {
    const abortController = new AbortController();
    document.addEventListener(
      "mousemove",
      (ev) => {
        const x = ev.pageX / window.innerWidth,
          y = ev.pageY / window.innerHeight;
        const now = Date.now();
        if (
          now - lastSentTimestamp.current > INTERVAL &&
          wsRef.current?.readyState === WebSocket.OPEN
        ) {
          const message: WsMessage = { type: "move", id: props.id, x, y };
          wsRef.current.send(JSON.stringify(message));
          lastSentTimestamp.current = now;
          highlightOut();
          dispatchMessage({ type: "out", message: "move" });
        }
      },
      {
        signal: abortController.signal,
      },
    );
    return () => abortController.abort();
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);

  useEffect(() => {
    wsRef.current = startWebSocket();
    return () => wsRef.current?.close();
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [props.id]);

  function sendMessage() {
    highlightOut();
    dispatchMessage({ type: "out", message: "message" });
    wsRef.current?.send(
      JSON.stringify({ type: "message", data: "Ping" } satisfies WsMessage),
    );
  }

  const otherCursors = Array.from(cursors.values()).filter(
    ({ id, x, y }) => id !== props.id && x !== -1 && y !== -1,
  );

  return (
    <>
      {/* Markup reconstructed and simplified for this guide; adjust as you like */}
      <div>
        <p>
          WebSocket Connections: <span>{cursors.size}</span>
        </p>
        <p>
          Messages in: <span data-highlighted={highlightedIn}>{messageState.in}</span>{" "}
          out: <span data-highlighted={highlightedOut}>{messageState.out}</span>
        </p>
        <button onClick={sendMessage}>Send message</button>
      </div>
      {otherCursors.map((session) => (
        <SvgCursor
          key={session.id}
          point={[
            session.x * window.innerWidth,
            session.y * window.innerHeight,
          ]}
        />
      ))}
    </>
  );
}

function SvgCursor({ point }: { point: number[] }) {
  const refSvg = useRef<SVGSVGElement | null>(null);
  const animateCursor = useCallback((point: number[]) => {
    refSvg.current?.style.setProperty(
      "transform",
      `translate(${point[0]}px, ${point[1]}px)`,
    );
  }, []);
  const onPointMove = usePerfectCursor(animateCursor);
  useLayoutEffect(() => onPointMove(point), [onPointMove, point]);
  const [randomColor] = useState(
    `#${Math.floor(Math.random() * 16777215)
      .toString(16)
      .padStart(6, "0")}`,
  );
  /* SVG markup reconstructed for this guide; any cursor-shaped path works */
  return (
    <svg
      ref={refSvg}
      width="24"
      height="24"
      viewBox="0 0 24 24"
      style={{ position: "absolute", top: 0, left: 0, pointerEvents: "none" }}
    >
      <path fill={randomColor} d="M5 2 L19 12 L12 13 L9 20 Z" />
    </svg>
  );
}

function usePerfectCursor(cb: (point: number[]) => void, point?: number[]) {
  const [pc] = useState(() => new PerfectCursor(cb));
  useLayoutEffect(() => {
    if (point) pc.addPoint(point);
    return () => pc.dispose();
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [pc]);
  useLayoutEffect(() => {
    PerfectCursor.MAX_INTERVAL = 58;
  }, []);
  const onPointChange = useCallback(
    (point: number[]) => pc.addPoint(point),
    [pc],
  );
  return onPointChange;
}

type MessageState = { in: string; out: string };
type MessageAction = { type: "in" | "out"; message: string };

function messageReducer(state: MessageState, action: MessageAction) {
  switch (action.type) {
    case "in":
      return { ...state, in: action.message };
    case "out":
      return { ...state, out: action.message };
    default:
      return state;
  }
}

function useHighlight(duration = 250) {
  const timestampRef = useRef(0);
  const [highlighted, setHighlighted] = useState(false);
  function highlight() {
    timestampRef.current = Date.now();
    setHighlighted(true);
    setTimeout(() => {
      const now = Date.now();
      if (now - timestampRef.current >= duration) {
        setHighlighted(false);
      }
    }, duration);
  }
  return [highlighted, highlight] as const;
}
```
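The `SvgCursor` component above derives its fill color from a random integer; the `padStart(6, "0")` call is what keeps the result a valid six-digit hex code even when the random number is small. The trick in isolation (a standalone sketch, not tutorial code):

```typescript
// Builds a CSS hex color string from an integer in [0, 0xffffff].
// Without padStart, small values like 255 would yield "#ff", which
// browsers reject; padding keeps the result exactly "#" + 6 hex digits.
function hexColor(n: number): string {
  return `#${n.toString(16).padStart(6, "0")}`;
}

console.log(hexColor(255)); // small values are zero-padded
console.log(hexColor(Math.floor(Math.random() * 16777215)));
```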
The generated ID is used here and passed as a parameter to the WebSocket server: ```ts const ws = new WebSocket( `${wsProtocol}://${process.env.NEXT_PUBLIC_WS_HOST}/ws?id=${props.id}`, ); ``` The component starts the WebSocket connection and handles 4 types of WebSocket messages, which trigger updates to React's state: - `join`. Received when a new WebSocket connection is established. - `quit`. Received when a WebSocket connection is closed. - `move`. Received when a user's cursor moves. - `get-cursors-response`. Received when a client sends a `get-cursors` message, which is triggered once the WebSocket connection is open. It sends the user's cursor coordinates to the WebSocket server during the [`mousemove`](https://developer.mozilla.org/en-US/docs/Web/API/Element/mousemove_event) event, which then broadcasts them to all active WebSocket clients. Although there are multiple strategies you can use together for real-time cursor synchronization (e.g., batching, interpolation, etc.), in this tutorial throttling, spline interpolation and position normalization are used: ```ts {4-5,8-9} document.addEventListener( "mousemove", (ev) => { const x = ev.pageX / window.innerWidth, y = ev.pageY / window.innerHeight; const now = Date.now(); if ( now - lastSentTimestamp.current > INTERVAL && wsRef.current?.readyState === WebSocket.OPEN ) { const message: WsMessage = { type: "move", id: props.id, x, y }; wsRef.current.send(JSON.stringify(message)); lastSentTimestamp.current = now; // ... 
} } );
```

Each animated cursor is controlled by a `PerfectCursor` instance, which animates its position along a spline curve defined by the cursor's latest positions:

```ts {9-11}
// SvgCursor react component
const refSvg = useRef<SVGSVGElement | null>(null);
const animateCursor = useCallback((point: number[]) => {
  refSvg.current?.style.setProperty(
    "transform",
    `translate(${point[0]}px, ${point[1]}px)`,
  );
}, []);
const onPointMove = usePerfectCursor(animateCursor);
// A `point` is added to the path whenever its value updates:
useLayoutEffect(() => onPointMove(point), [onPointMove, point]);
// ...
```

5. Run the Next.js development server:

:::note[Note]
The Durable Object Worker must also be running.
:::

6. Open the app in your browser.
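The throttling used in the `mousemove` handler can be factored into a small reusable helper. This is a sketch for illustration, not part of the tutorial code; it mirrors the `lastSentTimestamp` check in the component:

```typescript
// Returns a wrapped function that invokes `fn` at most once per `intervalMs`,
// dropping calls that arrive while the interval has not yet elapsed.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  intervalMs: number,
): (...args: T) => void {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last > intervalMs) {
      fn(...args);
      last = now;
    }
  };
}

// Example: at most one send per 55 ms, matching INTERVAL above.
let sent = 0;
const send = throttle(() => { sent++; }, 55);
send();
send(); // dropped: still inside the 55 ms window
```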
## 5. Deploy the project

1. Change into your Durable Object Worker directory:

   ```sh
   cd worker
   ```

   Deploy the Worker:

   Copy only the host from the generated Worker URL, excluding the protocol, and set `NEXT_PUBLIC_WS_HOST` in `.env.local` to this value (e.g., `worker-unique-identifier.workers.dev`).

   ```txt title="next-rpc/.env.local" ins={2} del={1}
   NEXT_PUBLIC_WS_HOST=localhost:8787
   NEXT_PUBLIC_WS_HOST=worker-unique-identifier.workers.dev
   ```

2. Change into your root directory and deploy your Next.js app:

:::note[Optional Step]
Invoking Durable Object RPC methods between separate Workers is fully supported in Cloudflare deployments, so you can opt to use them instead of Service Binding RPC:

```ts title="src/app/page.tsx" ins={7-9} del={4}
async function closeSessions() {
  "use server";
  const cf = await getCloudflareContext();
  await cf.env.RPC_SERVICE.closeSessions();
  // Note: Not supported in `wrangler dev`
  const id = cf.env.CURSOR_SESSIONS.idFromName("globalRoom");
  const stub = cf.env.CURSOR_SESSIONS.get(id);
  await stub.closeSessions();
}
```
:::

## Summary

In this tutorial, you learned how to integrate Next.js with Durable Objects to build a real-time application that visualizes cursors. You also learned how to use Workers' built-in RPC system alongside Next.js server actions. The complete code for this tutorial is available on [GitHub](https://github.com/exectx/next-live-cursors-do-rpc).

## Related resources

- [Workers RPC](/workers/runtime-apis/bindings/service-bindings/rpc/).
- [Next.js and Workers Static Assets](/workers/frameworks/framework-guides/nextjs/).
- [Build a seat booking app with SQLite in Durable Objects](/durable-objects/tutorials/build-a-seat-booking-app/).
--- # Send Emails With Postmark URL: https://developers.cloudflare.com/workers/tutorials/send-emails-with-postmark/ In this tutorial, you will learn how to send transactional emails from Workers using [Postmark](https://postmarkapp.com/). At the end of this tutorial, you’ll be able to: - Create a Worker to send emails. - Sign up and add a Cloudflare domain to Postmark. - Send emails from your Worker using Postmark. - Store API keys securely with secrets. ## Prerequisites To continue with this tutorial, you’ll need: - A [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one. - A [registered](/registrar/get-started/register-domain/) domain. - Installed [npm](https://docs.npmjs.com/getting-started). - A [Postmark account](https://account.postmarkapp.com/sign_up). ## Create a Worker project Start by using [C3](/pages/get-started/c3/) to create a Worker project in the command line, then, answer the prompts: ```sh npm create cloudflare@latest ``` Alternatively, you can use CLI arguments to speed things up: ```sh npm create cloudflare@latest email-with-postmark -- --type=hello-world --ts=false --git=true --deploy=false ``` This creates a simple hello-world Worker having the following content: ```js export default { async fetch(request, env, ctx) { return new Response("Hello World!"); }, }; ``` ## Add your domain to Postmark If you don’t already have a Postmark account, you can sign up for a [free account here](https://account.postmarkapp.com/sign_up). After signing up, check your inbox for a link to confirm your sender signature. This verifies and enables you to send emails from your registered email address. 
To enable email sending from other addresses on your domain, navigate to `Sender Signatures` on the Postmark dashboard, `Add Domain or Signature` > `Add Domain`, then type in your domain and click on `Verify Domain`. Next, you’re presented with a list of DNS records to add to your Cloudflare domain. On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` > `Records`. Copy/paste the DNS records (DKIM, and Return-Path) from Postmark to your Cloudflare domain. ![Image of adding DNS records to a Cloudflare domain](~/assets/images/workers/tutorials/postmarkapp/add_dns_records.png) :::note If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](/dns/manage-dns-records/how-to/create-dns-records/). ::: When that’s done, head back to Postmark and click on the `Verify` buttons. If all records are properly configured, your domain status should be updated to `Verified`. ![Image of domain verification on the Postmark dashboard](~/assets/images/workers/tutorials/postmarkapp/verified_domain.png) To grab your API token, navigate to the `Servers` tab, then `My First Server` > `API Tokens`, then copy your API key to a safe place. ## Send emails from your Worker The final step is putting it all together in a Worker. In your Worker, make a post request with `fetch` to Postmark’s email API and include your token and message body: :::note [Postmark’s JavaScript library](https://www.npmjs.com/package/postmark) is currently not supported on Workers. Use the [email API](https://postmarkapp.com/developer/user-guide/send-email-with-api) instead. 
:::

```jsx
export default {
  async fetch(request, env, ctx) {
    return await fetch("https://api.postmarkapp.com/email", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Postmark-Server-Token": "your_postmark_api_token_here",
      },
      body: JSON.stringify({
        From: "hello@example.com",
        To: "someone@example.com",
        Subject: "Hello World",
        HtmlBody: "<h1>Hello from Workers</h1>",
      }),
    });
  },
};
```

To test your code locally, run the following command and navigate to [http://localhost:8787/](http://localhost:8787/) in a browser:

```sh
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API token to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets. All secrets are encrypted to add an extra layer of protection. That said, it's a good idea to move your API token to a secret and access it from the environment of your Worker.

To add secrets for local development, create a `.dev.vars` file, which works exactly like a `.env` file:

```txt
POSTMARK_API_TOKEN=your_postmark_api_token_here
```

Also ensure the secret is added to your deployed Worker by running:

```sh title="Add secret to deployed Worker"
npx wrangler secret put POSTMARK_API_TOKEN
```

The added secret can be accessed via the `env` parameter passed to your Worker's fetch event handler:

```jsx
export default {
  async fetch(request, env, ctx) {
    return await fetch("https://api.postmarkapp.com/email", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Postmark-Server-Token": env.POSTMARK_API_TOKEN,
      },
      body: JSON.stringify({
        From: "hello@example.com",
        To: "someone@example.com",
        Subject: "Hello World",
        HtmlBody: "<h1>Hello from Workers</h1>",
      }),
    });
  },
};
```

And finally, deploy this update with `npm run deploy`.

## Related resources

- [Storing API keys and tokens with Secrets](/workers/configuration/secrets/).
- [Transferring your domain to Cloudflare](/registrar/get-started/transfer-domain-to-cloudflare/).
- [Send emails from Workers](/email-routing/email-workers/send-email-workers/).

---

# Connect to a PostgreSQL database with Cloudflare Workers

URL: https://developers.cloudflare.com/workers/tutorials/postgres/

import { Render, PackageManagers, WranglerConfig } from "~/components";

In this tutorial, you will learn how to create a Cloudflare Workers application and connect it to a PostgreSQL database using [TCP Sockets](/workers/runtime-apis/tcp-sockets/) and [Hyperdrive](/hyperdrive/). The Workers application you create in this tutorial will interact with a product database inside of PostgreSQL.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [`npm`](https://docs.npmjs.com/getting-started).
3. Install [`Node.js`](https://nodejs.org/en/). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.
4. Make sure you have access to a PostgreSQL database.

## 1. Create a Worker application

First, use the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker application.
To do this, open a terminal window and run the following command: This will prompt you to install the [`create-cloudflare`](https://www.npmjs.com/package/create-cloudflare) package and lead you through a setup wizard. If you choose to deploy, you will be asked to authenticate (if not logged in already), and your project will be deployed. If you deploy, you can still modify your Worker code and deploy again at the end of this tutorial. Now, move into the newly created directory: ```sh cd postgres-tutorial ``` ### Enable Node.js compatibility [Node.js compatibility](/workers/runtime-apis/nodejs/) is required for database drivers, including Postgres.js, and needs to be configured for your Workers project. ## 2. Add the PostgreSQL connection library To connect to a PostgreSQL database, you will need the `postgres` library. In your Worker application directory, run the following command to install the library: ```sh npm install postgres ``` Make sure you are using `postgres` (`Postgres.js`) version `3.4.4` or higher. `Postgres.js` is compatible with both Pages and Workers. ## 3. Configure the connection to the PostgreSQL database Choose one of the two methods to connect to your PostgreSQL database: 1. [Use a connection string](#use-a-connection-string). 2. [Set explicit parameters](#set-explicit-parameters). ### Use a connection string A connection string contains all the information needed to connect to a database. It is a URL that contains the following information: ``` postgresql://username:password@host:port/database ``` Replace `username`, `password`, `host`, `port`, and `database` with the appropriate values for your PostgreSQL database. Set your connection string as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text. 
Use [`wrangler secret put`](/workers/wrangler/commands/#secret) with the example variable name `DB_URL`:

```sh
npx wrangler secret put DB_URL
```

```sh output
➜ wrangler secret put DB_URL
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_URL
```

Set your `DB_URL` secret locally in a `.dev.vars` file as documented in [Local Development with Secrets](/workers/configuration/secrets/).

```toml title='.dev.vars'
DB_URL=""
```

### Set explicit parameters

Configure each database parameter as an [environment variable](/workers/configuration/environment-variables/) via the [Cloudflare dashboard](/workers/configuration/environment-variables/#add-environment-variables-via-the-dashboard) or in your Wrangler file. Refer to an example of a Wrangler file configuration:

```toml
[vars]
DB_USERNAME = "postgres"
# Set your password by creating a secret so it is not stored as plain text
DB_HOST = "ep-aged-sound-175961.us-east-2.aws.neon.tech"
DB_PORT = "5432"
DB_NAME = "productsdb"
```

To set your password as a [secret](/workers/configuration/secrets/) so that it is not stored as plain text, use [`wrangler secret put`](/workers/wrangler/commands/#secret). `DB_PASSWORD` is an example variable name for this secret to be accessed in your Worker:

```sh
npx wrangler secret put DB_PASSWORD
```

```sh output
-------------------------------------------------------
? Enter a secret value: › ********************
✨ Success! Uploaded secret DB_PASSWORD
```

## 4. Connect to the PostgreSQL database in the Worker

Open your Worker's main file (for example, `worker.ts`) and import the `postgres` function from the `postgres` library:

```typescript
import postgres from "postgres";
```

In the `fetch` event handler, connect to the PostgreSQL database using your chosen method, either the connection string or the explicit parameters.
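Either way, the same pieces of information are involved; a connection string is just the explicit parameters packed into a URL. The standard WHATWG `URL` parser can pull them back apart, as this illustrative sketch shows (the credentials here are made-up placeholders; the `postgres` driver does this parsing for you):

```typescript
// Decompose a PostgreSQL connection string with the standard URL parser.
// Placeholder credentials only; never hard-code real ones.
const connUrl = new URL("postgresql://myuser:mypassword@db.example.com:5432/productsdb");

const params = {
  username: connUrl.username,
  password: connUrl.password,
  host: connUrl.hostname,
  port: connUrl.port,
  database: connUrl.pathname.slice(1), // strip the leading "/"
};

console.log(params);
```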
### Use a connection string

```typescript
const sql = postgres(env.DB_URL);
```

### Set explicit parameters

```typescript
const sql = postgres({
  username: env.DB_USERNAME,
  password: env.DB_PASSWORD,
  host: env.DB_HOST,
  port: env.DB_PORT,
  database: env.DB_NAME,
});
```

## 5. Interact with the products database

To demonstrate how to interact with the products database, you will fetch data from the `products` table by querying the table when a request is received.

:::note
If you are following along in your own PostgreSQL instance, set up the `products` table using the following SQL `CREATE TABLE` statement. This statement defines the columns and their respective data types for the `products` table:

```sql
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  description TEXT,
  price DECIMAL(10, 2) NOT NULL
);
```
:::

Replace the existing code in your `worker.ts` file with the following code:

```typescript
import postgres from "postgres";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const sql = postgres(env.DB_URL, {
      // Workers limit the number of concurrent external connections, so be sure to limit
      // the size of the local connection pool that postgres.js may establish.
      max: 5,

      // If you are using array types in your Postgres schema, it is necessary to fetch
      // type information to correctly de/serialize them. However, if you are not using
      // those, disabling this will save you an extra round-trip every time you connect.
      fetch_types: false,
    });

    // Query the products table
    const result = await sql`SELECT * FROM products;`;

    // Return the result as JSON
    const resp = new Response(JSON.stringify(result), {
      headers: { "Content-Type": "application/json" },
    });

    return resp;
  },
} satisfies ExportedHandler;
```

This code establishes a connection to the PostgreSQL database within your Worker application and queries the `products` table, returning the results as a JSON response.

## 6. Deploy your Worker

Run the following command to deploy your Worker:

```sh
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. After deploying, you can interact with your PostgreSQL products database using your Cloudflare Worker. Whenever a request is made to your Worker's URL, it will fetch data from the `products` table and return it as a JSON response. You can modify the query as needed to retrieve the desired data from your products database.

## 7. Insert a new row into the products database

To insert a new row into the `products` table, create a new API endpoint in your Worker that handles a `POST` request. When a `POST` request is received with a JSON payload, the Worker will insert a new row into the `products` table with the provided data.

Assume the `products` table has the following columns: `id`, `name`, `description`, and `price`.

Add the following code snippet inside the `fetch` event handler in your `worker.ts` file, before the existing query code:

```typescript {9-32}
import postgres from "postgres";

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const sql = postgres(env.DB_URL);
    const url = new URL(request.url);

    if (request.method === "POST" && url.pathname === "/products") {
      // Parse the request's JSON payload
      const productData = await request.json();

      // Insert the new product into the database
      const values = {
        name: productData.name,
        description: productData.description,
        price: productData.price,
      };
      const insertResult = await sql`
        INSERT INTO products ${sql(values)}
        RETURNING *
      `;

      // Return the inserted row as JSON
      const insertResp = new Response(JSON.stringify(insertResult), {
        headers: { "Content-Type": "application/json" },
      });

      // Clean up the client
      return insertResp;
    }

    // Query the products table
    const result = await sql`SELECT * FROM products;`;

    // Return the result as JSON
    const resp = new Response(JSON.stringify(result), {
      headers: { "Content-Type": "application/json" },
    });

    return resp;
  },
}
satisfies ExportedHandler;
```

This code snippet does the following:

1. Checks if the request is a `POST` request and the URL path is `/products`.
2. Parses the JSON payload from the request.
3. Constructs an `INSERT` SQL query using the provided product data.
4. Executes the query, inserting the new row into the `products` table.
5. Returns the inserted row as a JSON response.

Now, when you send a `POST` request to your Worker's URL with the `/products` path and a JSON payload, the Worker will insert a new row into the `products` table with the provided data. When a request to `/` is made, the Worker will return all products in the database.

After making these changes, deploy the Worker again by running:

```sh
npx wrangler deploy
```

You can now use your Cloudflare Worker to insert new rows into the `products` table. To test this functionality, send a `POST` request to your Worker's URL with the `/products` path, along with a JSON payload containing the new product data:

```json
{
  "name": "Sample Product",
  "description": "This is a sample product",
  "price": 19.99
}
```

You have successfully created a Cloudflare Worker that connects to a PostgreSQL database and handles fetching data and inserting new rows into a products table.

## 8. Use Hyperdrive to accelerate queries

Create a Hyperdrive configuration using the connection string for your PostgreSQL database, giving the configuration a name of your choice (shown here as the placeholder `<YOUR_CONFIG_NAME>`):

```bash
npx wrangler hyperdrive create <YOUR_CONFIG_NAME> --connection-string="postgres://user:password@HOSTNAME_OR_IP_ADDRESS:PORT/database_name"
```

This command outputs the Hyperdrive configuration `id` that will be used for your Hyperdrive [binding](/workers/runtime-apis/bindings/). Set up your binding by specifying the `id` in the Wrangler file.

```toml {7-9}
name = "hyperdrive-example"
main = "src/index.ts"
compatibility_date = "2024-08-21"
compatibility_flags = ["nodejs_compat"]

# Pasted from the output of `wrangler hyperdrive create <YOUR_CONFIG_NAME> --connection-string=[...]` above.
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<YOUR_HYPERDRIVE_ID>"
```

Create the types for your Hyperdrive binding using the following command:

```bash
npx wrangler types
```

Replace your existing connection string in your Worker code with the Hyperdrive connection string:

```js {3-3}
export default {
  async fetch(request, env, ctx): Promise<Response> {
    const sql = postgres(env.HYPERDRIVE.connectionString);
    const url = new URL(request.url);
    // ...rest of the routes and database queries
  },
} satisfies ExportedHandler;
```

## 9. Redeploy your Worker

Run the following command to deploy your Worker:

```sh
npx wrangler deploy
```

Your Worker application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`, using Hyperdrive. Hyperdrive accelerates database queries by pooling your connections and caching your requests across the globe.

## Next steps

To build more with databases and Workers, refer to [Tutorials](/workers/tutorials) and explore the [Databases documentation](/workers/databases).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.

---

# OpenAI GPT function calling with JavaScript and Cloudflare Workers

URL: https://developers.cloudflare.com/workers/tutorials/openai-function-calls-workers/

import { Render, PackageManagers } from "~/components";

In this tutorial, you will build a project that leverages [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling) feature, available in OpenAI's latest Chat Completions API models. The function calling feature allows the AI model to intelligently decide when to call a function based on the input, and respond in JSON format to match the function's signature.
You will use the function calling feature to ask the model to determine a website URL which contains information relevant to a message from the user, retrieve the text content of the site, and, finally, return a final response from the model informed by real-time web data.

## What you will learn

- How to use OpenAI's function calling feature.
- Integrating OpenAI's API in a Cloudflare Worker.
- Fetching and processing website content using Cheerio.
- Handling API responses and function calls in JavaScript.
- Storing API keys as secrets with Wrangler.

---

## 1. Create a new Worker project

Create a Worker project in the command line:

Go to your new `openai-function-calling-workers` Worker project:

```sh
cd openai-function-calling-workers
```

Inside of your new `openai-function-calling-workers` directory, find the `src/index.js` file. You will configure this file for most of the tutorial.

You will also need an OpenAI account and API key for this tutorial. If you do not have one, [create a new OpenAI account](https://platform.openai.com/signup) and [create an API key](https://platform.openai.com/account/api-keys) to continue with this tutorial. Make sure to store your API key somewhere safe so you can use it later.

## 2. Make a request to OpenAI

With your Worker project created, make your first request to OpenAI. You will use the OpenAI node library to interact with the OpenAI API.
In this project, you will also use the Cheerio library to handle processing the HTML content of websites:

```sh
npm install openai cheerio
```

Now, define the structure of your Worker in `index.js`:

```js
export default {
  async fetch(request, env, ctx) {
    // Initialize OpenAI API

    // Handle incoming requests
    return new Response("Hello World!");
  },
};
```

Above `export default`, add the imports for `openai` and `cheerio`:

```js
import OpenAI from "openai";
import * as cheerio from "cheerio";
```

Within your `fetch` function, instantiate your `OpenAI` client:

```js
async fetch(request, env, ctx) {
  const openai = new OpenAI({
    apiKey: env.OPENAI_API_KEY,
  });

  // Handle incoming requests
  return new Response('Hello World!');
},
```

Use [`wrangler secret put`](/workers/wrangler/commands/#put) to set `OPENAI_API_KEY`. This [secret's](/workers/configuration/secrets/) value is the API key you created earlier in the OpenAI dashboard:

```sh
npx wrangler secret put OPENAI_API_KEY
```

For local development, create a new file `.dev.vars` in your Worker project and add this line. Make sure to replace the placeholder value with your own OpenAI API key:

```txt
OPENAI_API_KEY = "<YOUR_OPENAI_API_KEY>"
```

Now, make a request to the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/gpt/chat-completions-api):

```js
export default {
  async fetch(request, env, ctx) {
    const openai = new OpenAI({
      apiKey: env.OPENAI_API_KEY,
    });

    const url = new URL(request.url);
    const message = url.searchParams.get("message");

    const messages = [
      {
        role: "user",
        content: message ?
message : "What's in the news today?", }, ]; const tools = [ { type: "function", function: { name: "read_website_content", description: "Read the content on a given website", parameters: { type: "object", properties: { url: { type: "string", description: "The URL to the website to read", }, }, required: ["url"], }, }, }, ]; const chatCompletion = await openai.chat.completions.create({ model: "gpt-4o-mini", messages: messages, tools: tools, tool_choice: "auto", }); const assistantMessage = chatCompletion.choices[0].message; console.log(assistantMessage); //Later you will continue handling the assistant's response here return new Response(assistantMessage.content); }, }; ``` Review the arguments you are passing to OpenAI: - **model**: This is the model you want OpenAI to use for your request. In this case, you are using `gpt-4o-mini`. - **messages**: This is an array containing all messages that are part of the conversation. Initially you provide a message from the user, and we later add the response from the model. The content of the user message is either the `message` query parameter from the request URL or the default "What's in the news today?". - **tools**: An array containing the actions available to the AI model. In this example you only have one tool, `read_website_content`, which reads the content on a given website. - **name**: The name of your function. In this case, it is `read_website_content`. - **description**: A short description that lets the model know the purpose of the function. This is optional but helps the model know when to select the tool. - **parameters**: A JSON Schema object which describes the function. In this case we request a response containing an object with the required property `url`. - **tool_choice**: This argument is technically optional as `auto` is the default. This argument indicates that either a function call or a normal message response can be returned by OpenAI. ## 3. 
Building your `read_website_content()` function You will now need to define the `read_website_content` function, which is referenced in the `tools` array. The `read_website_content` function fetches the content of a given URL and extracts the text from `

` tags using the `cheerio` library: Add this code above the `export default` block in your `index.js` file: ```js async function read_website_content(url) { console.log("reading website content"); const response = await fetch(url); const body = await response.text(); let cheerioBody = cheerio.load(body); const resp = { website_body: cheerioBody("p").text(), url: url, }; return JSON.stringify(resp); } ``` In this function, you take the URL that you received from OpenAI and use JavaScript's [`Fetch API`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch) to pull the content of the website and extract the paragraph text. Now we need to determine when to call this function. ## 4. Process the Assistant's Messages Next, we need to process the response from the OpenAI API to check if it includes any function calls. If a function call is present, you should execute the corresponding function in your Worker. Note that the assistant may request multiple function calls. Modify the fetch method within the `export default` block as follows: ```js // ... your previous code ... if (assistantMessage.tool_calls) { for (const toolCall of assistantMessage.tool_calls) { if (toolCall.function.name === "read_website_content") { const url = JSON.parse(toolCall.function.arguments).url; const websiteContent = await read_website_content(url); messages.push({ role: "tool", tool_call_id: toolCall.id, name: toolCall.function.name, content: websiteContent, }); } } const secondChatCompletion = await openai.chat.completions.create({ model: "gpt-4o-mini", messages: messages, }); return new Response(secondChatCompletion.choices[0].message.content); } else { // this is your existing return statement return new Response(assistantMessage.content); } ``` Check if the assistant message contains any function calls by checking for the `tool_calls` property. 
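In isolation, this bookkeeping can be exercised on plain objects. The values below are hypothetical stand-ins for what the API would return; only the message shapes match the tutorial code:

```javascript
// Hypothetical assistant message, shaped like an OpenAI tool-call response.
const assistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123", // hypothetical ID
      type: "function",
      function: {
        name: "read_website_content",
        // OpenAI returns the arguments as a JSON string, not an object.
        arguments: '{"url":"https://example.com/"}',
      },
    },
  ],
};

const messages = [{ role: "user", content: "What's in the news today?" }];

if (assistantMessage.tool_calls) {
  for (const toolCall of assistantMessage.tool_calls) {
    const { url } = JSON.parse(toolCall.function.arguments);
    // In the Worker this would be `await read_website_content(url)`;
    // here a placeholder result stands in for the fetched page text.
    const websiteContent = JSON.stringify({ website_body: "…", url });
    messages.push({
      role: "tool",
      tool_call_id: toolCall.id,
      name: toolCall.function.name,
      content: websiteContent,
    });
  }
}

console.log(messages.length); // 2: the user message plus one tool result
```

In the Worker, the `content` of each `tool` message is the JSON string returned by `read_website_content`, which is what gives the second chat completion its real-time context.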
Because the AI model can call multiple functions by default, you need to loop through any potential function calls and add them to the `messages` array. Each `read_website_content` call will invoke the `read_website_content` function you defined earlier and pass the URL generated by OpenAI as an argument. The `secondChatCompletion` request provides a response informed by the data you retrieved from each function call.

Before deploying, test your code by running `npx wrangler dev` and opening the provided URL in your browser. You should now see OpenAI's response built from the real-time information in the retrieved web data.

## 5. Deploy your Worker application

To deploy your application, run `npx wrangler deploy`:

```sh
npx wrangler deploy
```

You can now preview your Worker at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`. Going to this URL will display the response from OpenAI. Optionally, add the `message` URL parameter to write a custom message: for example, `https://<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev/?message=What is the weather in NYC today?`.

## 6. Next steps

Reference the [finished code for this tutorial on GitHub](https://github.com/LoganGrasby/Cloudflare-OpenAI-Functions-Demo/blob/main/src/worker.js).

To continue working with Workers and AI, refer to [the guide on using LangChain and Cloudflare Workers together](https://blog.cloudflare.com/langchain-and-cloudflare/) or [how to build a ChatGPT plugin with Cloudflare Workers](https://blog.cloudflare.com/magic-in-minutes-how-to-build-a-chatgpt-plugin-with-cloudflare-workers/).

If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team.
---

# Securely access and upload assets with Cloudflare R2

URL: https://developers.cloudflare.com/workers/tutorials/upload-assets-with-r2/

import { Render, PackageManagers, WranglerConfig } from "~/components";

This tutorial explains how to create a TypeScript-based Cloudflare Workers project that can securely access files from and upload files to a [Cloudflare R2](/r2/) bucket. Cloudflare R2 allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

## Prerequisites

To continue:

1. Sign up for a [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages) if you have not already.
2. Install [`npm`](https://docs.npmjs.com/getting-started).
3. Install [`Node.js`](https://nodejs.org/en/). Use a Node version manager like [Volta](https://volta.sh/) or [nvm](https://github.com/nvm-sh/nvm) to avoid permission issues and change Node.js versions. [Wrangler](/workers/wrangler/install-and-update/) requires a Node version of `16.17.0` or later.

## Create a Worker application

First, use the [`create-cloudflare` CLI](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) to create a new Worker. To do this, open a terminal window and run the following command:

```sh
npm create cloudflare@latest upload-r2-assets
```

Move into your newly created directory:

```sh
cd upload-r2-assets
```

## Create an R2 bucket

Before you integrate R2 bucket access into your Worker application, an R2 bucket must be created:

```sh
npx wrangler r2 bucket create <YOUR_BUCKET_NAME>
```

Replace `<YOUR_BUCKET_NAME>` with the name you want to assign to your bucket. List your account's R2 buckets to verify that a new bucket has been added:

```sh
npx wrangler r2 bucket list
```

## Configure access to an R2 bucket

After your new R2 bucket is ready, use it inside your Worker application.
Use your R2 bucket inside your Worker project by modifying the [Wrangler configuration file](/workers/wrangler/configuration/) to include an R2 bucket [binding](/workers/runtime-apis/bindings/). Add the following R2 bucket binding to your Wrangler file:

```toml
[[r2_buckets]]
binding = 'MY_BUCKET'
bucket_name = '<YOUR_BUCKET_NAME>'
```

The `binding` value (`MY_BUCKET` in this example) is the variable name your Worker uses to access the bucket. Replace `<YOUR_BUCKET_NAME>` with the name of the R2 bucket you created earlier. Your Worker application can now access your R2 bucket using the `MY_BUCKET` variable, and you can perform CRUD (Create, Read, Update, Delete) operations on the contents of the bucket.

## Fetch from an R2 bucket

After setting up the R2 bucket binding, implement the functionality for the Worker to interact with the R2 bucket, such as fetching files from and uploading files to the bucket.

To fetch files from the R2 bucket, use the `BINDING.get` function. In the example below, the R2 bucket binding is called `MY_BUCKET`. Using `.get(key)`, you can retrieve an asset based on the URL pathname as the key. In this example, the URL pathname is `/image.png`, and the asset key is `image.png`.

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request, env): Promise<Response> {
    // For example, the request URL my-worker.account.workers.dev/image.png
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // Retrieve the key "image.png"

    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("Object Not Found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);

    return new Response(object.body, {
      headers,
    });
  },
} satisfies ExportedHandler<Env>;
```

This code fetches and returns data from the R2 bucket when a `GET` request is made to the Worker application using a specific URL path.

## Upload securely to an R2 bucket

Next, you will add the ability to upload to your R2 bucket using authentication.
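Before wiring the authentication in, it helps to see the request a client will eventually send: an ordinary `PUT` carrying a bearer token, where the URL pathname minus its leading slash becomes the object key. A minimal sketch using the standard `Request` class; the URL and token are hypothetical:

```javascript
// Hypothetical Worker URL and secret value, for illustration only.
const token = "my-upload-secret";

const upload = new Request("https://my-worker.account.workers.dev/logo.png", {
  method: "PUT",
  headers: { Authorization: `Bearer ${token}` },
  // A real client would also attach the file contents as the request body.
});

// The Worker will derive the R2 object key from the URL pathname.
const key = new URL(upload.url).pathname.slice(1);
console.log(upload.method, key); // "PUT logo.png"
```

Sending this request without the matching token should produce the `401 Unauthorized` response defined in the next step.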
To securely authenticate your upload requests, use [Wrangler's secret capability](/workers/wrangler/commands/#secret). Wrangler was installed when you ran the `npm create cloudflare@latest` command. Create a secret value of your choice -- for instance, a random string or password. Using the Wrangler CLI, add the secret to your project as `AUTH_SECRET`:

```sh
npx wrangler secret put AUTH_SECRET
```

Now, add a new code path that handles a `PUT` HTTP request. This new code will check that the previously uploaded secret is correctly used for authentication, and then upload to R2 using `MY_BUCKET.put(key, data)`:

```ts
interface Env {
  MY_BUCKET: R2Bucket;
  AUTH_SECRET: string;
}

export default {
  async fetch(request, env): Promise<Response> {
    if (request.method === "PUT") {
      // Note that you could require authentication for all requests
      // by moving this code to the top of the fetch function.
      const auth = request.headers.get("Authorization");
      const expectedAuth = `Bearer ${env.AUTH_SECRET}`;

      if (!auth || auth !== expectedAuth) {
        return new Response("Unauthorized", { status: 401 });
      }

      const url = new URL(request.url);
      const key = url.pathname.slice(1);

      await env.MY_BUCKET.put(key, request.body);
      return new Response(`Object ${key} uploaded successfully!`);
    }

    // include the previous code here...
  },
} satisfies ExportedHandler<Env>;
```

This approach ensures that only clients whose `Authorization` header carries a bearer token matching the `AUTH_SECRET` value are permitted to upload to the R2 bucket. If you used a different secret name than `AUTH_SECRET`, replace it in the code above.

## Deploy your Worker application

After completing your Cloudflare Worker project, deploy it to Cloudflare. Make sure you are in your Worker application directory that you created for this tutorial, then run:

```sh
npx wrangler deploy
```

Your application is now live and accessible at `<YOUR_WORKER>.<YOUR_SUBDOMAIN>.workers.dev`.
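The authorization rule above reduces to a single strict string comparison. Pulled out as a standalone helper (hypothetical, not part of the tutorial code), it behaves like this:

```javascript
// Mirrors the Worker's check: the Authorization header must equal
// the string `Bearer ` followed by the exact AUTH_SECRET value.
function isAuthorized(authHeader, secret) {
  const expectedAuth = `Bearer ${secret}`;
  return Boolean(authHeader) && authHeader === expectedAuth;
}

console.log(isAuthorized("Bearer s3cret", "s3cret")); // true
console.log(isAuthorized("Bearer wrong", "s3cret")); // false
console.log(isAuthorized(null, "s3cret")); // false: missing header
```

If timing side channels are a concern, a constant-time comparison can be substituted, but for a simple upload gate the strict equality above matches the tutorial's handler.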
You have successfully created a Cloudflare Worker that allows you to interact with an R2 bucket to accomplish tasks such as uploading and downloading files. You can now use this as a starting point for your own projects. ## Next steps To build more with R2 and Workers, refer to [Tutorials](/workers/tutorials/) and the [R2 documentation](/r2/). If you have any questions, need assistance, or would like to share your project, join the Cloudflare Developer community on [Discord](https://discord.cloudflare.com) to connect with fellow developers and the Cloudflare team. --- # Send Emails With Resend URL: https://developers.cloudflare.com/workers/tutorials/send-emails-with-resend/ In this tutorial, you will learn how to send transactional emails from Workers using [Resend](https://resend.com/). At the end of this tutorial, you’ll be able to: - Create a Worker to send emails. - Sign up and add a Cloudflare domain to Resend. - Send emails from your Worker using Resend. - Store API keys securely with secrets. ## Prerequisites To continue with this tutorial, you’ll need: - A [Cloudflare account](https://dash.cloudflare.com/sign-up/workers-and-pages), if you don’t already have one. - A [registered](/registrar/get-started/register-domain/) domain. - Installed [npm](https://docs.npmjs.com/getting-started). - A [Resend account](https://resend.com/signup). 
## Create a Worker project

Start by using [C3](/pages/get-started/c3/) to create a Worker project in the command line, then answer the prompts:

```sh
npm create cloudflare@latest
```

Alternatively, you can use CLI arguments to speed things up:

```sh
npm create cloudflare@latest email-with-resend -- --type=hello-world --ts=false --git=true --deploy=false
```

This creates a simple hello-world Worker with the following content:

```js
export default {
  async fetch(request, env, ctx) {
    return new Response("Hello World!");
  },
};
```

## Add your domain to Resend

If you don’t already have a Resend account, you can sign up for a [free account here](https://resend.com/signup). After signing up, go to `Domains` using the side menu, and click the button to add a new domain. In the modal, enter the domain you want to add and select a region.

Next, you’re presented with a list of DNS records to add to your Cloudflare domain. On your Cloudflare dashboard, select the domain you entered earlier and navigate to `DNS` > `Records`. Copy/paste the DNS records (DKIM, SPF, and DMARC records) from Resend to your Cloudflare domain.

![Image of adding DNS records to a Cloudflare domain](~/assets/images/workers/tutorials/resend/add_dns_records.png)

:::note
If you need more help adding DNS records in Cloudflare, refer to [Manage DNS records](/dns/manage-dns-records/how-to/create-dns-records/).
:::

When that’s done, head back to Resend and click on the `Verify DNS Records` button. If all records are properly configured, your domain status should be updated to `Verified`.

![Image of domain verification on the Resend dashboard](~/assets/images/workers/tutorials/resend/verified_domain.png)

Lastly, navigate to `API Keys` using the side menu to create an API key. Give your key a descriptive name and the appropriate permissions. Click the button to add your key, then copy your API key to a safe location.
## Send emails from your Worker

The final step is putting it all together in a Worker. Open up a terminal in the directory of the Worker you created earlier. Then, install the Resend SDK:

```sh
npm i resend
```

In your Worker, import and use the Resend library like so:

```jsx
import { Resend } from "resend";

export default {
  async fetch(request, env, ctx) {
    const resend = new Resend("your_resend_api_key");

    const { data, error } = await resend.emails.send({
      from: "hello@example.com",
      to: "someone@example.com",
      subject: "Hello World",
      html: "<p>Hello from Workers</p>",
    });

    return Response.json({ data, error });
  },
};
```

To test your code locally, run the following command and navigate to [http://localhost:8787/](http://localhost:8787/) in a browser:

```sh
npm start
```

Deploy your Worker with `npm run deploy`.

## Move API keys to Secrets

Sensitive information such as API keys and tokens should always be stored in secrets. All secrets are encrypted to add an extra layer of protection, so it’s a good idea to move your API key to a secret and access it from the environment of your Worker.

To add secrets for local development, create a `.dev.vars` file, which works exactly like a `.env` file:

```txt
RESEND_API_KEY=your_resend_api_key
```

Also ensure the secret is added to your deployed Worker by running:

```sh title="Add secret to deployed Worker"
npx wrangler secret put RESEND_API_KEY
```

The added secret can be accessed via the `env` parameter passed to your Worker’s fetch event handler:

```jsx
import { Resend } from "resend";

export default {
  async fetch(request, env, ctx) {
    const resend = new Resend(env.RESEND_API_KEY);

    const { data, error } = await resend.emails.send({
      from: "hello@example.com",
      to: "someone@example.com",
      subject: "Hello World",
      html: "<p>Hello from Workers</p>",
    });

    return Response.json({ data, error });
  },
};
```

And finally, deploy this update with `npm run deploy`.

## Related resources

- [Storing API keys and tokens with Secrets](/workers/configuration/secrets/).
- [Transferring your domain to Cloudflare](/registrar/get-started/transfer-domain-to-cloudflare/).
- [Send emails from Workers](/email-routing/email-workers/send-email-workers/)

---

# Set up and use a Prisma Postgres database

URL: https://developers.cloudflare.com/workers/tutorials/using-prisma-postgres-with-workers/

[Prisma Postgres](https://www.prisma.io/postgres) is a managed, serverless PostgreSQL database. It supports features like connection pooling, caching, real-time subscriptions, and query optimization recommendations.

In this tutorial, you will learn how to:

- Set up a Cloudflare Workers project with [Prisma ORM](https://www.prisma.io/docs).
- Create a Prisma Postgres instance from the Prisma CLI.
- Model data and run migrations with Prisma Postgres.
- Query the database from Workers.
- Deploy the Worker to Cloudflare.

## Prerequisites

To follow this guide, ensure you have the following:

- Node.js `v18.18` or higher installed.
- An active [Cloudflare account](https://dash.cloudflare.com/).
- A basic familiarity with installing and using command-line interface (CLI) applications.

## 1. Create a new Worker project

Begin by using [C3](/pages/get-started/c3/) to create a Worker project in the command line:

```sh
npm create cloudflare@latest prisma-postgres-worker -- --type=hello-world --ts=true --git=true --deploy=false
```

Then navigate into your project:

```sh
cd ./prisma-postgres-worker
```

Your initial `src/index.ts` file currently contains a simple request handler:

```ts title="src/index.ts"
export default {
  async fetch(request, env, ctx): Promise<Response> {
    return new Response("Hello World!");
  },
} satisfies ExportedHandler<Env>;
```

## 2.
Set up Prisma in your project

In this step, you will set up Prisma ORM with a Prisma Postgres database using the CLI. Then you will create and execute helper scripts to create tables in the database and generate a Prisma client to query it.

### 2.1. Install required dependencies

Install the Prisma CLI as a dev dependency:

```sh
npm install prisma --save-dev
```

Install the [Prisma Accelerate client extension](https://www.npmjs.com/package/@prisma/extension-accelerate), as it is required for Prisma Postgres:

```sh
npm install @prisma/extension-accelerate
```

Install the [`dotenv-cli` package](https://www.npmjs.com/package/dotenv-cli) to load environment variables from `.dev.vars`:

```sh
npm install dotenv-cli --save-dev
```

### 2.2. Create a Prisma Postgres database and initialize Prisma

Initialize Prisma in your application:

```sh
npx prisma@latest init --db
```

If you do not have a [Prisma Data Platform](https://console.prisma.io/) account yet, or if you are not logged in, the command will prompt you to log in using one of the available authentication providers. A browser window will open so you can log in or create an account. Return to the CLI after you have completed this step.

Once logged in (or if you were already logged in), the CLI will prompt you to select a project name and a database region.

Once the command has terminated, it will have created:

- A project in your [Platform Console](https://console.prisma.io/) containing a Prisma Postgres database instance.
- A `prisma` folder containing `schema.prisma`, where you will define your database schema.
- An `.env` file in the project root, which will contain the Prisma Postgres database URL (`DATABASE_URL=...`).

Note that Cloudflare Workers do not support `.env` files. You will use a file called `.dev.vars` instead of the `.env` file that was just created.

### 2.3. Prepare environment variables

Rename the `.env` file in the root of your application to `.dev.vars`:

```sh
mv .env .dev.vars
```

### 2.4. Apply database schema changes

Open the `schema.prisma` file in the `prisma` folder and add the following `User` model to your schema:

```prisma title="prisma/schema.prisma"
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id    Int    @id @default(autoincrement())
  email String
  name  String
}
```

Next, add the following helper scripts to the `scripts` section of your `package.json`:

```json title="package.json"
"scripts": {
  "migrate": "dotenv -e .dev.vars -- npx prisma migrate dev",
  "generate": "dotenv -e .dev.vars -- npx prisma generate --no-engine",
  "studio": "dotenv -e .dev.vars -- npx prisma studio",
  // Additional worker scripts...
}
```

Run the migration script to apply changes to the database:

```sh
npm run migrate
```

When prompted, provide a name for the migration (for example, `init`).

After these steps are complete, Prisma ORM is fully set up and connected to your Prisma Postgres database.

## 3.
Develop the application

Modify the `src/index.ts` file and replace its contents with the following code:

```ts title="src/index.ts"
import { PrismaClient } from "@prisma/client/edge";
import { withAccelerate } from "@prisma/extension-accelerate";

export interface Env {
  DATABASE_URL: string;
}

export default {
  async fetch(request, env, ctx): Promise<Response> {
    const path = new URL(request.url).pathname;
    if (path === "/favicon.ico")
      return new Response("Resource not found", {
        status: 404,
        headers: {
          "Content-Type": "text/plain",
        },
      });

    const prisma = new PrismaClient({
      datasourceUrl: env.DATABASE_URL,
    }).$extends(withAccelerate());

    const user = await prisma.user.create({
      data: {
        email: `Jon${Math.ceil(Math.random() * 1000)}@gmail.com`,
        name: "Jon Doe",
      },
    });

    const userCount = await prisma.user.count();

    return new Response(`\
Created new user: ${user.name} (${user.email}).
Number of users in the database: ${userCount}.
`);
  },
} satisfies ExportedHandler<Env>;
```

Run the development server:

```sh
npm run dev
```

Visit [`http://localhost:8787`](http://localhost:8787) to see your app display the following output:

```sh
Number of users in the database: 1
```

Every time you refresh the page, a new user is created. The number displayed will increment by `1` with each refresh as it returns the total number of users in your database.

## 4. Deploy the application to Cloudflare

When the application is deployed to Cloudflare, it needs access to the `DATABASE_URL` environment variable that is defined locally in `.dev.vars`. You can use the [`npx wrangler secret put`](/workers/configuration/secrets/#adding-secrets-to-your-project) command to upload the `DATABASE_URL` to the deployment environment:

```sh
npx wrangler secret put DATABASE_URL
```

When prompted, paste the `DATABASE_URL` value (from `.dev.vars`). If you are logged in via the Wrangler CLI, you will see a prompt asking if you'd like to create a new Worker.
Confirm by choosing "yes": ```sh ✔ There doesn't seem to be a Worker called "prisma-postgres-worker". Do you want to create a new Worker with that name and add secrets to it? … yes ``` Then execute the following command to deploy your project to Cloudflare Workers: ```sh npm run deploy ``` The `wrangler` CLI will bundle and upload your application. If you are not already logged in, the `wrangler` CLI will open a browser window prompting you to log in to the [Cloudflare dashboard](https://dash.cloudflare.com/). :::note If you belong to multiple accounts, select the account where you want to deploy the project. ::: Once the deployment completes, verify the deployment by visiting the live URL provided in the deployment output, such as `https://{PROJECT_NAME}.workers.dev`. If you encounter any issues, ensure the secrets were added correctly and check the deployment logs for errors. ## Next steps Congratulations on building and deploying a simple application with Prisma Postgres and Cloudflare Workers! To enhance your application further: - Add [caching](https://www.prisma.io/docs/postgres/caching) to your queries. - Explore the [Prisma Postgres documentation](https://www.prisma.io/docs/postgres/getting-started). To see how to build a real-time application with Cloudflare Workers and Prisma Postgres, read [this](https://www.prisma.io/docs/guides/prisma-postgres-realtime-on-cloudflare) guide. --- # Use Workers KV directly from Rust URL: https://developers.cloudflare.com/workers/tutorials/workers-kv-from-rust/ import { Render, WranglerConfig } from "~/components"; This tutorial will teach you how to read and write to KV directly from Rust using [workers-rs](https://github.com/cloudflare/workers-rs). 
## Prerequisites

To complete this tutorial, you will need:

- [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
- The [Wrangler](/workers/wrangler/) CLI.
- The [Rust](https://www.rust-lang.org/tools/install) toolchain.
- The `cargo-generate` subcommand, installed by running:

```sh
cargo install cargo-generate
```

## 1. Create your Worker project in Rust

Open a terminal window, and run the following command to generate a Worker project template in Rust:

```sh
cargo generate cloudflare/workers-rs
```

Then select the `template/hello-world-http` template, give your project a descriptive name, and press enter. A new project should be created in your directory. Open the project in your editor and run `npx wrangler dev` to compile and run your project.

In this tutorial, you will use Workers KV from Rust to build an app to store and retrieve cities by a given country name.

## 2. Create a KV namespace

In the terminal, use Wrangler to create a KV namespace for `cities`. This generates a configuration to be added to the project:

```sh
npx wrangler kv namespace create cities
```

To add this configuration to your project, open the Wrangler file and create an entry for `kv_namespaces` above the build command:

```toml
kv_namespaces = [
  { binding = "cities", id = "e29b263ab50e42ce9b637fa8370175e8" }
]
# build command...
```

With this configured, you can access the KV namespace with the binding `"cities"` from Rust.

## 3. Write data to KV

For this app, you will create two routes: a `POST` route to receive and store the city in KV, and a `GET` route to retrieve the city of a given country. For example, a `POST` request to `/France` with a body of `{"city": "Paris"}` should create an entry of Paris as a city in France. A `GET` request to `/France` should retrieve from KV and respond with Paris.

Install [Serde](https://serde.rs/) as a project dependency to handle JSON: `cargo add serde`.
Then create an app router and a struct for `Country` in `src/lib.rs`:

```rust null {1,6,8,9,10,11,15,17}
use serde::{Deserialize, Serialize};
use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let router = Router::new();

    #[derive(Serialize, Deserialize, Debug)]
    struct Country {
        city: String,
    }

    router
        // TODO:
        .post_async("/:country", |_, _| async move { Response::empty() })
        // TODO:
        .get_async("/:country", |_, _| async move { Response::empty() })
        .run(req, env)
        .await
}
```

For the post handler, you will retrieve the country name from the path and the city name from the request body. Then, you will save this in KV with the country as key and the city as value. Finally, the app will respond with the city name:

```rust
.post_async("/:country", |mut req, ctx| async move {
    let country = ctx.param("country").unwrap();
    let city = match req.json::<Country>().await {
        Ok(c) => c.city,
        Err(_) => String::from(""),
    };
    if city.is_empty() {
        return Response::error("Bad Request", 400);
    };

    return match ctx.kv("cities")?.put(country, &city)?.execute().await {
        Ok(_) => Response::ok(city),
        Err(_) => Response::error("Bad Request", 400),
    };
})
```

Save the file and make a `POST` request to test this endpoint:

```sh
curl --json '{"city": "Paris"}' http://localhost:8787/France
```

## 4. Read data from KV

To retrieve cities stored in KV, write a `GET` route that pulls the country name from the path and searches KV. You also need some error handling if the country is not found:

```rust
.get_async("/:country", |_req, ctx| async move {
    if let Some(country) = ctx.param("country") {
        return match ctx.kv("cities")?.get(country).text().await? {
            Some(city) => Response::ok(city),
            None => Response::error("Country not found", 404),
        };
    }
    Response::error("Bad Request", 400)
})
```

Save and make a curl request to test the endpoint:

```sh
curl http://localhost:8787/France
```

## 5.
Deploy your project

The source code for the completed app should include the following:

```rust
use serde::{Deserialize, Serialize};
use worker::*;

#[event(fetch)]
async fn fetch(req: Request, env: Env, _ctx: Context) -> Result<Response> {
    let router = Router::new();

    #[derive(Serialize, Deserialize, Debug)]
    struct Country {
        city: String,
    }

    router
        .post_async("/:country", |mut req, ctx| async move {
            let country = ctx.param("country").unwrap();
            let city = match req.json::<Country>().await {
                Ok(c) => c.city,
                Err(_) => String::from(""),
            };
            if city.is_empty() {
                return Response::error("Bad Request", 400);
            };

            return match ctx.kv("cities")?.put(country, &city)?.execute().await {
                Ok(_) => Response::ok(city),
                Err(_) => Response::error("Bad Request", 400),
            };
        })
        .get_async("/:country", |_req, ctx| async move {
            if let Some(country) = ctx.param("country") {
                return match ctx.kv("cities")?.get(country).text().await? {
                    Some(city) => Response::ok(city),
                    None => Response::error("Country not found", 404),
                };
            }
            Response::error("Bad Request", 400)
        })
        .run(req, env)
        .await
}
```

To deploy your Worker, run the following command:

```sh
npx wrangler deploy
```

## Related resources

- [Rust support in Workers](/workers/languages/rust/).
- [Using KV in Workers](/kv/get-started/).

---

# Migrations

URL: https://developers.cloudflare.com/workers/wrangler/migration/

import { DirectoryListing } from "~/components";

---

# Migrate from Wrangler v2 to v3

URL: https://developers.cloudflare.com/workers/wrangler/migration/update-v2-to-v3/

There are no special instructions for migrating from Wrangler v2 to v3. You should be able to update Wrangler by following the instructions in [Install/Update Wrangler](/workers/wrangler/install-and-update/#update-wrangler). You should experience no disruption to your workflow.
:::caution If you tried to update to Wrangler v3 prior to v3.3, you may have experienced some compatibility issues with older operating systems. Please try again with the latest v3 where those have been resolved. ::: ## Deprecations Refer to [Deprecations](/workers/wrangler/deprecations/#wrangler-v3) for more details on what is no longer supported in v3. ## Additional assistance If you do have an issue or need further assistance, [file an issue](https://github.com/cloudflare/workers-sdk/issues/new/choose) in the `workers-sdk` repo on GitHub. --- # Migrate from Wrangler v3 to v4 URL: https://developers.cloudflare.com/workers/wrangler/migration/update-v3-to-v4/ Wrangler v4 is a major release focused on updates to underlying systems and dependencies, along with improvements to keep Wrangler commands consistent and clear. Unlike previous major versions of Wrangler, which were [foundational rewrites](https://blog.cloudflare.com/wrangler-v2-beta/) and [rearchitectures](https://blog.cloudflare.com/wrangler3/) — Version 4 of Wrangler includes a much smaller set of changes. If you use Wrangler today, your workflow is very unlikely to change. While many users should expect a no-op upgrade, the following sections outline the more significant changes and steps for migrating where necessary. ### Summary of changes - **Updated Node.js support policy:** Node.js v16, which reached End-of-Life in 2022, is no longer supported in Wrangler v4. Wrangler now follows Node.js's [official support lifecycle](https://nodejs.org/en/about/previous-releases). - **Upgraded esbuild version**: Wrangler uses [esbuild](https://esbuild.github.io/) to bundle Worker code before deploying it, and was previously pinned to esbuild v0.17.19. Wrangler v4 uses esbuild v0.24, which could impact dynamic wildcard imports. 
Going forward, Wrangler will periodically update the `esbuild` version it includes, and since `esbuild` is a pre-1.0.0 tool, this may sometimes include breaking changes to how bundling works. In particular, we may bump the `esbuild` version in a Wrangler minor version.

- **Commands default to local mode**: All commands that can run in either local or remote mode now default to local, requiring a `--remote` flag for API queries.
- **Deprecated commands and configurations removed:** Legacy commands, flags, and configurations are removed.

## Detailed Changes

### Updated Node.js support policy

Wrangler now supports only Node.js versions that align with [Node.js's official lifecycle](https://nodejs.org/en/about/previous-releases):

- **Supported**: Current, Active LTS, Maintenance LTS
- **No longer supported:** Node.js v16 (EOL in 2022)

Wrangler tests no longer run on v16, and users still on this version may encounter unsupported behavior. Users still using Node.js v16 must upgrade to a supported version to continue receiving support and compatibility with Wrangler.

### Upgraded esbuild version

Wrangler v4 upgrades esbuild from **v0.17.19** to **v0.24**, bringing improvements (such as the ability to use the `using` keyword with RPC) and changes to bundling behavior:

- **Dynamic imports:** Wildcard imports (for example, `import('./data/' + kind + '.json')`) now automatically include all matching files in the bundle. Users relying on wildcard dynamic imports may see unwanted files bundled.

Prior to esbuild v0.19, `import` statements with dynamic paths (like `import('./data/' + kind + '.json')`) did not bundle all files matching the glob pattern (`*.json`). Only files explicitly referenced or included using `find_additional_modules` were bundled. With esbuild v0.19, wildcard imports now automatically bundle all files matching the glob pattern.
This could result in unwanted files being bundled, so users might want to avoid wildcard dynamic imports and use explicit imports instead. ### Commands default to local mode All commands now run in **local mode by default.** Wrangler has many commands for accessing resources like KV and R2, but the commands were previously inconsistent in whether they ran in a local or remote environment. For example, D1 defaulted to querying a local datastore, and required the `--remote` flag to query via the API. KV, on the other hand, previously defaulted to querying via the API (implicitly using the `--remote` flag) and required a `--local` flag to query a local datastore. To make the behavior consistent across Wrangler, each command now runs locally by default and requires an explicit `--remote` flag to query via the API. For example: - **Previous Behavior (Wrangler v3):** `wrangler kv get` queried remotely by default. - **New Behavior (Wrangler v4):** `wrangler kv get` queries locally unless `--remote` is specified. Those using `wrangler kv key` and/or `wrangler r2 object` commands to query or write to their data store will need to add the `--remote` flag in order to replicate previous behavior. ### Deprecated commands and configurations removed All previously deprecated features in [Wrangler v2](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v2) and in [Wrangler v3](https://developers.cloudflare.com/workers/wrangler/deprecations/#wrangler-v3) are now removed. Additionally, the following features that were deprecated during the Wrangler v3 release are also now removed: - Legacy Assets (using `wrangler dev/deploy --legacy-assets` or the `legacy_assets` config file property). Instead, we recommend you [migrate to Workers assets](https://developers.cloudflare.com/workers/static-assets/).
- Legacy Node.js compatibility (using `wrangler dev/deploy --node-compat` or the `node_compat` config file property). Instead, use the [`nodejs_compat` compatibility flag](https://developers.cloudflare.com/workers/runtime-apis/nodejs). This includes the functionality from legacy `node_compat` polyfills and natively implemented Node.js APIs. - `wrangler version`. Instead, use `wrangler --version` to check the current version of Wrangler. - `getBindingsProxy()` (via `import { getBindingsProxy } from "wrangler"`). Instead, use the [`getPlatformProxy()` API](https://developers.cloudflare.com/workers/wrangler/api/#getplatformproxy), which takes exactly the same arguments. - `usage_model`. This no longer has any effect, after the [rollout of Workers Standard Pricing](https://blog.cloudflare.com/workers-pricing-scale-to-zero/). --- # Build a Retrieval Augmented Generation (RAG) AI URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/ import { Details, Render, PackageManagers, WranglerConfig } from "~/components"; This guide will instruct you through setting up and deploying your first application with Cloudflare AI. You will build a fully-featured AI-powered application, using tools like Workers AI, Vectorize, D1, and Cloudflare Workers. :::note[Looking for a managed option?] [AutoRAG](/autorag) offers a fully managed way to build RAG pipelines on Cloudflare, handling ingestion, indexing, and querying out of the box. [Get started](/autorag/get-started/). ::: At the end of this tutorial, you will have built an AI tool that allows you to store information and query it using a Large Language Model. This pattern, known as Retrieval Augmented Generation, or RAG, is a useful project you can build by combining multiple aspects of Cloudflare's AI toolkit. 
You do not need to have experience working with AI tools to build this application. You will also need access to [Vectorize](/vectorize/platform/pricing/). During this tutorial, we will show how you can optionally integrate with [Anthropic Claude](http://anthropic.com) as well. You will need an [Anthropic API key](https://docs.anthropic.com/en/api/getting-started) to do so. ## 1. Create a new Worker project C3 (`create-cloudflare-cli`) is a command-line tool designed to help you set up and deploy Workers to Cloudflare as fast as possible. Open a terminal window and run C3 to create your Worker project: In your project directory, C3 has generated several files.
1. `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file. 2. `worker.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax. 3. `package.json`: A minimal Node dependencies configuration file. 4. `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json). 5. `node_modules`: Refer to [`npm` documentation on `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
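For orientation, the generated `worker.js` is a minimal ES-module Worker along these lines (a sketch only; the exact template C3 scaffolds may differ):

```javascript
// A minimal ES-module Worker, similar to what C3 scaffolds in src/worker.js.
// The exact generated template may differ; this sketch shows the shape:
// a default export with a fetch handler receiving (request, env, ctx).
const worker = {
	async fetch(request, env, ctx) {
		return new Response("Hello World!");
	},
};

export default worker;
```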
Now, move into your newly created directory: ```sh cd rag-ai-tutorial ``` ## 2. Develop with Wrangler CLI The Workers command-line interface, [Wrangler](/workers/wrangler/install-and-update/), allows you to [create](/workers/wrangler/commands/#init), [test](/workers/wrangler/commands/#dev), and [deploy](/workers/wrangler/commands/#deploy) your Workers projects. C3 will install Wrangler in projects by default. After you have created your first Worker, run the [`wrangler dev`](/workers/wrangler/commands/#dev) command in the project directory to start a local server for developing your Worker. This will allow you to test your Worker locally during development. ```sh npx wrangler dev --remote ``` :::note If you have not used Wrangler before, it will try to open your web browser to login with your Cloudflare account. If you have issues with this step or you do not have access to a browser interface, refer to the [`wrangler login`](/workers/wrangler/commands/#login) documentation for more information. ::: You will now be able to go to [http://localhost:8787](http://localhost:8787) to see your Worker running. Any changes you make to your code will trigger a rebuild, and reloading the page will show you the up-to-date output of your Worker. ## 3. Adding the AI binding To begin using Cloudflare's AI products, you can add the `ai` block to the [Wrangler configuration file](/workers/wrangler/configuration/). This will set up a binding to Cloudflare's AI models in your code that you can use to interact with the available AI models on the platform. This example features the [`@cf/meta/llama-3-8b-instruct` model](/workers-ai/models/llama-3-8b-instruct/), which generates text. ```toml [ai] binding = "AI" ``` Now, find the `src/index.js` file. 
Inside the `fetch` handler, you can query the `AI` binding: ```js export default { async fetch(request, env, ctx) { const answer = await env.AI.run("@cf/meta/llama-3-8b-instruct", { messages: [{ role: "user", content: `What is the square root of 9?` }], }); return new Response(JSON.stringify(answer)); }, }; ``` By querying the LLM through the `AI` binding, we can interact with Cloudflare's large language models directly in our code. In this example, we are using the [`@cf/meta/llama-3-8b-instruct` model](/workers-ai/models/llama-3-8b-instruct/), which generates text. You can deploy your Worker using `wrangler`: ```sh npx wrangler deploy ``` Making a request to your Worker will now generate a text response from the LLM, and return it as a JSON object. ```sh curl https://example.username.workers.dev ``` ```sh output {"response":"Answer: The square root of 9 is 3."} ``` ## 4. Adding embeddings using Cloudflare D1 and Vectorize Embeddings allow you to add additional capabilities to the language models you can use in your Cloudflare AI projects. This is done via **Vectorize**, Cloudflare's vector database. To begin using Vectorize, create a new embeddings index using `wrangler`. This index will store vectors with 768 dimensions, and will use cosine similarity to determine which vectors are most similar to each other: ```sh npx wrangler vectorize create vector-index --dimensions=768 --metric=cosine ``` Then, add the configuration details for your new Vectorize index to the [Wrangler configuration file](/workers/wrangler/configuration/): ```toml # ... existing wrangler configuration [[vectorize]] binding = "VECTOR_INDEX" index_name = "vector-index" ``` A vector index allows you to store a collection of vectors: arrays of floating point numbers that represent your data. When you want to query the vector database, you convert your query into a vector of the same form.
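To build intuition for the `cosine` metric chosen when creating the index, here is an illustrative cosine-similarity calculation between two vectors (not part of the tutorial code; Vectorize performs this kind of comparison for you at scale):

```javascript
// Cosine similarity: 1 means the vectors point in the same direction
// (most similar), 0 means they are orthogonal (unrelated).
function cosineSimilarity(a, b) {
	let dot = 0;
	let normA = 0;
	let normB = 0;
	for (let i = 0; i < a.length; i++) {
		dot += a[i] * b[i];
		normA += a[i] * a[i];
		normB += b[i] * b[i];
	}
	return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Embeddings with 768 dimensions work the same way, just with longer arrays.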
**Vectorize** is designed to efficiently determine which stored vectors are most similar to your query. To implement search, you will set up a D1 database from Cloudflare to store your app's data. You then convert that data into vectors, and when a query matches a stored vector, you can return the corresponding data. Create a new D1 database using `wrangler`: ```sh npx wrangler d1 create database ``` Then, paste the configuration details output from the previous command into the [Wrangler configuration file](/workers/wrangler/configuration/): ```toml # ... existing wrangler configuration [[d1_databases]] binding = "DB" # available in your Worker on env.DB database_name = "database" database_id = "abc-def-geh" # replace this with a real database_id (UUID) ``` In this application, we'll create a `notes` table in D1, which will allow us to store notes and later retrieve them in Vectorize. To create this table, run a SQL command using `wrangler d1 execute`: ```sh npx wrangler d1 execute database --remote --command "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, text TEXT NOT NULL)" ``` Now, we can add a new note to our database using `wrangler d1 execute`: ```sh npx wrangler d1 execute database --remote --command "INSERT INTO notes (text) VALUES ('The best pizza topping is pepperoni')" ``` ## 5. Creating a workflow Before we begin creating notes, we will introduce a [Cloudflare Workflow](/workflows). This will allow us to define a durable workflow that can safely and robustly execute all the steps of the RAG process. To begin, add a new `[[workflows]]` block to your [Wrangler configuration file](/workers/wrangler/configuration/): ```toml # ...
existing wrangler configuration [[workflows]] name = "rag" binding = "RAG_WORKFLOW" class_name = "RAGWorkflow" ``` In `src/index.js`, add a new class called `RAGWorkflow` that extends `WorkflowEntrypoint`: ```js import { WorkflowEntrypoint } from "cloudflare:workers"; export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { await step.do("example step", async () => { console.log("Hello World!"); }); } } ``` This class will define a single workflow step that will log "Hello World!" to the console. You can add as many steps as you need to your workflow. On its own, this workflow will not do anything. To execute the workflow, we will call the `RAG_WORKFLOW` binding, passing in any parameters that the workflow needs to properly complete. Here is an example of how we can call the workflow: ```js env.RAG_WORKFLOW.create({ params: { text } }); ``` ## 6. Creating notes and adding them to Vectorize To allow your Worker to handle multiple routes, we will add `hono`, a routing library for Workers. This will allow us to create a new route for adding notes to our database. Install `hono` using `npm`: ```sh npm install hono ``` Then, import `hono` into your `src/index.js` file. You should also update the `fetch` handler to use `hono`: ```js import { Hono } from "hono"; const app = new Hono(); app.get("/", async (c) => { const answer = await c.env.AI.run("@cf/meta/llama-3-8b-instruct", { messages: [{ role: "user", content: `What is the square root of 9?` }], }); return c.json(answer); }); export default app; ``` This will establish a route at the root path `/` that is functionally equivalent to the previous version of your application. Now, we can update our workflow to begin adding notes to our database, and generating the related embeddings for them. This example features the [`@cf/baai/bge-base-en-v1.5` model](/workers-ai/models/bge-base-en-v1.5/), which can be used to create an embedding.
Embeddings are stored and retrieved inside [Vectorize](/vectorize/), Cloudflare's vector database. The user query is also turned into an embedding so that it can be used for searching within Vectorize. ```js import { WorkflowEntrypoint } from "cloudflare:workers"; export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { const env = this.env; const { text } = event.payload; const record = await step.do(`create database record`, async () => { const query = "INSERT INTO notes (text) VALUES (?) RETURNING *"; const { results } = await env.DB.prepare(query).bind(text).run(); const record = results[0]; if (!record) throw new Error("Failed to create note"); return record; }); const embedding = await step.do(`generate embedding`, async () => { const embeddings = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: text, }); const values = embeddings.data[0]; if (!values) throw new Error("Failed to generate vector embedding"); return values; }); await step.do(`insert vector`, async () => { return env.VECTOR_INDEX.upsert([ { id: record.id.toString(), values: embedding, }, ]); }); } } ``` The workflow does the following things: 1. Accepts a `text` parameter. 2. Inserts a new row into the `notes` table in D1, and retrieves the `id` of the new row. 3. Converts the `text` into a vector using the embeddings model through the `AI` binding. 4. Upserts the `id` and the vector into the `vector-index` index in Vectorize. By doing this, you will create a new vector representation of the note, which can be used to retrieve the note later. To complete the code, we will add a route that allows users to submit notes to the database.
This route will parse the JSON request body, get the `note` parameter, and create a new instance of the workflow, passing the parameter: ```js app.post("/notes", async (c) => { const { text } = await c.req.json(); if (!text) return c.text("Missing text", 400); await c.env.RAG_WORKFLOW.create({ params: { text } }); return c.text("Created note", 201); }); ``` ## 7. Querying Vectorize to retrieve notes To complete your code, you can update the root path (`/`) to query Vectorize. You will convert the query into a vector, and then use the `vector-index` index to find the most similar vectors. The `topK` parameter limits the number of vectors returned by the function. For instance, providing a `topK` of 1 will only return the _most similar_ vector based on the query. Setting `topK` to 5 will return the 5 most similar vectors. Given a list of similar vectors, you can retrieve the notes that match the record IDs stored alongside those vectors. In this case, we are only retrieving a single note - but you may customize this as needed. You can insert the text of those notes as context into the prompt for the LLM binding. This is the basis of Retrieval-Augmented Generation, or RAG: providing additional context from data outside of the LLM to enhance the text generated by the LLM. We'll update the prompt to include the context, and to ask the LLM to use the context when responding: ```js import { Hono } from "hono"; const app = new Hono(); // Existing post route... // app.post('/notes', async (c) => { ... 
}) app.get("/", async (c) => { const question = c.req.query("text") || "What is the square root of 9?"; const embeddings = await c.env.AI.run("@cf/baai/bge-base-en-v1.5", { text: question, }); const vectors = embeddings.data[0]; const vectorQuery = await c.env.VECTOR_INDEX.query(vectors, { topK: 1 }); let vecId; if ( vectorQuery.matches && vectorQuery.matches.length > 0 && vectorQuery.matches[0] ) { vecId = vectorQuery.matches[0].id; } else { console.log("No matching vector found or vectorQuery.matches is empty"); } let notes = []; if (vecId) { const query = `SELECT * FROM notes WHERE id = ?`; const { results } = await c.env.DB.prepare(query).bind(vecId).all(); if (results) notes = results.map((vec) => vec.text); } const contextMessage = notes.length ? `Context:\n${notes.map((note) => `- ${note}`).join("\n")}` : ""; const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.`; const { response: answer } = await c.env.AI.run( "@cf/meta/llama-3-8b-instruct", { messages: [ ...(notes.length ? [{ role: "system", content: contextMessage }] : []), { role: "system", content: systemPrompt }, { role: "user", content: question }, ], }, ); return c.text(answer); }); app.onError((err, c) => { return c.text(err.message, 500); }); export default app; ``` ## 8. Adding Anthropic Claude model (optional) If you are working with larger documents, you have the option to use Anthropic's [Claude models](https://claude.ai/), which have large context windows and are well-suited to RAG workflows. To begin, install the `@anthropic-ai/sdk` package: ```sh npm install @anthropic-ai/sdk ``` In `src/index.js`, you can update the `GET /` route to check for the `ANTHROPIC_API_KEY` environment variable. If it's set, we can generate text using the Anthropic SDK. If it isn't set, we'll fall back to the existing Workers AI code: ```js import Anthropic from '@anthropic-ai/sdk'; app.get('/', async (c) => { // ...
Existing code const systemPrompt = `When answering the question or responding, use the context provided, if it is provided and relevant.` let modelUsed = "" let response = null if (c.env.ANTHROPIC_API_KEY) { const anthropic = new Anthropic({ apiKey: c.env.ANTHROPIC_API_KEY }) const model = "claude-3-5-sonnet-latest" modelUsed = model const message = await anthropic.messages.create({ max_tokens: 1024, model, messages: [ { role: 'user', content: question } ], system: [systemPrompt, notes.length ? contextMessage : ''].join(" ") }) response = { response: message.content.map(content => content.text).join("\n") } } else { const model = "@cf/meta/llama-3.1-8b-instruct" modelUsed = model response = await c.env.AI.run( model, { messages: [ ...(notes.length ? [{ role: 'system', content: contextMessage }] : []), { role: 'system', content: systemPrompt }, { role: 'user', content: question } ] } ) } if (response) { c.header('x-model-used', modelUsed) return c.text(response.response) } else { return c.text("We were unable to generate output", 500) } }) ``` Finally, you'll need to set the `ANTHROPIC_API_KEY` environment variable in your Workers application. You can do this by using `wrangler secret put`: ```sh npx wrangler secret put ANTHROPIC_API_KEY ``` ## 9. Deleting notes and vectors If you no longer need a note, you can delete it from the database. Any time that you delete a note, you will also need to delete the corresponding vector from Vectorize. You can implement this by building a `DELETE /notes/:id` route in your `src/index.js` file: ```js app.delete("/notes/:id", async (c) => { const { id } = c.req.param(); const query = `DELETE FROM notes WHERE id = ?`; await c.env.DB.prepare(query).bind(id).run(); await c.env.VECTOR_INDEX.deleteByIds([id]); return c.body(null, 204); }); ``` ## 10. Text splitting (optional) For large pieces of text, it is recommended to split the text into smaller chunks.
This allows LLMs to more effectively gather relevant context, without needing to retrieve large pieces of text. To implement this, we'll add a new NPM package to our project, `@langchain/textsplitters`: ```sh npm install @langchain/textsplitters ``` The `RecursiveCharacterTextSplitter` class provided by this package will split the text into smaller chunks. It can be customized to your liking, but the default config works in most cases: ```js import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters"; const text = "Some long piece of text..."; const splitter = new RecursiveCharacterTextSplitter({ // These can be customized to change the chunking size // chunkSize: 1000, // chunkOverlap: 200, }); const output = await splitter.createDocuments([text]); console.log(output); // [{ pageContent: 'Some long piece of text...' }] ``` To use this splitter, we'll update the workflow to split the text into smaller chunks. We'll then iterate over the chunks and run the rest of the workflow for each chunk of text: ```js export class RAGWorkflow extends WorkflowEntrypoint { async run(event, step) { const env = this.env; const { text } = event.payload; let texts = await step.do("split text", async () => { const splitter = new RecursiveCharacterTextSplitter(); const output = await splitter.createDocuments([text]); return output.map((doc) => doc.pageContent); }); console.log( `RecursiveCharacterTextSplitter generated ${texts.length} chunks`, ); for (const index in texts) { const text = texts[index]; const record = await step.do( `create database record: ${index}/${texts.length}`, async () => { const query = "INSERT INTO notes (text) VALUES (?) 
RETURNING *"; const { results } = await env.DB.prepare(query).bind(text).run(); const record = results[0]; if (!record) throw new Error("Failed to create note"); return record; }, ); const embedding = await step.do( `generate embedding: ${index}/${texts.length}`, async () => { const embeddings = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: text, }); const values = embeddings.data[0]; if (!values) throw new Error("Failed to generate vector embedding"); return values; }, ); await step.do(`insert vector: ${index}/${texts.length}`, async () => { return env.VECTOR_INDEX.upsert([ { id: record.id.toString(), values: embedding, }, ]); }); } } } ``` Now, when large pieces of text are submitted to the `/notes` endpoint, they will be split into smaller chunks, and each chunk will be processed by the workflow. ## 11. Deploy your project If you did not deploy your Worker during [step 1](/workers/get-started/guide/#1-create-a-new-worker-project), deploy your Worker via Wrangler, to a `*.workers.dev` subdomain, or a [Custom Domain](/workers/configuration/routing/custom-domains/), if you have one configured. If you have not configured any subdomain or domain, Wrangler will prompt you during the publish process to set one up. ```sh npx wrangler deploy ``` Preview your Worker at `..workers.dev`. :::note[Note] When pushing to your `*.workers.dev` subdomain for the first time, you may see [`523` errors](/support/troubleshooting/http-status-codes/cloudflare-5xx-errors/error-523/) while DNS is propagating. These errors should resolve themselves after a minute or so. ::: ## Related resources A full version of this codebase is available on GitHub. It includes a frontend UI for querying, adding, and deleting notes, as well as a backend API for interacting with the database and vector index. 
You can find it here: [github.com/kristianfreeman/cloudflare-retrieval-augmented-generation-example](https://github.com/kristianfreeman/cloudflare-retrieval-augmented-generation-example/). To do more: - Explore the reference diagram for a [Retrieval Augmented Generation (RAG) Architecture](/reference-architecture/diagrams/ai/ai-rag/). - Review Cloudflare's [AI documentation](/workers-ai). - Review [Tutorials](/workers/tutorials/) to build projects on Workers. - Explore [Examples](/workers/examples/) to experiment with copy and paste Worker code. - Understand how Workers works in [Reference](/workers/reference/). - Learn about Workers features and functionality in [Platform](/workers/platform/). - Set up [Wrangler](/workers/wrangler/install-and-update/) to programmatically create, test, and deploy your Worker projects. --- # Build a Voice Notes App with auto transcriptions using Workers AI URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-voice-notes-app-with-auto-transcription/ import { Render, PackageManagers, Tabs, TabItem } from "~/components"; In this tutorial, you will learn how to create a Voice Notes App with automatic transcriptions of voice recordings, and optional post-processing. The following tools will be used to build the application: - Workers AI to transcribe the voice recordings, and for the optional post processing - D1 database to store the notes - R2 storage to store the voice recordings - Nuxt framework to build the full-stack application - Workers to deploy the project ## Prerequisites To continue, you will need: ## 1. Create a new Worker project Create a new Worker project using the `c3` CLI with the `nuxt` framework preset. 
### Install additional dependencies Change into the newly created project directory: ```sh cd voice-notes ``` And install the following dependencies: Then add the `@nuxt/ui` module to the `nuxt.config.ts` file: ```ts title="nuxt.config.ts" export default defineNuxtConfig({ //.. modules: ['nitro-cloudflare-dev', '@nuxt/ui'], //.. }) ``` ### [Optional] Move to Nuxt 4 compatibility mode Moving to Nuxt 4 compatibility mode ensures that your application remains forward-compatible with upcoming updates to Nuxt. Create a new `app` folder in the project's root directory and move the `app.vue` file to it. Also, add the following to your `nuxt.config.ts` file: ```ts title="nuxt.config.ts" export default defineNuxtConfig({ //.. future: { compatibilityVersion: 4, }, //.. }) ``` :::note The rest of the tutorial will use the `app` folder for keeping the client side code. If you did not make this change, you should continue to use the project's root directory. ::: ### Start local development server At this point, you can test your application by starting a local development server using: If everything is set up correctly, you should see a Nuxt welcome page at `http://localhost:3000`. ## 2. Create the transcribe API endpoint This API makes use of Workers AI to transcribe the voice recordings. To use Workers AI within your project, you first need to bind it to the Worker. Add the `AI` binding to the Wrangler file. ```toml title="wrangler.toml" [ai] binding = "AI" ``` Once the `AI` binding has been configured, run the `cf-typegen` command to generate the necessary Cloudflare type definitions. This makes the type definitions available in the server event contexts. Create a transcribe `POST` endpoint by creating a `transcribe.post.ts` file inside the `/server/api` directory.
```ts title="server/api/transcribe.post.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const form = await readFormData(event); const blob = form.get('audio') as Blob; if (!blob) { throw createError({ statusCode: 400, message: 'Missing audio blob to transcribe', }); } try { const response = await cloudflare.env.AI.run('@cf/openai/whisper', { audio: [...new Uint8Array(await blob.arrayBuffer())], }); return response.text; } catch (err) { console.error('Error transcribing audio:', err); throw createError({ statusCode: 500, message: 'Failed to transcribe audio. Please try again.', }); } }); ``` The above code does the following: 1. Extracts the audio blob from the event. 2. Transcribes the blob using the `@cf/openai/whisper` model and returns the transcription text as the response. ## 3. Create an API endpoint for uploading audio recordings to R2 Before uploading the audio recordings to `R2`, you first need to create a bucket. You will also need to add the R2 binding to your Wrangler file and regenerate the Cloudflare type definitions. Create an `R2` bucket. Add the storage binding to your Wrangler file. ```toml title="wrangler.toml" [[r2_buckets]] binding = "R2" bucket_name = "" ``` Finally, generate the type definitions by rerunning the `cf-typegen` script. Now you are ready to create the upload endpoint.
Create a new `upload.put.ts` file in your `server/api` directory, and add the following code to it: ```ts title="server/api/upload.put.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const form = await readFormData(event); const files = form.getAll('files') as File[]; if (!files.length) { throw createError({ statusCode: 400, message: 'Missing files' }); } const uploadKeys: string[] = []; for (const file of files) { const obj = await cloudflare.env.R2.put(`recordings/${file.name}`, file); if (obj) { uploadKeys.push(obj.key); } } return uploadKeys; }); ``` The above code does the following: 1. The `files` variable retrieves all files sent by the client using `form.getAll()`, which allows for multiple uploads in a single request. 2. Uploads the files to the R2 bucket using the binding (`R2`) you created earlier. :::note The `recordings/` prefix organizes uploaded files within a dedicated folder in your bucket. This will also come in handy when serving these recordings to the client (covered later). ::: ## 4. Create an API endpoint to save notes entries Before creating the endpoint, you will need to perform steps similar to those for the R2 bucket, with some additional steps to prepare a notes table. Create a `D1` database. Add the D1 bindings to the Wrangler file. You can get the `DB_ID` from the output of the `d1 create` command. ```toml title="wrangler.toml" [[d1_databases]] binding = "DB" database_name = "" database_id = "" ``` As before, rerun the `cf-typegen` command to generate the types. Next, create a DB migration named "create notes table". This will create a new `migrations` folder in the project's root directory, and add an empty `0001_create_notes_table.sql` file to it. Replace the contents of this file with the code below.
```sql CREATE TABLE IF NOT EXISTS notes ( id INTEGER PRIMARY KEY AUTOINCREMENT, text TEXT NOT NULL, created_at DATETIME DEFAULT CURRENT_TIMESTAMP, updated_at DATETIME DEFAULT CURRENT_TIMESTAMP, audio_urls TEXT ); ``` And then apply this migration to create the `notes` table. :::note The above command will create the notes table locally. To apply the migration on your remote production database, use the `--remote` flag. ::: Now you can create the API endpoint. Create a new file `index.post.ts` in the `server/api/notes` directory, and change its content to the following: ```ts title="server/api/notes/index.post.ts" export default defineEventHandler(async (event) => { const { cloudflare } = event.context; const { text, audioUrls } = await readBody(event); if (!text) { throw createError({ statusCode: 400, message: 'Missing note text', }); } try { await cloudflare.env.DB.prepare( 'INSERT INTO notes (text, audio_urls) VALUES (?1, ?2)' ) .bind(text, audioUrls ? JSON.stringify(audioUrls) : null) .run(); return setResponseStatus(event, 201); } catch (err) { console.error('Error creating note:', err); throw createError({ statusCode: 500, message: 'Failed to create note. Please try again.', }); } }); ``` The above does the following: 1. Extracts the `text` and the optional `audioUrls` from the event body. 2. Saves them to the database, after converting `audioUrls` to a JSON string. ## 5. Handle note creation on the client-side Now you're ready to work on the client side. Let's start by tackling the note creation part first. ### Recording user audio Create a composable to handle audio recording using the MediaRecorder API. This will be used to record notes through the user's microphone.
Create a new file `useMediaRecorder.ts` in the `app/composables` folder, and add the following code to it: ```ts title="app/composables/useMediaRecorder.ts" interface MediaRecorderState { isRecording: boolean; recordingDuration: number; audioData: Uint8Array | null; updateTrigger: number; } export function useMediaRecorder() { const state = ref<MediaRecorderState>({ isRecording: false, recordingDuration: 0, audioData: null, updateTrigger: 0, }); let mediaRecorder: MediaRecorder | null = null; let audioContext: AudioContext | null = null; let analyser: AnalyserNode | null = null; let animationFrame: number | null = null; let audioChunks: Blob[] | undefined = undefined; const updateAudioData = () => { if (!analyser || !state.value.isRecording || !state.value.audioData) { if (animationFrame) { cancelAnimationFrame(animationFrame); animationFrame = null; } return; } analyser.getByteTimeDomainData(state.value.audioData); state.value.updateTrigger += 1; animationFrame = requestAnimationFrame(updateAudioData); }; const startRecording = async () => { try { const stream = await navigator.mediaDevices.getUserMedia({ audio: true }); audioContext = new AudioContext(); analyser = audioContext.createAnalyser(); const source = audioContext.createMediaStreamSource(stream); source.connect(analyser); mediaRecorder = new MediaRecorder(stream); audioChunks = []; mediaRecorder.ondataavailable = (e: BlobEvent) => { audioChunks?.push(e.data); state.value.recordingDuration += 1; }; state.value.audioData = new Uint8Array(analyser.frequencyBinCount); state.value.isRecording = true; state.value.recordingDuration = 0; state.value.updateTrigger = 0; mediaRecorder.start(1000); updateAudioData(); } catch (err) { console.error('Error accessing microphone:', err); throw err; } }; const stopRecording = async () => { return await new Promise((resolve) => { if (mediaRecorder && state.value.isRecording) { mediaRecorder.onstop = () => { const blob = new Blob(audioChunks, { type: 'audio/webm' }); audioChunks = undefined; 
state.value.recordingDuration = 0; state.value.updateTrigger = 0; state.value.audioData = null; resolve(blob); }; state.value.isRecording = false; mediaRecorder.stop(); mediaRecorder.stream.getTracks().forEach((track) => track.stop()); if (animationFrame) { cancelAnimationFrame(animationFrame); animationFrame = null; } audioContext?.close(); audioContext = null; } }); }; onUnmounted(() => { stopRecording(); }); return { state: readonly(state), startRecording, stopRecording, }; } ``` The above code does the following: 1. Exposes functions to start and stop audio recordings in a Vue application. 2. Captures audio input from the user's microphone using MediaRecorder API. 3. Processes real-time audio data for visualization using AudioContext and AnalyserNode. 4. Stores recording state including duration and recording status. 5. Maintains chunks of audio data and combines them into a final audio blob when recording stops. 6. Updates audio visualization data continuously using animation frames while recording. 7. Automatically cleans up all audio resources when recording stops or component unmounts. 8. Returns audio recordings in webm format for further processing. ### Create a component for note creation This component allows users to create notes by either typing or recording audio. It also handles audio transcription and uploading the recordings to the server. Create a new file named `CreateNote.vue` inside the `app/components` folder. Add the following template code to the newly created file: ```vue title="app/components/CreateNote.vue" ``` The above template results in the following: 1. A panel with a `textarea` inside to type the note manually. 2. Another panel to manage start/stop of an audio recording, and show the recordings done already. 3. A bottom panel to reset or save the note (along with the recordings). 
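When a recording finishes, the component sends the audio to the transcription endpoint as multipart form data. Building that request body can be sketched in isolation (the helper and field names below are illustrative; the component's actual code may differ):

```typescript
// Illustrative helper: packages a recorded audio blob (and, optionally, a
// post-processing prompt - used later in this tutorial) as multipart form
// data for POST /api/transcribe.
function buildTranscribeForm(audio: Blob, prompt?: string): FormData {
  const form = new FormData();
  form.append("audio", audio, "recording.webm");
  if (prompt) {
    form.append("prompt", prompt);
  }
  return form;
}
```

In the component, the returned `FormData` would be passed as the request body when calling the transcribe API endpoint.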
Now, add the following code below the template code in the same file:

```vue title="app/components/CreateNote.vue"
```

The above code does the following:

1. When a recording is stopped by calling the `handleRecordingStop` function, the audio blob is sent for transcription to the transcribe API endpoint.
2. The transcription response text is appended to the existing textarea content.
3. When the note is saved by calling the `saveNote` function, the audio recordings are uploaded first to R2 by using the upload endpoint we created earlier. Then, the actual note content along with the audioUrls (the R2 object keys) is saved by calling the notes post endpoint.

### Create a new page route for showing the component

You can use this component in a Nuxt page to show it to the user. But before that, you need to modify your `app.vue` file. Update the content of your `app.vue` to the following:

```vue title="/app/app.vue"
```

The above code allows a Nuxt page to be shown to the user, apart from showing an app header and a navigation sidebar.

Next, add a new file named `new.vue` inside the `app/pages` folder, and add the following code to it:

```vue title="app/pages/new.vue"
```

The above code shows the `CreateNote` component inside a modal, and navigates back to the home page on successful note creation.

## 6. Showing the notes on the client side

To show the notes from the database on the client side, create an API endpoint first that will interact with the database.
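The notes endpoints store the list of R2 object keys as a JSON string in the `audio_urls` TEXT column and parse it back when reading. That round-trip can be illustrated in isolation (the helper names are mine, not part of the tutorial's code):

```typescript
// audio_urls is a TEXT column, so an array of R2 keys is serialized on
// write and deserialized on read. A missing value round-trips as
// null (in the database) / undefined (on the client).
function serializeAudioUrls(audioUrls?: string[]): string | null {
  return audioUrls ? JSON.stringify(audioUrls) : null;
}

function deserializeAudioUrls(stored: string | null): string[] | undefined {
  return stored ? (JSON.parse(stored) as string[]) : undefined;
}
```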
### Create an API endpoint to fetch notes from the database

Create a new file named `index.get.ts` inside the `server/api/notes` directory, and add the following code to it:

```ts title="server/api/notes/index.get.ts"
import type { Note } from '~~/types';

export default defineEventHandler(async (event) => {
  const { cloudflare } = event.context;

  const res = await cloudflare.env.DB.prepare(
    `SELECT
      id,
      text,
      audio_urls AS audioUrls,
      created_at AS createdAt,
      updated_at AS updatedAt
    FROM notes
    ORDER BY created_at DESC
    LIMIT 50;`
  ).all<Omit<Note, 'audioUrls'> & { audioUrls: string | null }>();

  return res.results.map((note) => ({
    ...note,
    audioUrls: note.audioUrls ? JSON.parse(note.audioUrls) : undefined,
  }));
});
```

The above code fetches the last 50 notes from the database, ordered by their creation date in descending order. The `audio_urls` field is stored as a string in the database, but it's converted back to an array using `JSON.parse` so that multiple audio files can be handled seamlessly on the client side.

Next, create a page named `index.vue` inside the `app/pages` directory. This will be the home page of the application. Add the following code to it:

```vue title="app/pages/index.vue"
```

The above code fetches the notes from the database by calling the `/api/notes` endpoint you created just now, and renders them as note cards.

### Serving the saved recordings from R2

To be able to play the audio recordings of these notes, you need to serve the saved recordings from the R2 storage.

Create a new file named `[...pathname].get.ts` inside the `server/routes/recordings` directory, and add the following code to it:

:::note
The `...` prefix in the file name makes it a catch-all route. This allows it to receive all requests for paths starting with the `/recordings` prefix. This is where the `recordings` prefix that was added previously while saving the recordings becomes helpful.
:::

```ts title="server/routes/recordings/[...pathname].get.ts"
export default defineEventHandler(async (event) => {
  const { cloudflare, params } = event.context;

  const { pathname } = params || {};

  return cloudflare.env.R2.get(`recordings/${pathname}`);
});
```

The above code extracts the path name from the event params, and serves the saved recording matching that object key from the R2 bucket.

## 7. [Optional] Post-processing the transcriptions

Even though speech-to-text transcription models perform satisfactorily, sometimes you may want to post-process the transcriptions, be it to remove inaccuracies or to change the tone/style of the final text.

### Create a settings page

Create a new file named `settings.vue` in the `app/pages` folder, and add the following code to it:

```vue title="app/pages/settings.vue"
```

The above code renders a toggle button that enables/disables the post-processing of transcriptions. If enabled, users can change the prompt that will be used while post-processing the transcription with an AI model.

The transcription settings are saved using `useStorageAsync`, which utilizes the browser's local storage. This ensures that users' preferences are retained even after refreshing the page.

### Send the post-processing prompt with recorded audio

Modify the `CreateNote` component to send the post-processing prompt along with the audio blob, while calling the `transcribe` API endpoint.

```vue title="app/components/CreateNote.vue" ins={2, 6-9, 17-22}
```

The code added above checks for the saved post-processing setting. If it is enabled, and there is a defined prompt, it sends the prompt to the `transcribe` API endpoint.

### Handle post-processing in the transcribe API endpoint

Modify the transcribe API endpoint, and update it to the following:

```ts title="server/api/transcribe.post.ts" ins={9-20, 22}
export default defineEventHandler(async (event) => {
  // ...
  try {
    const response = await cloudflare.env.AI.run('@cf/openai/whisper', {
      audio: [...new Uint8Array(await blob.arrayBuffer())],
    });

    const postProcessingPrompt = form.get('prompt') as string;
    if (postProcessingPrompt && response.text) {
      const postProcessResult = await cloudflare.env.AI.run(
        '@cf/meta/llama-3.1-8b-instruct',
        {
          temperature: 0.3,
          prompt: `${postProcessingPrompt}.\n\nText:\n\n${response.text}\n\nResponse:`,
        }
      );

      return (postProcessResult as { response?: string }).response;
    } else {
      return response.text;
    }
  } catch (err) {
    // ...
  }
});
```

The above code does the following:

1. Extracts the post-processing prompt from the event FormData.
2. If present, it calls the Workers AI API to process the transcription text using the `@cf/meta/llama-3.1-8b-instruct` model.
3. Finally, it returns the response from Workers AI to the client.

## 8. Deploy the application

Now you are ready to deploy the project to a `.workers.dev` sub-domain by running the deploy command. You can preview your application at `<your-worker>.<your-subdomain>.workers.dev`.

:::note
If you used `pnpm` as your package manager, you may face build errors like `"stdin" is not exported by "node_modules/.pnpm/unenv@1.10.0/node_modules/unenv/runtime/node/process/index.mjs"`. To resolve it, you can try hoisting your node modules with the [`shamefully-hoist=true`](https://pnpm.io/npmrc) option.
:::

## Conclusion

In this tutorial, you have gone through the steps of building a voice notes application using Nuxt 3, Cloudflare Workers, D1, and R2 storage. You learned to:

- Set up the backend to store and manage notes
- Create API endpoints to fetch and display notes
- Handle audio recordings
- Implement optional post-processing for transcriptions
- Deploy the application using the Cloudflare module syntax

The complete source code of the project is available on GitHub. You can go through it to see the code for various frontend components not covered in the article.
You can find it here: [github.com/ra-jeev/vnotes](https://github.com/ra-jeev/vnotes).

---

# Whisper-large-v3-turbo with Cloudflare Workers AI

URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/build-a-workers-ai-whisper-with-chunking/

In this tutorial you will learn how to:

- **Transcribe large audio files:** Use the [Whisper-large-v3-turbo](/workers-ai/models/whisper-large-v3-turbo/) model from Cloudflare Workers AI to perform automatic speech recognition (ASR) or translation.
- **Handle large files:** Split large audio files into smaller chunks for processing, which helps overcome memory and execution time limitations.
- **Deploy using Cloudflare Workers:** Create a scalable, low‑latency transcription pipeline in a serverless environment.

## 1: Create a new Cloudflare Worker project

import { Render, PackageManagers, WranglerConfig } from "~/components";

You will create a new Worker project using the `create-cloudflare` CLI (C3). [C3](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare) is a command-line tool designed to help you set up and deploy new applications to Cloudflare.

Create a new project named `whisper-tutorial` by running:

Running `npm create cloudflare@latest` will prompt you to install the [`create-cloudflare` package](https://www.npmjs.com/package/create-cloudflare), and lead you through setup. C3 will also install [Wrangler](/workers/wrangler/), the Cloudflare Developer Platform CLI.

This will create a new `whisper-tutorial` directory. Your new `whisper-tutorial` directory will include:

- A `"Hello World"` [Worker](/workers/get-started/guide/#3-write-code) at `src/index.ts`.
- A [`wrangler.jsonc`](/workers/wrangler/configuration/) configuration file.

Go to your application directory:

```sh
cd whisper-tutorial
```

## 2. Connect your Worker to Workers AI

You must create an AI binding for your Worker to connect to Workers AI. [Bindings](/workers/runtime-apis/bindings/) allow your Workers to interact with resources, like Workers AI, on the Cloudflare Developer Platform.

To bind Workers AI to your Worker, add the following to the end of your `wrangler.toml` file:

```toml
[ai]
binding = "AI"
```

Your binding is [available in your Worker code](/workers/reference/migrate-to-module-workers/#bindings-in-es-modules-format) on [`env.AI`](/workers/runtime-apis/handlers/fetch/).

## 3. Configure Wrangler

In your Wrangler file, add or update the following settings to enable Node.js APIs and polyfills (with a compatibility date of 2024‑09‑23 or later):

```toml title="wrangler.toml"
compatibility_flags = [ "nodejs_compat" ]
compatibility_date = "2024-09-23"
```

## 4. Handle large audio files with chunking

Replace the contents of your `src/index.ts` file with the following integrated code. This sample demonstrates how to:

1. Extract an audio file URL from the query parameters.
2. Fetch the audio file while explicitly following redirects.
3. Split the audio file into smaller chunks (such as 1 MB chunks).
4. Transcribe each chunk using the Whisper-large-v3-turbo model via the Cloudflare AI binding.
5. Return the aggregated transcription as plain text.

```ts
import { Buffer } from "node:buffer";
import type { Ai } from "workers-ai";

export interface Env {
  AI: Ai;
  // If needed, add your KV namespace for storing transcripts.
  // MY_KV_NAMESPACE: KVNamespace;
}

/**
 * Fetches the audio file from the provided URL and splits it into chunks.
 * This function explicitly follows redirects.
 *
 * @param audioUrl - The URL of the audio file.
 * @returns An array of ArrayBuffers, each representing a chunk of the audio.
 */
async function getAudioChunks(audioUrl: string): Promise<ArrayBuffer[]> {
  const response = await fetch(audioUrl, { redirect: "follow" });
  if (!response.ok) {
    throw new Error(`Failed to fetch audio: ${response.status}`);
  }
  const arrayBuffer = await response.arrayBuffer();

  // Example: Split the audio into 1MB chunks.
  const chunkSize = 1024 * 1024; // 1MB
  const chunks: ArrayBuffer[] = [];
  for (let i = 0; i < arrayBuffer.byteLength; i += chunkSize) {
    const chunk = arrayBuffer.slice(i, i + chunkSize);
    chunks.push(chunk);
  }
  return chunks;
}

/**
 * Transcribes a single audio chunk using the Whisper‑large‑v3‑turbo model.
 * The function converts the audio chunk to a Base64-encoded string and
 * sends it to the model via the AI binding.
 *
 * @param chunkBuffer - The audio chunk as an ArrayBuffer.
 * @param env - The Cloudflare Worker environment, including the AI binding.
 * @returns The transcription text from the model.
 */
async function transcribeChunk(
  chunkBuffer: ArrayBuffer,
  env: Env,
): Promise<string> {
  const base64 = Buffer.from(chunkBuffer).toString("base64");
  const res = await env.AI.run("@cf/openai/whisper-large-v3-turbo", {
    audio: base64,
    // Optional parameters (uncomment and set if needed):
    // task: "transcribe", // or "translate"
    // language: "en",
    // vad_filter: "false",
    // initial_prompt: "Provide context if needed.",
    // prefix: "Transcription:",
  });
  return res.text; // Assumes the transcription result includes a "text" property.
}

/**
 * The main fetch handler. It extracts the 'url' query parameter, fetches the audio,
 * processes it in chunks, and returns the full transcription.
 */
export default {
  async fetch(
    request: Request,
    env: Env,
    ctx: ExecutionContext,
  ): Promise<Response> {
    // Extract the audio URL from the query parameters.
    const { searchParams } = new URL(request.url);
    const audioUrl = searchParams.get("url");

    if (!audioUrl) {
      return new Response("Missing 'url' query parameter", { status: 400 });
    }

    // Get the audio chunks.
    const audioChunks: ArrayBuffer[] = await getAudioChunks(audioUrl);
    let fullTranscript = "";

    // Process each chunk and build the full transcript.
    for (const chunk of audioChunks) {
      try {
        const transcript = await transcribeChunk(chunk, env);
        fullTranscript += transcript + "\n";
      } catch (error) {
        fullTranscript += "[Error transcribing chunk]\n";
      }
    }

    return new Response(fullTranscript, {
      headers: { "Content-Type": "text/plain" },
    });
  },
} satisfies ExportedHandler<Env>;
```

## 5. Deploy your Worker

1. **Run the Worker locally:**

   Use wrangler's development mode to test your Worker locally:

   ```sh
   npx wrangler dev
   ```

   Open your browser and go to [http://localhost:8787](http://localhost:8787), or use curl:

   ```sh
   curl "http://localhost:8787?url=https://raw.githubusercontent.com/your-username/your-repo/main/your-audio-file.mp3"
   ```

   Replace the URL query parameter with the direct link to your audio file. (For GitHub-hosted files, ensure you use the raw file URL.)

2. **Deploy the Worker:**

   Once testing is complete, deploy your Worker with:

   ```sh
   npx wrangler deploy
   ```

3. **Test the deployed Worker:**

   After deployment, test your Worker by passing the audio URL as a query parameter:

   ```sh
   curl "https://<your-worker>.workers.dev?url=https://raw.githubusercontent.com/your-username/your-repo/main/your-audio-file.mp3"
   ```

   Make sure to replace `<your-worker>`, `your-username`, `your-repo`, and `your-audio-file.mp3` with your actual details.

   If successful, the Worker will return a transcript of the audio file:

   ```sh
   This is the transcript of the audio...
   ```

---

# Build an interview practice tool with Workers AI

URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/build-ai-interview-practice-tool/

import { Render, PackageManagers } from "~/components";

Job interviews can be stressful, and practice is key to building confidence.
While traditional mock interviews with friends or mentors are valuable, they are not always available when you need them. In this tutorial, you will learn how to build an AI-powered interview practice tool that provides real-time feedback to help improve interview skills.

By the end of this tutorial, you will have built a complete interview practice tool with the following core functionalities:

- A real-time interview simulation tool using WebSocket connections
- An AI-powered speech processing pipeline that converts audio to text
- An intelligent response system that provides interviewer-like interactions
- A persistent storage system for managing interview sessions and history using Durable Objects

### Prerequisites

This tutorial demonstrates how to use multiple Cloudflare products, and while many features are available in free tiers, some components of Workers AI may incur usage-based charges. Please review the pricing documentation for Workers AI before proceeding.

## 1. Create a new Worker project

Create a Cloudflare Workers project using the Create Cloudflare CLI (C3) tool and the Hono framework.

:::note
[Hono](https://hono.dev) is a lightweight web framework that helps build API endpoints and handle HTTP requests. This tutorial uses Hono to create and manage the application's routing and middleware components.
:::

Create a new Worker project by running the following commands, using `ai-interview-tool` as the Worker name:

To develop and test your Cloudflare Workers application locally:

1. Navigate to your Workers project directory in your terminal:

```sh
cd ai-interview-tool
```

2. Start the development server by running:

```sh
npx wrangler dev
```

When you run `wrangler dev`, the command starts a local development server and provides a `localhost` URL where you can preview your application. You can now make changes to your code and see them reflected in real-time at the provided localhost address.

## 2. Define TypeScript types for the interview system

Now that the project is set up, create the TypeScript types that will form the foundation of the interview system. These types will help you maintain type safety and provide clear interfaces for the different components of your application.

Create a new file `types.ts` that will contain essential types and enums for:

- Interview skills that can be assessed (JavaScript, React, etc.)
- Different interview positions (Junior Developer, Senior Developer, etc.)
- Interview status tracking
- Message handling between user and AI
- Core interview data structure

```typescript title="src/types.ts"
import { Context } from "hono";

// Context type for API endpoints, including environment bindings and user info
export interface ApiContext {
  Bindings: CloudflareBindings;
  Variables: {
    username: string;
  };
}

export type HonoCtx = Context<ApiContext>;

// List of technical skills you can assess during mock interviews.
// This application focuses on popular web technologies and programming languages
// that are commonly tested in real interviews.
export enum InterviewSkill {
  JavaScript = "JavaScript",
  TypeScript = "TypeScript",
  React = "React",
  NodeJS = "NodeJS",
  Python = "Python",
}

// Available interview types based on different engineering roles.
// This helps tailor the interview experience and questions to
// match the candidate's target position.
export enum InterviewTitle {
  JuniorDeveloper = "Junior Developer Interview",
  SeniorDeveloper = "Senior Developer Interview",
  FullStackDeveloper = "Full Stack Developer Interview",
  FrontendDeveloper = "Frontend Developer Interview",
  BackendDeveloper = "Backend Developer Interview",
  SystemArchitect = "System Architect Interview",
  TechnicalLead = "Technical Lead Interview",
}

// Tracks the current state of an interview session.
// This will help you to manage the interview flow and show appropriate UI/actions
// at each stage of the process.
export enum InterviewStatus {
  Created = "created", // Interview is created but not started
  Pending = "pending", // Waiting for interviewer/system
  InProgress = "in_progress", // Active interview session
  Completed = "completed", // Interview finished successfully
  Cancelled = "cancelled", // Interview terminated early
}

// Defines who sent a message in the interview chat
export type MessageRole = "user" | "assistant" | "system";

// Structure of individual messages exchanged during the interview
export interface Message {
  messageId: string; // Unique identifier for the message
  interviewId: string; // Links message to specific interview
  role: MessageRole; // Who sent the message
  content: string; // The actual message content
  timestamp: number; // When the message was sent
}

// Main data structure that holds all information about an interview session.
// This includes metadata, messages exchanged, and the current status.
export interface InterviewData {
  interviewId: string;
  title: InterviewTitle;
  skills: InterviewSkill[];
  messages: Message[];
  status: InterviewStatus;
  createdAt: number;
  updatedAt: number;
}

// Input format for creating a new interview session.
// Simplified interface that accepts basic parameters needed to start an interview.
export interface InterviewInput {
  title: string;
  skills: string[];
}
```

## 3. Configure error types for different services

Next, set up custom error types to handle different kinds of errors that may occur in your application.
This includes:

- Database errors (for example, connection issues, query failures)
- Interview-related errors (for example, invalid input, transcription failures)
- Authentication errors (for example, invalid sessions)

Create the following `errors.ts` file:

```typescript title="src/errors.ts"
export const ErrorCodes = {
  INVALID_MESSAGE: "INVALID_MESSAGE",
  TRANSCRIPTION_FAILED: "TRANSCRIPTION_FAILED",
  LLM_FAILED: "LLM_FAILED",
  DATABASE_ERROR: "DATABASE_ERROR",
} as const;

export class AppError extends Error {
  constructor(
    message: string,
    public statusCode: number,
  ) {
    super(message);
    this.name = this.constructor.name;
  }
}

export class UnauthorizedError extends AppError {
  constructor(message: string) {
    super(message, 401);
  }
}

export class BadRequestError extends AppError {
  constructor(message: string) {
    super(message, 400);
  }
}

export class NotFoundError extends AppError {
  constructor(message: string) {
    super(message, 404);
  }
}

export class InterviewError extends Error {
  constructor(
    message: string,
    public code: string,
    public statusCode: number = 500,
  ) {
    super(message);
    this.name = "InterviewError";
  }
}
```

## 4. Configure authentication middleware and user routes

In this step, you will implement a basic authentication system to track and identify users interacting with your AI interview practice tool. The system uses HTTP-only cookies to store usernames, allowing you to identify both the request sender and their corresponding Durable Object.

This straightforward authentication approach requires users to provide a username, which is then stored securely in a cookie. This approach allows you to:

- Identify users across requests
- Associate interview sessions with specific users
- Secure access to interview-related endpoints

### Create the Authentication Middleware

Create a middleware function that will check for the presence of a valid authentication cookie. This middleware will be used to protect routes that require authentication.
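For intuition, extracting a named value from a raw `Cookie` header (which is what Hono's `getCookie` helper does for you, along with handling edge cases) looks roughly like this. The function below is illustrative only and is not part of the tutorial's code:

```typescript
// Illustrative only: a minimal Cookie-header parser. The tutorial's
// middleware uses getCookie(ctx, "username") from hono/cookie instead.
function readCookie(header: string | null, name: string): string | undefined {
  if (!header) return undefined;
  // A Cookie header is a semicolon-separated list of name=value pairs.
  for (const pair of header.split(";")) {
    const [key, ...rest] = pair.trim().split("=");
    if (key === name) return decodeURIComponent(rest.join("="));
  }
  return undefined;
}
```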
Create a new middleware file `middleware/auth.ts`:

```typescript title="src/middleware/auth.ts"
import { Context } from "hono";
import { getCookie } from "hono/cookie";
import { UnauthorizedError } from "../errors";

export const requireAuth = async (ctx: Context, next: () => Promise<void>) => {
  // Get username from cookie
  const username = getCookie(ctx, "username");

  if (!username) {
    throw new UnauthorizedError("User is not logged in");
  }

  // Make username available to route handlers
  ctx.set("username", username);
  await next();
};
```

This middleware:

- Checks for a `username` cookie
- Throws an `UnauthorizedError` if the cookie is missing
- Makes the username available to downstream handlers via the context

### Create Authentication Routes

Next, create the authentication routes that will handle user login. Create a new file `routes/auth.ts`:

```typescript title="src/routes/auth.ts"
import { Context, Hono } from "hono";
import { setCookie } from "hono/cookie";
import { BadRequestError } from "../errors";
import { ApiContext } from "../types";

export const authenticateUser = async (ctx: Context) => {
  // Extract username from request body
  const { username } = await ctx.req.json();

  // Make sure username was provided
  if (!username) {
    throw new BadRequestError("Username is required");
  }

  // Create a secure cookie to track the user's session
  // This cookie will:
  // - Be HTTP-only for security (no JS access)
  // - Work across all routes via path="/"
  // - Last for 24 hours
  // - Only be sent in same-site requests to prevent CSRF
  setCookie(ctx, "username", username, {
    httpOnly: true,
    path: "/",
    maxAge: 60 * 60 * 24,
    sameSite: "Strict",
  });

  // Let the client know login was successful
  return ctx.json({ success: true });
};

// Set up authentication-related routes
export const configureAuthRoutes = () => {
  const router = new Hono<ApiContext>();

  // POST /login - Authenticate user and create session
  router.post("/login", authenticateUser);

  return router;
};
```

Finally, update the main application file to include the authentication routes. Modify `src/index.ts`:

```typescript title="src/index.ts"
import { configureAuthRoutes } from "./routes/auth";
import { Hono } from "hono";
import { logger } from "hono/logger";
import type { ApiContext } from "./types";
import { requireAuth } from "./middleware/auth";

// Create our main Hono app instance with proper typing
const app = new Hono<ApiContext>();

// Create a separate router for API endpoints to keep things organized
const api = new Hono<ApiContext>();

// Set up global middleware that runs on every request
// - Logger gives us visibility into what is happening
app.use("*", logger());

// Wire up all our authentication routes (login, etc)
// These will be mounted under /api/v1/auth/
api.route("/auth", configureAuthRoutes());

// Mount all API routes under the version prefix (for example, /api/v1)
// This allows us to make breaking changes in v2 without affecting v1 users
app.route("/api/v1", api);

export default app;
```

Now we have a basic authentication system that:

1. Provides a login endpoint at `/api/v1/auth/login`
2. Securely stores the username in a cookie
3. Includes middleware to protect authenticated routes

## 5. Create a Durable Object to manage interviews

Now that you have your authentication system in place, create a Durable Object to manage interview sessions. Durable Objects are perfect for this interview practice tool because they provide the following functionalities:

- Maintain state between connections, so users can reconnect without losing progress.
- Provide a SQLite database to store all interview Q&A, feedback and metrics.
- Enable smooth real-time interactions between the interviewer AI and candidate.
- Handle multiple interview sessions efficiently without performance issues.
- Create a dedicated instance for each user, giving them their own isolated environment.

First, you will need to configure the Durable Object in the Wrangler file.
Add the following configuration:

```toml title="wrangler.toml"
[[durable_objects.bindings]]
name = "INTERVIEW"
class_name = "Interview"

[[migrations]]
tag = "v1"
new_sqlite_classes = ["Interview"]
```

Next, create a new file `interview.ts` to define our Interview Durable Object:

```typescript title="src/interview.ts"
import { DurableObject } from "cloudflare:workers";

export class Interview extends DurableObject<CloudflareBindings> {
  // We will use it to keep track of all active WebSocket connections for real-time communication
  private sessions: Map<WebSocket, unknown>;

  constructor(state: DurableObjectState, env: CloudflareBindings) {
    super(state, env);

    // Initialize empty sessions map - we will add WebSocket connections as users join
    this.sessions = new Map();
  }

  // Entry point for all HTTP requests to this Durable Object
  // This will handle both initial setup and WebSocket upgrades
  async fetch(request: Request) {
    // For now, just confirm the object is working
    // We'll add WebSocket upgrade logic and request routing later
    return new Response("Interview object initialized");
  }

  // Broadcasts a message to all connected WebSocket clients.
  private broadcast(message: string) {
    this.ctx.getWebSockets().forEach((ws) => {
      try {
        if (ws.readyState === WebSocket.OPEN) {
          ws.send(message);
        }
      } catch (error) {
        console.error(
          "Error broadcasting message to a WebSocket client:",
          error,
        );
      }
    });
  }
}
```

Now we need to export the Durable Object in our main `src/index.ts` file:

```typescript title="src/index.ts"
import { Interview } from "./interview";

// ... previous code ...

export { Interview };

export default app;
```

Since the Worker code is written in TypeScript, you should run the following command to add the necessary type definitions:

```sh
npm run cf-typegen
```

### Set up SQLite database schema to store interview data

Now you will use SQLite at the Durable Object level for data persistence. This gives each user their own isolated database instance.
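Because Durable Object IDs can be derived deterministically from a name, the username from the auth cookie can always be mapped to the same per-user object (and therefore the same per-user database). The routing pattern can be sketched against a minimal stand-in for the `INTERVIEW` binding, since a real `DurableObjectNamespace` only exists inside the Workers runtime:

```typescript
// Minimal stand-in types for the parts of the Durable Object API used here.
// In a Worker, env.INTERVIEW is a real DurableObjectNamespace.
interface Stub { id: string }
interface Namespace {
  idFromName(name: string): string;
  get(id: string): Stub;
}

// idFromName is deterministic, so the same username always reaches the
// same Durable Object instance.
function stubForUser(ns: Namespace, username: string): Stub {
  return ns.get(ns.idFromName(username));
}

// In-memory fake namespace, for illustration only.
const fakeNamespace: Namespace = {
  idFromName: (name) => `id:${name}`,
  get: (id) => ({ id }),
};
```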
You will need two main tables:

- `interviews`: Stores interview session data
- `messages`: Stores all messages exchanged during interviews

Before you create these tables, create a service class to handle your database operations. This encapsulates database logic and helps you:

- Manage database schema changes
- Handle errors consistently
- Keep database queries organized

Create a new file called `services/InterviewDatabaseService.ts`:

```typescript title="src/services/InterviewDatabaseService.ts"
import {
  InterviewData,
  Message,
  InterviewStatus,
  InterviewTitle,
  InterviewSkill,
} from "../types";
import { InterviewError, ErrorCodes } from "../errors";

const CONFIG = {
  database: {
    tables: {
      interviews: "interviews",
      messages: "messages",
    },
    indexes: {
      messagesByInterview: "idx_messages_interviewId",
    },
  },
} as const;

export class InterviewDatabaseService {
  constructor(private sql: SqlStorage) {}

  /**
   * Sets up the database schema by creating tables and indexes if they do not exist.
   * This is called when initializing a new Durable Object instance to ensure
   * we have the required database structure.
   *
   * The schema consists of:
   * - interviews table: Stores interview metadata like title, skills, and status
   * - messages table: Stores the conversation history between user and AI
   * - messages index: Helps optimize queries when fetching messages for a specific interview
   */
  createTables() {
    try {
      // Get list of existing tables to avoid recreating them
      const cursor = this.sql.exec(`PRAGMA table_list`);
      const existingTables = new Set([...cursor].map((table) => table.name));

      // The interviews table is our main table storing interview sessions.
      // We only create it if it does not exist yet.
      if (!existingTables.has(CONFIG.database.tables.interviews)) {
        this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_INTERVIEWS_TABLE);
      }

      // The messages table stores the actual conversation history.
      // It references interviews table via foreign key for data integrity.
      if (!existingTables.has(CONFIG.database.tables.messages)) {
        this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_MESSAGES_TABLE);
      }

      // Add an index on interviewId to speed up message retrieval.
      // This is important since we will frequently query messages by interview.
      this.sql.exec(InterviewDatabaseService.QUERIES.CREATE_MESSAGE_INDEX);
    } catch (error: unknown) {
      const message = error instanceof Error ? error.message : String(error);
      throw new InterviewError(
        `Failed to initialize database: ${message}`,
        ErrorCodes.DATABASE_ERROR,
      );
    }
  }

  private static readonly QUERIES = {
    CREATE_INTERVIEWS_TABLE: `
      CREATE TABLE IF NOT EXISTS interviews (
        interviewId TEXT PRIMARY KEY,
        title TEXT NOT NULL,
        skills TEXT NOT NULL,
        createdAt INTEGER NOT NULL DEFAULT (strftime('%s','now') * 1000),
        updatedAt INTEGER NOT NULL DEFAULT (strftime('%s','now') * 1000),
        status TEXT NOT NULL DEFAULT 'pending'
      )
    `,
    CREATE_MESSAGES_TABLE: `
      CREATE TABLE IF NOT EXISTS messages (
        messageId TEXT PRIMARY KEY,
        interviewId TEXT NOT NULL,
        role TEXT NOT NULL,
        content TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        FOREIGN KEY (interviewId) REFERENCES interviews(interviewId)
      )
    `,
    CREATE_MESSAGE_INDEX: `
      CREATE INDEX IF NOT EXISTS idx_messages_interviewId
      ON messages(interviewId)
    `,
  };
}
```

Update the `Interview` Durable Object to use the database service by modifying `src/interview.ts`:

```typescript title="src/interview.ts"
import { InterviewDatabaseService } from "./services/InterviewDatabaseService";

export class Interview extends DurableObject<CloudflareBindings> {
  // Database service for persistent storage of interview data and messages
  private readonly db: InterviewDatabaseService;
  private sessions: Map<WebSocket, unknown>;

  constructor(state: DurableObjectState, env: CloudflareBindings) {
    // ... previous code ...
// Set up our database connection using the DO's built-in SQLite instance this.db = new InterviewDatabaseService(state.storage.sql); // First-time setup: ensure our database tables exist // This is idempotent so safe to call on every instantiation this.db.createTables(); } } ``` Add methods to create and retrieve interviews in `services/InterviewDatabaseService.ts`: ```typescript title="src/services/InterviewDatabaseService.ts" export class InterviewDatabaseService { /** * Creates a new interview session in the database. * * This is the main entry point for starting a new interview. It handles all the * initial setup like: * - Generating a unique ID using crypto.randomUUID() for reliable uniqueness * - Recording the interview title and required skills * - Setting up timestamps for tracking interview lifecycle * - Setting the initial status to "Created" * */ createInterview(title: InterviewTitle, skills: InterviewSkill[]): string { try { const interviewId = crypto.randomUUID(); const currentTime = Date.now(); this.sql.exec( InterviewDatabaseService.QUERIES.INSERT_INTERVIEW, interviewId, title, JSON.stringify(skills), // Store skills as JSON for flexibility InterviewStatus.Created, currentTime, currentTime, ); return interviewId; } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to create interview: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } /** * Fetches all interviews from the database, ordered by creation date. * * This is useful for displaying interview history and letting users * resume previous sessions. We order by descending creation date since * users typically want to see their most recent interviews first. * * Returns an array of InterviewData objects with full interview details * including metadata and message history. 
*/ getAllInterviews(): InterviewData[] { try { const cursor = this.sql.exec( InterviewDatabaseService.QUERIES.GET_ALL_INTERVIEWS, ); return [...cursor].map(this.parseInterviewRecord); } catch (error) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to retrieve interviews: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } // Retrieves an interview and its messages by ID getInterview(interviewId: string): InterviewData | null { try { const cursor = this.sql.exec( InterviewDatabaseService.QUERIES.GET_INTERVIEW, interviewId, ); const record = [...cursor][0]; if (!record) return null; return this.parseInterviewRecord(record); } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to retrieve interview: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } addMessage( interviewId: string, role: Message["role"], content: string, messageId: string, ): Message { try { const timestamp = Date.now(); this.sql.exec( InterviewDatabaseService.QUERIES.INSERT_MESSAGE, messageId, interviewId, role, content, timestamp, ); return { messageId, interviewId, role, content, timestamp, }; } catch (error: unknown) { const message = error instanceof Error ? error.message : String(error); throw new InterviewError( `Failed to add message: ${message}`, ErrorCodes.DATABASE_ERROR, ); } } /** * Transforms raw database records into structured InterviewData objects. * * This helper does the heavy lifting of: * - Type checking critical fields to catch database corruption early * - Converting stored JSON strings back into proper objects * - Filtering out any null messages that might have snuck in * - Ensuring timestamps are proper numbers * * If any required data is missing or malformed, it throws an error * rather than returning partially valid data that could cause issues * downstream. 
*/ private parseInterviewRecord(record: any): InterviewData { const interviewId = record.interviewId as string; const createdAt = Number(record.createdAt); const updatedAt = Number(record.updatedAt); if (!interviewId || !createdAt || !updatedAt) { throw new InterviewError( "Invalid interview data in database", ErrorCodes.DATABASE_ERROR, ); } return { interviewId, title: record.title as InterviewTitle, skills: JSON.parse(record.skills as string) as InterviewSkill[], messages: record.messages ? JSON.parse(record.messages) .filter((m: any) => m !== null) .map((m: any) => ({ messageId: m.messageId, role: m.role, content: m.content, timestamp: m.timestamp, })) : [], status: record.status as InterviewStatus, createdAt, updatedAt, }; } // Add these SQL queries to the QUERIES object private static readonly QUERIES = { // ... previous queries ... INSERT_INTERVIEW: ` INSERT INTO ${CONFIG.database.tables.interviews} (interviewId, title, skills, status, createdAt, updatedAt) VALUES (?, ?, ?, ?, ?, ?) `, GET_ALL_INTERVIEWS: ` SELECT interviewId, title, skills, createdAt, updatedAt, status FROM ${CONFIG.database.tables.interviews} ORDER BY createdAt DESC `, INSERT_MESSAGE: ` INSERT INTO ${CONFIG.database.tables.messages} (messageId, interviewId, role, content, timestamp) VALUES (?, ?, ?, ?, ?) `, GET_INTERVIEW: ` SELECT i.interviewId, i.title, i.skills, i.status, i.createdAt, i.updatedAt, COALESCE( json_group_array( CASE WHEN m.messageId IS NOT NULL THEN json_object( 'messageId', m.messageId, 'role', m.role, 'content', m.content, 'timestamp', m.timestamp ) END ), '[]' ) as messages FROM ${CONFIG.database.tables.interviews} i LEFT JOIN ${CONFIG.database.tables.messages} m ON i.interviewId = m.interviewId WHERE i.interviewId = ? GROUP BY i.interviewId `, }; } ``` Add RPC methods to the `Interview` Durable Object to expose database operations through the API.
Add this code to `src/interview.ts`: ```typescript title="src/interview.ts" import { InterviewData, InterviewTitle, InterviewSkill, Message, } from "./types"; export class Interview extends DurableObject { // Creates a new interview session createInterview(title: InterviewTitle, skills: InterviewSkill[]): string { return this.db.createInterview(title, skills); } // Retrieves all interview sessions getAllInterviews(): InterviewData[] { return this.db.getAllInterviews(); } // Adds a new message to the 'messages' table and broadcasts it to all connected WebSocket clients. addMessage( interviewId: string, role: "user" | "assistant", content: string, messageId: string, ): Message { const newMessage = this.db.addMessage( interviewId, role, content, messageId, ); this.broadcast( JSON.stringify({ ...newMessage, type: "message", }), ); return newMessage; } } ``` ## 6. Create REST API endpoints With your Durable Object and database service ready, create REST API endpoints to manage interviews. You will need endpoints to: - Create new interviews - Retrieve all interviews for a user Create a new file for your interview routes at `routes/interview.ts`: ```typescript title="src/routes/interview.ts" import { Hono } from "hono"; import { BadRequestError } from "../errors"; import { InterviewInput, ApiContext, HonoCtx, InterviewTitle, InterviewSkill, } from "../types"; import { requireAuth } from "../middleware/auth"; /** * Gets the Interview Durable Object instance for a given user. * We use the username as a stable identifier to ensure each user * gets their own dedicated DO instance that persists across requests. */ const getInterviewDO = (ctx: HonoCtx) => { const username = ctx.get("username"); const id = ctx.env.INTERVIEW.idFromName(username); return ctx.env.INTERVIEW.get(id); }; /** * Validates the interview creation payload. 
* Makes sure we have all required fields in the correct format: * - title must be present * - skills must be a non-empty array * Throws an error if validation fails. */ const validateInterviewInput = (input: InterviewInput) => { if ( !input.title || !input.skills || !Array.isArray(input.skills) || input.skills.length === 0 ) { throw new BadRequestError("Invalid input"); } }; /** * GET /interviews * Retrieves all interviews for the authenticated user. * The interviews are stored and managed by the user's DO instance. */ const getAllInterviews = async (ctx: HonoCtx) => { const interviewDO = getInterviewDO(ctx); const interviews = await interviewDO.getAllInterviews(); return ctx.json(interviews); }; /** * POST /interviews * Creates a new interview session with the specified title and skills. * Each interview gets a unique ID that can be used to reference it later. * Returns the newly created interview ID on success. */ const createInterview = async (ctx: HonoCtx) => { const body = await ctx.req.json(); validateInterviewInput(body); const interviewDO = getInterviewDO(ctx); const interviewId = await interviewDO.createInterview( body.title as InterviewTitle, body.skills as InterviewSkill[], ); return ctx.json({ success: true, interviewId }); }; /** * Sets up all interview-related routes. * Currently supports: * - GET / : List all interviews * - POST / : Create a new interview */ export const configureInterviewRoutes = () => { const router = new Hono(); router.use("*", requireAuth); router.get("/", getAllInterviews); router.post("/", createInterview); return router; }; ``` The `getInterviewDO` helper function uses the username from our authentication cookie to create a unique Durable Object ID. This ensures each user has their own isolated interview state. Update your main application file to include the routes and protect them with authentication middleware. 
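The route definitions above import a `requireAuth` middleware from `../middleware/auth`, which was created earlier in this tutorial and is not shown in this section. As a hedged sketch only, here is roughly what such a middleware can look like; the `getCookie` helper and the loosely typed context are assumptions for illustration, not the tutorial's actual code (only the `username` cookie name is taken from this tutorial):

```typescript
// Pure helper: extract a named cookie value from a Cookie header string.
function getCookie(header: string | null, name: string): string | null {
  if (!header) return null;
  for (const part of header.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return rest.join("=");
  }
  return null;
}

// Hono-style middleware sketch: reject requests without a username cookie,
// otherwise stash the username on the context so getInterviewDO can read it.
const requireAuth = async (ctx: any, next: () => Promise<void>) => {
  const username = getCookie(ctx.req.header("Cookie") ?? null, "username");
  if (!username) {
    return ctx.json({ error: "Authentication required" }, 401);
  }
  ctx.set("username", username);
  await next();
};
```

Any cookie- or token-based scheme works here; the only contract the routes rely on is that `ctx.get("username")` returns a stable per-user identifier.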
Update `src/index.ts`: ```typescript title="src/index.ts" import { configureAuthRoutes } from "./routes/auth"; import { configureInterviewRoutes } from "./routes/interview"; import { Hono } from "hono"; import { Interview } from "./interview"; import { logger } from "hono/logger"; import type { ApiContext } from "./types"; const app = new Hono<ApiContext>(); const api = new Hono<ApiContext>(); app.use("*", logger()); api.route("/auth", configureAuthRoutes()); api.route("/interviews", configureInterviewRoutes()); app.route("/api/v1", api); export { Interview }; export default app; ``` Now you have two new API endpoints: - `POST /api/v1/interviews`: Creates a new interview session - `GET /api/v1/interviews`: Retrieves all interviews for the authenticated user You can test these endpoints by running the following commands: 1. Create a new interview: ```sh curl -X POST http://localhost:8787/api/v1/interviews \ -H "Content-Type: application/json" \ -H "Cookie: username=testuser; HttpOnly" \ -d '{"title":"Frontend Developer Interview","skills":["JavaScript","React","CSS"]}' ``` 2. Get all interviews: ```sh curl http://localhost:8787/api/v1/interviews \ -H "Cookie: username=testuser; HttpOnly" ``` ## 7. Set up WebSockets to handle real-time communication With the basic interview management system in place, you will now use Durable Objects to handle real-time message processing and maintain WebSocket connections. Update the `Interview` Durable Object to handle WebSocket connections by adding the following code to `src/interview.ts`: ```typescript export class Interview extends DurableObject { // Services for database operations and managing WebSocket sessions private readonly db: InterviewDatabaseService; private sessions: Map<WebSocket, { interviewId: string }>; constructor(state: DurableObjectState, env: CloudflareBindings) { // ... previous code ...
// Keep WebSocket connections alive by automatically responding to pings // This prevents timeouts and connection drops this.ctx.setWebSocketAutoResponse( new WebSocketRequestResponsePair("ping", "pong"), ); } async fetch(request: Request): Promise<Response> { // Check if this is a WebSocket upgrade request const upgradeHeader = request.headers.get("Upgrade"); if (upgradeHeader?.toLowerCase().includes("websocket")) { return this.handleWebSocketUpgrade(request); } // If it is not a WebSocket request, we don't handle it return new Response("Not found", { status: 404 }); } private async handleWebSocketUpgrade(request: Request): Promise<Response> { // Extract the interview ID from the URL - it should be the last segment const url = new URL(request.url); const interviewId = url.pathname.split("/").pop(); if (!interviewId) { return new Response("Missing interviewId parameter", { status: 400 }); } // Create a new WebSocket connection pair - one for the client, one for the server const pair = new WebSocketPair(); const [client, server] = Object.values(pair); // Keep track of which interview this WebSocket is connected to // This is important for routing messages to the right interview session this.sessions.set(server, { interviewId }); // Tell the Durable Object to start handling this WebSocket this.ctx.acceptWebSocket(server); // Send the current interview state to the client right away // This helps initialize their UI with the latest data const interviewData = await this.db.getInterview(interviewId); if (interviewData) { server.send( JSON.stringify({ type: "interview_details", data: interviewData, }), ); } // Return the client WebSocket as part of the upgrade response return new Response(null, { status: 101, webSocket: client, }); } async webSocketClose( ws: WebSocket, code: number, reason: string, wasClean: boolean, ) { // Clean up when a connection closes to prevent memory leaks // This is especially important in long-running Durable Objects console.log( `WebSocket closed: Code ${code},
Reason: ${reason}, Clean: ${wasClean}`, ); } } ``` Next, update the interview routes to include a WebSocket endpoint. Add the following to `routes/interview.ts`: ```typescript title="src/routes/interview.ts" // ... previous code ... const streamInterviewProcess = async (ctx: HonoCtx) => { const interviewDO = getInterviewDO(ctx); return await interviewDO.fetch(ctx.req.raw); }; export const configureInterviewRoutes = () => { const router = new Hono(); router.use("*", requireAuth); router.get("/", getAllInterviews); router.post("/", createInterview); // Add WebSocket route router.get("/:interviewId", streamInterviewProcess); return router; }; ``` The WebSocket system provides real-time communication features for the interview practice tool: - Each interview session gets its own dedicated WebSocket connection, allowing seamless communication between the candidate and AI interviewer - The Durable Object maintains the connection state, ensuring no messages are lost even if the client temporarily disconnects - To keep connections stable, it automatically responds to ping messages with pongs, preventing timeouts - Candidates and interviewers receive instant updates as the interview progresses, creating a natural conversational flow ## 8. Add audio processing capabilities with Workers AI Now that the WebSocket connection is set up, the next step is to add speech-to-text capabilities using Workers AI. Let's use Cloudflare's Whisper model to transcribe audio in real time during the interview. The audio processing pipeline will work like this: 1. Client sends audio through the WebSocket connection 2. Our Durable Object receives the binary audio data 3. We pass the audio to Whisper for transcription 4. The transcribed text is saved as a new message 5. We immediately send the transcription back to the client 6. The client receives a notification that the AI interviewer is generating a response ### Create audio processing pipeline In this step you will update the Interview Durable Object to handle the following: 1.
Detect binary audio data sent through WebSocket 2. Create a unique message ID for tracking the processing status 3. Notify clients that audio processing has begun 4. Include error handling for failed audio processing 5. Broadcast status updates to all connected clients First, update the Interview Durable Object to handle binary WebSocket messages. Add the following methods to your `src/interview.ts` file: ```typescript title="src/interview.ts" // ... previous code ... /** * Handles incoming WebSocket messages, both binary audio data and text messages. * This is the main entry point for all WebSocket communication. */ async webSocketMessage(ws: WebSocket, eventData: ArrayBuffer | string): Promise<void> { try { // Handle binary audio data from the client's microphone if (eventData instanceof ArrayBuffer) { await this.handleBinaryAudio(ws, eventData); return; } // Text messages will be handled by other methods } catch (error) { this.handleWebSocketError(ws, error); } } /** * Processes binary audio data received from the client. * Converts audio to text using Whisper and broadcasts processing status.
*/ private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> { try { const uint8Array = new Uint8Array(audioData); // Retrieve the associated interview session const session = this.sessions.get(ws); if (!session?.interviewId) { throw new Error("No interview session found"); } // Generate unique ID to track this message through the system const messageId = crypto.randomUUID(); // Let the client know we're processing their audio this.broadcast( JSON.stringify({ type: "message", status: "processing", role: "user", messageId, interviewId: session.interviewId, }), ); // TODO: Implement Whisper transcription in next section // For now, just log the received audio data size console.log(`Received audio data of length: ${uint8Array.length}`); } catch (error) { console.error("Audio processing failed:", error); this.handleWebSocketError(ws, error); } } /** * Handles WebSocket errors by logging them and notifying the client. * Ensures errors are properly communicated back to the user. */ private handleWebSocketError(ws: WebSocket, error: unknown): void { const errorMessage = error instanceof Error ? error.message : "An unknown error occurred."; console.error("WebSocket error:", errorMessage); if (ws.readyState === WebSocket.OPEN) { ws.send( JSON.stringify({ type: "error", message: errorMessage, }), ); } } ``` Your `handleBinaryAudio` method currently logs when it receives audio data. Next, you'll enhance it to transcribe speech using Workers AI's Whisper model. ### Configure speech-to-text Now that the audio processing pipeline is set up, integrate Workers AI's Whisper model for speech-to-text transcription. Configure the Workers AI binding in your Wrangler file by adding: ```toml # ... previous configuration ... [ai] binding = "AI" ``` Next, generate TypeScript types for our AI binding. Run the following command: ```sh npm run cf-typegen ``` You will need a new service class for AI operations.
Create a new file called `services/AIService.ts`: ```typescript title="src/services/AIService.ts" import { InterviewError, ErrorCodes } from "../errors"; export class AIService { constructor(private readonly AI: Ai) {} async transcribeAudio(audioData: Uint8Array): Promise<string> { try { // Call the Whisper model to transcribe the audio const response = await this.AI.run("@cf/openai/whisper-tiny-en", { audio: Array.from(audioData), }); if (!response?.text) { throw new Error("Failed to transcribe audio content."); } return response.text; } catch (error) { throw new InterviewError( "Failed to transcribe audio content", ErrorCodes.TRANSCRIPTION_FAILED, ); } } } ``` You will need to update the `Interview` Durable Object to use this new AI service. To do this, update the `handleBinaryAudio` method in `src/interview.ts`: ```typescript title="src/interview.ts" import { AIService } from "./services/AIService"; export class Interview extends DurableObject { private readonly aiService: AIService; constructor(state: DurableObjectState, env: CloudflareBindings) { // ... previous code ...
// Initialize the AI service with the Workers AI binding this.aiService = new AIService(this.env.AI); } private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> { try { const uint8Array = new Uint8Array(audioData); const session = this.sessions.get(ws); if (!session?.interviewId) { throw new Error("No interview session found"); } // Create a message ID for tracking const messageId = crypto.randomUUID(); // Send processing state to client this.broadcast( JSON.stringify({ type: "message", status: "processing", role: "user", messageId, interviewId: session.interviewId, }), ); // NEW: Use AI service to transcribe the audio const transcribedText = await this.aiService.transcribeAudio(uint8Array); // Store the transcribed message await this.addMessage(session.interviewId, "user", transcribedText, messageId); } catch (error) { console.error("Audio processing failed:", error); this.handleWebSocketError(ws, error); } } ``` :::note The Whisper model `@cf/openai/whisper-tiny-en` is optimized for English speech recognition. If you need support for other languages, you can use different Whisper model variants available through Workers AI. ::: When users speak during the interview, their audio will be automatically transcribed and stored as messages in the interview session. The transcribed text will be immediately available to both the user and the AI interviewer for generating appropriate responses. ## 9. Integrate AI response generation Now that you have audio transcription working, let's implement AI interviewer response generation using Workers AI's LLM capabilities. You'll create an interview system that: - Maintains context of the conversation - Provides relevant follow-up questions - Gives constructive feedback - Stays in character as a professional interviewer ### Set up Workers AI LLM integration First, update the `AIService` class to handle LLM interactions.
You will need to add methods for: - Processing interview context - Generating appropriate responses - Handling conversation flow Update the `services/AIService.ts` class to include LLM functionality: ```typescript title="src/services/AIService.ts" import { InterviewData, Message } from "../types"; export class AIService { async processLLMResponse(interview: InterviewData): Promise<string> { const messages = this.prepareLLMMessages(interview); try { const { response } = await this.AI.run("@cf/meta/llama-2-7b-chat-int8", { messages, }); if (!response) { throw new Error("Failed to generate a response from the LLM model."); } return response; } catch (error) { throw new InterviewError("Failed to generate a response from the LLM model.", ErrorCodes.LLM_FAILED); } } private prepareLLMMessages(interview: InterviewData) { const messageHistory = interview.messages.map((msg: Message) => ({ role: msg.role, content: msg.content, })); return [ { role: "system", content: this.createSystemPrompt(interview), }, ...messageHistory, ]; } } ``` :::note The `@cf/meta/llama-2-7b-chat-int8` model is optimized for chat-like interactions and provides good performance while maintaining reasonable resource usage. ::: ### Create the conversation prompt Prompt engineering is crucial for getting high-quality responses from the LLM.
Next, you will create a system prompt that: - Sets the context for the interview - Defines the interviewer's role and behavior - Specifies the technical focus areas - Guides the conversation flow Add the following method to your `services/AIService.ts` class: ```typescript title="src/services/AIService.ts" private createSystemPrompt(interview: InterviewData): string { const basePrompt = "You are conducting a technical interview."; const rolePrompt = `The position is for ${interview.title}.`; const skillsPrompt = `Focus on topics related to: ${interview.skills.join(", ")}.`; const instructionsPrompt = "Ask relevant technical questions and provide constructive feedback."; return `${basePrompt} ${rolePrompt} ${skillsPrompt} ${instructionsPrompt}`; } ``` ### Implement response generation logic Finally, integrate the LLM response generation into the interview flow. Update the `handleBinaryAudio` method in the `src/interview.ts` Durable Object to: - Process transcribed user responses - Generate appropriate AI interviewer responses - Maintain conversation context Update the `handleBinaryAudio` method in `src/interview.ts`: ```typescript title="src/interview.ts" private async handleBinaryAudio(ws: WebSocket, audioData: ArrayBuffer): Promise<void> { try { // Convert raw audio buffer to uint8 array for processing const uint8Array = new Uint8Array(audioData); const session = this.sessions.get(ws); if (!session?.interviewId) { throw new Error("No interview session found"); } // Generate a unique ID to track this message through the system const messageId = crypto.randomUUID(); // Let the client know we're processing their audio // This helps provide immediate feedback while transcription runs this.broadcast( JSON.stringify({ type: "message", status: "processing", role: "user", messageId, interviewId: session.interviewId, }), ); // Convert the audio to text using our AI transcription service // This typically takes 1-2 seconds for normal speech const transcribedText = await
this.aiService.transcribeAudio(uint8Array); // Save the user's message to our database so we maintain chat history await this.addMessage(session.interviewId, "user", transcribedText, messageId); // Look up the full interview context - we need this to generate a good response const interview = await this.db.getInterview(session.interviewId); if (!interview) { throw new Error(`Interview not found: ${session.interviewId}`); } // Now it's the AI's turn to respond // First generate an ID for the assistant's message const assistantMessageId = crypto.randomUUID(); // Let the client know we're working on the AI response this.broadcast( JSON.stringify({ type: "message", status: "processing", role: "assistant", messageId: assistantMessageId, interviewId: session.interviewId, }), ); // Generate the AI interviewer's response based on the conversation history const llmResponse = await this.aiService.processLLMResponse(interview); await this.addMessage(session.interviewId, "assistant", llmResponse, assistantMessageId); } catch (error) { // Something went wrong processing the audio or generating a response // Log it and let the client know there was an error console.error("Audio processing failed:", error); this.handleWebSocketError(ws, error); } } ``` ## Conclusion You have successfully built an AI-powered interview practice tool using Cloudflare's Workers AI. 
In summary, you have: - Created a real-time WebSocket communication system using Durable Objects - Implemented speech-to-text processing with the Workers AI Whisper model - Built an intelligent interview system using Workers AI LLM capabilities - Designed a persistent storage system with SQLite in Durable Objects The complete source code for this tutorial is available on GitHub: [ai-interview-practice-tool](https://github.com/berezovyy/ai-interview-practice-tool) --- # Explore Code Generation Using DeepSeek Coder Models URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/explore-code-generation-using-deepseek-coder-models/ import { Stream } from "~/components" A handy way to explore all of the models available on [Workers AI](/workers-ai) is to use a [Jupyter Notebook](https://jupyter.org/). You can [download the DeepSeek Coder notebook](/workers-ai/static/documentation/notebooks/deepseek-coder-exploration.ipynb) or view the embedded notebook below. [comment]: <> "The markdown below is auto-generated from https://github.com/craigsdennis/notebooks-cloudflare-workers-ai" *** ## Exploring Code Generation Using DeepSeek Coder AI models being able to generate code unlocks all sorts of use cases. The [DeepSeek Coder](https://github.com/deepseek-ai/DeepSeek-Coder) models `@hf/thebloke/deepseek-coder-6.7b-base-awq` and `@hf/thebloke/deepseek-coder-6.7b-instruct-awq` are now available on [Workers AI](/workers-ai). Let's explore them using the API!
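Every cell in the notebook below drives the models through the same Workers AI REST endpoint. As a minimal sketch of that request shape (the helper names `build_url` and `run_model` are ours, not part of the notebook; a real call needs a valid account ID and API token):

```python
import requests

# Base of the Workers AI inference API used throughout this notebook.
API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def build_url(account_id: str, model: str) -> str:
    """Build the Workers AI inference endpoint for an account/model pair."""
    return f"{API_BASE}/{account_id}/ai/run/{model}"

def run_model(account_id: str, api_token: str, model: str, messages: list) -> str:
    """POST a chat-style payload to Workers AI and return the response text."""
    response = requests.post(
        build_url(account_id, model),
        headers={"Authorization": f"Bearer {api_token}"},
        json={"messages": messages},
    )
    response.raise_for_status()
    return response.json()["result"]["response"]
```

Each notebook cell below is a variation on this pattern: same endpoint, different model name and `messages` payload.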
```python import sys !{sys.executable} -m pip install requests python-dotenv ``` ``` Requirement already satisfied: requests in ./venv/lib/python3.12/site-packages (2.31.0) Requirement already satisfied: python-dotenv in ./venv/lib/python3.12/site-packages (1.0.1) Requirement already satisfied: charset-normalizer<4,>=2 in ./venv/lib/python3.12/site-packages (from requests) (3.3.2) Requirement already satisfied: idna<4,>=2.5 in ./venv/lib/python3.12/site-packages (from requests) (3.6) Requirement already satisfied: urllib3<3,>=1.21.1 in ./venv/lib/python3.12/site-packages (from requests) (2.1.0) Requirement already satisfied: certifi>=2017.4.17 in ./venv/lib/python3.12/site-packages (from requests) (2023.11.17) ``` ```python import os from getpass import getpass from IPython.display import display, Image, Markdown, Audio import requests ``` ```python %load_ext dotenv %dotenv ``` ### Configuring your environment To use the API you'll need your [Cloudflare Account ID](https://dash.cloudflare.com) (head to Workers & Pages > Overview > Account details > Account ID) and a [Workers AI enabled API Token](https://dash.cloudflare.com/profile/api-tokens). If you want to add these values to your environment, you can create a new file named `.env` ```bash CLOUDFLARE_API_TOKEN="YOUR-TOKEN" CLOUDFLARE_ACCOUNT_ID="YOUR-ACCOUNT-ID" ``` ```python if "CLOUDFLARE_API_TOKEN" in os.environ: api_token = os.environ["CLOUDFLARE_API_TOKEN"] else: api_token = getpass("Enter your Cloudflare API Token") ``` ```python if "CLOUDFLARE_ACCOUNT_ID" in os.environ: account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"] else: account_id = getpass("Enter your account id") ``` ### Generate code from a comment A common use case is to complete the code for the user after they provide a descriptive comment.
````python model = "@hf/thebloke/deepseek-coder-6.7b-base-awq" prompt = "# A function that checks if a given word is a palindrome" response = requests.post( f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}", headers={"Authorization": f"Bearer {api_token}"}, json={"messages": [ {"role": "user", "content": prompt} ]} ) inference = response.json() code = inference["result"]["response"] display(Markdown(f""" ```python {prompt} {code.strip()} ``` """)) ```` ```python # A function that checks if a given word is a palindrome def is_palindrome(word): # Convert the word to lowercase word = word.lower() # Reverse the word reversed_word = word[::-1] # Check if the reversed word is the same as the original word if word == reversed_word: return True else: return False # Test the function print(is_palindrome("racecar")) # Output: True print(is_palindrome("hello")) # Output: False ``` ### Assist in debugging We've all been there, bugs happen. Sometimes those stacktraces can be very intimidating, and a great use case for code generation is to assist in explaining the problem. ```python model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq" system_message = "The user is going to give you code that isn't working. Explain to the user what might be wrong" code = """# Welcomes our user def hello_world(first_name="World"): print(f"Hello, {name}!") """ response = requests.post( f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}", headers={"Authorization": f"Bearer {api_token}"}, json={"messages": [ {"role": "system", "content": system_message}, {"role": "user", "content": code}, ]} ) inference = response.json() response = inference["result"]["response"] display(Markdown(response)) ``` The error in your code is that you are trying to use a variable `name` which is not defined anywhere in your function. The correct variable to use is `first_name`.
So, you should change `f"Hello, {name}!"` to `f"Hello, {first_name}!"`. Here is the corrected code: ```python # Welcomes our user def hello_world(first_name="World"): print(f"Hello, {first_name}") ``` Now, when you call `hello_world()`, it will print "Hello, World" by default. If you call `hello_world("John")`, it will print "Hello, John". ### Write tests! Writing unit tests is a common best practice. With enough context, it's possible to write unit tests. ```python model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq" system_message = "The user is going to give you code and would like to have tests written in the Python unittest module." code = """ class User: def __init__(self, first_name, last_name=None): self.first_name = first_name self.last_name = last_name if last_name is None: self.last_name = "Mc" + self.first_name def full_name(self): return self.first_name + " " + self.last_name """ response = requests.post( f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}", headers={"Authorization": f"Bearer {api_token}"}, json={"messages": [ {"role": "system", "content": system_message}, {"role": "user", "content": code}, ]} ) inference = response.json() response = inference["result"]["response"] display(Markdown(response)) ``` Here is a simple unittest test case for the User class: ```python import unittest class TestUser(unittest.TestCase): def test_full_name(self): user = User("John", "Doe") self.assertEqual(user.full_name(), "John Doe") def test_default_last_name(self): user = User("Jane") self.assertEqual(user.full_name(), "Jane McJane") if __name__ == '__main__': unittest.main() ``` In this test case, we have two tests: * `test_full_name` tests the `full_name` method when the user has both a first name and a last name. * `test_default_last_name` tests the `full_name` method when the user only has a first name and the last name is set to "Mc" + first name.
If all these tests pass, it means that the `full_name` method is working as expected. If any of these tests fail, it means there is a problem with the `full_name` method.

### Fill-in-the-middle Code Completion

A common use case in developer tools is autocompletion based on the surrounding context. DeepSeek Coder provides the ability to submit existing code with a placeholder, so that the model can complete it in context.

Warning: The tokens are prefixed with `<|` and suffixed with `|>`. Make sure to copy and paste them exactly.

````python
model = "@hf/thebloke/deepseek-coder-6.7b-base-awq"

code = """
<|fim▁begin|>import re

from jklol import email_service

def send_email(email_address, body):
    <|fim▁hole|>
    if not is_valid_email:
        raise InvalidEmailAddress(email_address)
    return email_service.send(email_address, body)<|fim▁end|>
"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "user", "content": code}
    ]}
)
inference = response.json()
response = inference["result"]["response"]
display(Markdown(f"""
```python
{response.strip()}
```
"""))
````

```python
is_valid_email = re.match(r"[^@]+@[^@]+\.[^@]+", email_address)
```

### Experimental: Extract data into JSON

No need to threaten the model or bring grandma into the prompt. Get back JSON in the format you want.
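Because the API hands the model's reply back as a plain string, it is worth parsing and sanity-checking the JSON before using it. A minimal sketch using only the standard library (`reply` here is a hypothetical stand-in for the `inference["result"]["response"]` value in the example below):

```python
import json

# Hypothetical reply string; in practice this would come from
# inference["result"]["response"]
reply = '{"firstName": "Craig", "numKids": 2}'

# json.loads raises json.JSONDecodeError if the model returned non-JSON text
data = json.loads(reply)

# Enforce the schema's "required" list by hand (only firstName is required)
missing = [field for field in ["firstName"] if field not in data]
if missing:
    raise ValueError(f"Model reply is missing required fields: {missing}")

print(data["firstName"])  # -> Craig
```

For stricter checks against the full schema, a validation library such as `jsonschema` could be used instead of the hand-rolled required-field check.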
````python
model = "@hf/thebloke/deepseek-coder-6.7b-instruct-awq"

# Learn more at https://json-schema.org/
json_schema = """
{
  "title": "User",
  "description": "A user from our example app",
  "type": "object",
  "properties": {
    "firstName": {
      "description": "The user's first name",
      "type": "string"
    },
    "lastName": {
      "description": "The user's last name",
      "type": "string"
    },
    "numKids": {
      "description": "Amount of children the user has currently",
      "type": "integer"
    },
    "interests": {
      "description": "A list of what the user has shown interest in",
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  },
  "required": ["firstName"]
}
"""

system_prompt = f"""
The user is going to discuss themselves and you should create a JSON object from their description to match the json schema below.

{json_schema}

Return JSON only. Do not explain or provide usage examples.
"""

prompt = """Hey there, I'm Craig Dennis and I'm a Developer Educator at Cloudflare. My email is craig@cloudflare.com. I am very interested in AI. I've got two kids. I love tacos, burritos, and all things Cloudflare"""

response = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt}
    ]}
)
inference = response.json()
response = inference["result"]["response"]
display(Markdown(f"""
```json
{response.strip()}
```
"""))
````

```json
{
  "firstName": "Craig",
  "lastName": "Dennis",
  "numKids": 2,
  "interests": ["AI", "Cloudflare", "Tacos", "Burritos"]
}
```

---

# Explore Workers AI Models Using a Jupyter Notebook

URL: https://developers.cloudflare.com/workers-ai/guides/tutorials/explore-workers-ai-models-using-a-jupyter-notebook/

import { Stream } from "~/components"

A handy way to explore all of the models available on [Workers AI](/workers-ai) is to use a [Jupyter Notebook](https://jupyter.org/).

You can [download the Workers AI notebook](/workers-ai-notebooks/cloudflare-workers-ai.ipynb) or view the embedded notebook below. You can also run it on [Google Colab](https://colab.research.google.com/github/craigsdennis/notebooks-cloudflare-workers-ai/blob/main/cloudflare-workers-ai.ipynb).

[comment]: <> "The markdown below is auto-generated from https://github.com/craigsdennis/notebooks-cloudflare-workers-ai the