
When throttling and debouncing meet asynchronous, what kind of sparks will they create?

Published at 06:14 PM

All the code is included in the async-utilities repository and has been published to npm.


In an HTML form, there is a scenario like this:

```html
<form id="form">
  <!-- <label for="name">Name:</label>
  <input type="text" name="name" id="name"> -->
  <button type="submit">submit</button>
</form>
```
Clicking “Submit” will send a request to the server:

```js
// make a network request
function api(data) {
  console.log("submitting", data);
  return fetch("", {
    body: JSON.stringify(data),
    method: "POST",
    mode: "cors",
  });
}

const form = document.getElementById("form");
const handler = async function (e) {
  e.preventDefault();
  const rez = await api({
    msg: "some data to be sent",
  });
};

form.addEventListener("submit", handler);
```

To prevent users from submitting repeatedly, we usually maintain a loading state… but after writing it many times, it inevitably starts to feel like mechanical labor. Moreover, when a form has many buttons, wouldn't I have to maintain just as many loading variables?

Looking at all this boilerplate makes my eyes tired, and the interface responds quickly anyway; sneaking in one fewer loading state probably won't be noticed, right🌚? But what if the server goes down… never mind, no time to think about that now.

Have you ever experienced the scenario above? In fact, most products’ buttons do not have a loading effect because the whole world is just a big slapdash operation😂. However, as a qualified front-end developer, everyone needs to be responsible for user experience!

Can we just make money standing tall?

Let’s sort it out first:

  1. Within a short period of time, each event will generate a promise, and the core requirement is to reduce frequency. That is, “for three thousand promises, I only take one result.”

  2. The response time of a promise is uncertain.

Frequency reduction

Recall how we reduce event frequency in synchronous code: throttle and debounce. I believe you are already very familiar with these two, so let's summarize them in one sentence:

Both take one call out of multiple identical events within a unit of time (or, put another way: of three thousand events, I execute only one); the difference is that throttle takes the first occurrence while debounce takes the last.
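To make the contrast concrete, a minimal synchronous debounce might look like this (a sketch; the 300 ms default is an arbitrary choice):

```js
/**
 * @description debouncing: only the last call within `ms` is executed
 * @param {function} fn
 * @param {number} ms milliseconds
 * @returns {function} Debounced Function
 */
function debounce(fn, ms = 300) {
  let timer;
  return function debounced(...args) {
    // Each new call cancels the previous pending one,
    // so only the final call in a burst survives the timeout
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), ms);
  };
}
```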

Let's rephrase our requirement in the same style: take one call out of multiple identical events within a short period of time. So, this “short period of time” is the key!

Interval redefinition

We want all subsequent attempts to create new promises to be discarded until the previous promise settles. Therefore, “within a short period of time” equals “while the previous promise is pending”, and discarding all subsequent promise-creation operations means “taking the first occurrence”. A promise can be “discarded” by returning one that stays pending forever. Our requirement thus becomes:

During the pending period of the previous promise, take only the first operation out of multiple attempts to create new promises (which would be this currently pending promise) for execution.

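The “forever pending” trick is worth a quick demonstration: a promise whose executor never calls resolve or reject simply never settles, so anything chained onto it is silently discarded:

```js
// A promise that never settles: the executor ignores resolve and reject
const forever = new Promise(() => {});

let settled = false;
forever.then(() => {
  settled = true;
});

// Even much later, no callback has fired
setTimeout(() => console.log("settled:", settled), 100); // prints "settled: false"
```

Note that any code awaiting a discarded promise never resumes; for fire-and-forget event handlers, that is exactly what we want.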

Now that we've worked out the idea, let's look at some code. Here is a simple implementation of throttling:

```js
/**
 * @description throttling
 * @param {function} fn
 * @param {number} ms milliseconds
 * @returns {function} Throttled Function
 */
function throttle(fn, ms = 300) {
  let lastInvoke = 0;
  return function throttled(...args) {
    const now =;
    if (now - lastInvoke < ms) return;
    lastInvoke = now;
    return, ...args);
  };
}
```

Following that pattern, we only need to modify it slightly:

```js
/**
 * @description Asynchronous throttling: while the previous promise is pending, it will not be triggered again
 * @param {(...args: any[]) => Promise<any>} fn
 * @returns {(...args: any[]) => Promise<any>} Throttled Function
 */
function throttleAsyncResult(fn) {
  let isPending = false;
  return function (...args) {
    // Discard calls made while the previous promise is still pending
    if (isPending) return new Promise(() => {});
    isPending = true;
    return fn
      .call(this, ...args)
      .then((...args1) => {
        isPending = false;
        return Promise.resolve(...args1);
      })
      .catch((...args2) => {
        isPending = false;
        return Promise.reject(...args2);
      });
  };
}
```


The following demo takes a network request as an example; open DevTools to see the effect.

```tsx
import { throttleAsyncResult } from "@bowencool/async-utilities";

/* make a network request */
function api(data: { msg: string }) {
  console.log("submitting", data);
  return fetch("", {
    body: JSON.stringify(data),
    method: "POST",
    mode: "cors",
  });
}

const throttledApi = throttleAsyncResult(api);

export default function ThrottleAsyncResultDemo() {
  return (
    <button
      onClick={async function () {
        const rez = await throttledApi({
          msg: "some data to be sent",
        });
      }}
    >
      submit (click me quickly)
    </button>
  );
}
```

When you open the developer tools, you can see that no matter how fast you click, requests are never made in parallel:


Mission accomplished!

A twin brother


The throttleAsyncResult above is about controlling how a promise is created. But what if many promises have already been created — how can we get the latest result? After all, nobody knows which promise will finish first.

That is what debounceAsyncResult (Demo) is for: among the many created promises, take the result of the last one created.
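The article doesn't show its implementation, but the idea can be sketched like this (a hypothetical version, not necessarily the library's actual code): tag each invocation with an increasing id, and let only the most recent invocation settle the returned promise:

```js
/**
 * @description Asynchronous debouncing (sketch): among many created promises,
 * only the last created one is allowed to settle the returned promise.
 * See @bowencool/async-utilities for the real implementation.
 */
function debounceAsyncResult(fn) {
  let latest = 0;
  return function (...args) {
    const id = ++latest; // mark this invocation
    return new Promise((resolve, reject) => {
      fn.call(this, ...args).then(
        (value) => {
          // Only the most recent invocation may settle;
          // stale invocations leave their promise pending forever
          if (id === latest) resolve(value);
        },
        (err) => {
          if (id === latest) reject(err);
        },
      );
    });
  };
}
```

Unlike throttleAsyncResult, every call here still fires the underlying request; only the results of stale calls are suppressed — again by leaving their promises forever pending.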

“Being lazy” is a programmer's primary productive force. Have you learned the trick🤔?
