Hacking Shared Mutable React Refs and Solving Realtime Performance Issues Inspired by Making Games - Practical Example

Looking into heavy React components that crunch deeply nested data very frequently


If you haven't read the previous blog post in this series, you can read it here

To give you a recap, here's what we are required to build for our "hypothetical" application

Assume you are tasked with building a real-time analytics dashboard where...

  • there are two views, a real-time dashboard view and an on-demand tabular view

  • the dashboard should contain multiple React components for data visualization on a single screen

  • the data for the dashboard is updated frequently through a websocket, at a steady rate of 15 messages per second

  • the data for the tabular view should be static and only be updated when the user clicks the refresh button on the UI

  • the data from the websocket can be deeply nested, and the contents of the messages are highly random even though the structure stays the same (a hypothetical message shape is sketched below)
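
For illustration, a single websocket message might look something like this; the field names here are purely invented, only the nesting and the per-message randomness matter:

{
  "timestamp": 1700000000000,
  "metrics": {
    "cpu": { "usage": 73.4, "cores": [{ "id": 0, "load": 0.81 }, { "id": 1, "load": 0.66 }] },
    "memory": { "used": 412, "total": 1024 }
  },
  "events": [
    { "type": "alert", "severity": "high", "meta": { "source": "node-7" } }
  ]
}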

Starting off

Let's build the real-time dashboard alone first, and then we can extend the idea to the tabular view.

To start, assume we have created all of the components required to visualize the websocket data, enclosed in a top-level component called <Dashboard />. The dashboard component connects to the websocket on mount and disconnects on unmount. Pretty simple, right?

import { useEffect, useState } from "react";

const Dashboard = () => {
  const [data, setData] = useState(null);

  useEffect(() => {
    const socket = new WebSocket('ws://someurl');
    socket.addEventListener('message', (event) => {
      setData(JSON.parse(event.data))
    });
    return () => {
      socket.close();
    };
  }, []);

  return <>
    {/* renders all the components using the "data" state */}
  </>
}

function App() {
  return <Dashboard />
}

This works! But you immediately notice that whenever the websocket starts streaming messages at a high frequency, the entire screen lags and the application becomes unusable! To solve this, we implement some trivial optimizations like...

  • memoizing the child components in <Dashboard />

  • using useMemo() to memoize complex calculations inside the child components (a rough sketch follows this list)
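
As a rough sketch (the component and helper names here are invented for illustration), those optimizations look something like this:

import { memo, useMemo } from "react";

// memo() lets the chart skip re-rendering as long as its props are referentially equal
const CpuChart = memo(({ cpu }) => {
  // useMemo caches the derived chart series until "cpu" changes
  const series = useMemo(() => buildChartSeries(cpu), [cpu]);
  return <LineChart series={series} />;
});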

This can solve some problems, but since the randomness (entropy) in the contents of the data is very high, memoization rarely gets a cache hit: the props change with almost every message, so every component still re-renders on each websocket message. So precisely, every component re-renders 15 times per second (as per our use case). This is a huge bottleneck for our imaginary application, which becomes unusable whenever a wave of websocket messages comes in!

Throttling Websocket Messages

Since we have a stream of websocket messages coming in at a fast pace, we need to limit it somehow before it destroys our application. So, we throttle the websocket messages with a constant throttle interval.

A demo of a throttled event and a regular event

Throttling a function foo() with a throttle time of 3000ms means that foo() will be called at most once every 3000ms. That means when a stream of websocket messages arrives, we have clear control over when we update the state. We just went from an uncontrollable, chaotic state to a controlled one, which is a great improvement.
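
In the code that follows, throttle can come straight from lodash; here is a minimal hand-rolled version just to make the behavior concrete (unlike lodash's default, it simply drops the in-between calls rather than scheduling a trailing one):

// calls fn at most once per "wait" ms; calls that arrive in between are simply dropped
function throttle(fn, wait) {
  let lastCall = 0;
  return (...args) => {
    const now = Date.now();
    if (now - lastCall >= wait) {
      lastCall = now;
      fn(...args);
    }
  };
}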

Since we are throttling at an interval of 3000ms, we would lose the intermediate data that arrives from the websocket while the function is throttled. So, we use a ref to act as a buffer, a container that stores the incoming data in any space-efficient data structure, independent of the throttling.

Since refs can be mutated in place without triggering a re-render, it's much cheaper to update a ref frequently than to update a state variable frequently

Here's a high-level code example...

import { useCallback, useEffect, useRef, useState } from "react";
// throttle can come from lodash (or the minimal version sketched above)
import throttle from "lodash/throttle";

const Dashboard = () => {
  // can be a space-efficient hashmap of messages
  const dataBufferRef = useRef({});
  const [data, setData] = useState(null);

  // this triggers re-render
  const flushBufferDataIntoState = () => {
    // use dataBufferRef to update the state
    setData(getNewStateFromDataBuffer(dataBufferRef.current));
  }

  // this does not trigger re-render
  const putWSDataIntoBufferRef = (wsData) => {
    // some algorithm to set the data into buffer
    dataBufferRef.current = updateDataBuffer(dataBufferRef.current, wsData);
  }

  // throttling for 3000ms
  const throttledDataUpdate = useCallback(
    throttle((wsData) => {
      flushBufferDataIntoState();
    }, 3000),
    []
  );

  // updates buffer with latest data and throttles the re-render
  const processWebSocketMessage = (message) => {
    putWSDataIntoBufferRef(message);
    throttledDataUpdate();
  }

  useEffect(() => {
    const socket = new WebSocket("ws://someurl");
    socket.addEventListener("message", (event) => {
      processWebSocketMessage(JSON.parse(event.data))
    });
    return () => {
      // clear the buffer
      dataBufferRef.current = {}
      socket.close();
    };
  }, []);

  return <>{/* renders all the components with the "data" state */}</>;
};

function App() {
  return <Dashboard />;
}

This could potentially reduce the lag and make the application finally usable!
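
In the snippet above, updateDataBuffer and getNewStateFromDataBuffer are placeholders whose implementation depends entirely on your data. As a minimal sketch, assuming each message carries an id we can key on:

// merge an incoming message into the buffer, keyed by a hypothetical "id" field;
// mutating the object behind the ref is fine since it never triggers a render
const updateDataBuffer = (buffer, wsData) => {
  buffer[wsData.id] = wsData;
  return buffer;
};

// produce a fresh object so setData() sees a new reference and actually re-renders
const getNewStateFromDataBuffer = (buffer) => ({ ...buffer });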

Moving the State up and Sharing the Ref

Now for the tabular view: we know that both views use the same data from the WebSocket. Sharing it can easily be achieved by moving the state up to a parent component and sharing just the ref with the tabular view!

import { useCallback, useEffect, useRef, useState } from "react";
// throttle can come from lodash (or the minimal version sketched earlier)
import throttle from "lodash/throttle";

const TabularView = ({ dataRef }) => {
  const [data, setData] = useState(null);

  const renderDataFromBuffer = () => {
    setData(getNewStateFromDataBuffer(dataRef.current))
  }

  return (
    <>
      <button onClick={renderDataFromBuffer}>refresh</button>
      <Table data={data} />
    </>
  );
};

const Dashboard = ({ data }) => {
  return <>{/* renders all the components with the "data" state */}</>;
};

function AppContainer() {
  // can be a space-efficient hashmap of messages
  const dataBufferRef = useRef({});
  const [data, setData] = useState(null);

  // this triggers re-render
  const flushBufferDataIntoState = () => {
    // use dataBufferRef to update the state
    setData(getNewStateFromDataBuffer(dataBufferRef.current));
  };

  // this does not trigger re-render
  const putWSDataIntoBufferRef = (wsData) => {
    // some algorithm to set the data into buffer
    dataBufferRef.current = updateDataBuffer(dataBufferRef.current, wsData);
  };

  // throttling for 3000ms
  const throttledDataUpdate = useCallback(
    throttle((wsData) => {
      flushBufferDataIntoState();
    }, 3000),
    []
  );

  // updates buffer with latest data and throttles the re-render
  const processWebSocketMessage = (message) => {
    putWSDataIntoBufferRef(message);
    throttledDataUpdate();
  };

  useEffect(() => {
    const socket = new WebSocket("ws://someurl");
    socket.addEventListener("message", (event) => {
      processWebSocketMessage(JSON.parse(event.data));
    });
    return () => {
      // clear the buffer
      dataBufferRef.current = {};
      socket.close();
    };
  }, []);

  return (
    <Tabs>
      <Tab name="Realtime Dashboard">
        <Dashboard data={data} />
      </Tab>
      <Tab name="Tabular View">
        <TabularView dataRef={dataBufferRef} />
      </Tab>
    </Tabs>
  );
}

Here we guarantee that none of the data coming in from the websocket is lost, and since we pass the entire buffer ref to <TabularView />, we get the best performance and up-to-date data in the tabular view at no extra cost.

That's how I solved the performance issues on one of my projects: before these changes the UI was literally unusable and there were random FPS drops. After all these fixes, the UI became snappy and felt natural to use.

It doesn't stop here: based on your use case, you can extend this to efficiently mutate the data ref inside any component that consumes it (which can lead to unexpected side effects when not done carefully). Components that consume the ref can update or re-render themselves on demand or asynchronously. Hence the term "shared mutable ref object".
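
For example (a purely hypothetical consumer; the Annotations and NotesList names are invented), a component can write into the shared ref directly and decide for itself when to re-render:

import { useReducer } from "react";

// a hypothetical consumer that mutates the shared buffer and re-renders only on demand
const Annotations = ({ dataRef }) => {
  const [, forceRender] = useReducer((n) => n + 1, 0);

  const addAnnotation = (note) => {
    // mutate the shared ref directly; no other component re-renders because of this
    dataRef.current.annotations = [...(dataRef.current.annotations || []), note];
    // re-render only this component, and only when we decide to
    forceRender();
  };

  return <NotesList notes={dataRef.current.annotations || []} onAdd={addAnnotation} />;
};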

This may be considered hacky and an antipattern. But as long as it solves huge UX problems for the end users, it doesn't matter.

~ ciao 🌻
