How to Optimize Data Fetching in React for Better Performance
Learn how to optimize data fetching in React by understanding initial vs on‑demand requests, avoiding waterfall patterns, leveraging Promise.all, parallel promises, data providers, and browser limits, while applying best practices for useEffect, lifecycle handling, and performance‑focused component design.
Introduction
If you have ever thought about data fetching in React, you will notice many moving parts: a plethora of state‑management libraries, GraphQL debates, useEffect‑induced waterfalls, and experimental Suspense. This article asks what the "correct" way to fetch data in React is and works toward an answer.
Data Fetching Classification
Modern front‑end data fetching can be roughly divided into two categories: initial data fetching (loading data before the component appears on screen) and on‑demand data fetching (requesting data after user interaction to improve interactivity). Both share the same core principles, but initial fetching is crucial for first‑time user experience.
React Data Fetching and Library Support
Whether you need a third‑party library depends on the scenario. For a simple one‑time request you can use fetch directly inside useEffect:
<code>const Component = () => {
  const [data, setData] = useState();
  useEffect(() => {
    // fetch data on mount
    const dataFetch = async () => {
      const data = await (
        await fetch("https://run.mocky.io/v3/b3bcb9d2-d8e9-43c5-bfb7-0062c85be6f9")
      ).json();
      setData(data);
    };
    dataFetch();
  }, []);
  return <>...</>;
};</code>Complex scenarios raise questions about error handling, caching, race conditions, request cancellation, and memory leaks. You can either reinvent the wheel or rely on mature libraries such as axios (which abstracts cancellation) or swr (which handles caching, revalidation, and more). Understanding the fundamentals remains essential.
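Handled by hand, cancellation and race protection typically look like an AbortController wired into the effect cleanup. A minimal sketch (the endpoint is a placeholder):

```jsx
const Component = () => {
  const [data, setData] = useState();
  useEffect(() => {
    const controller = new AbortController();
    fetch('/some-endpoint', { signal: controller.signal })
      .then((r) => r.json())
      .then(setData)
      .catch((error) => {
        // an aborted request rejects with AbortError; ignore it
        if (error.name !== 'AbortError') throw error;
      });
    // cancel the in-flight request on unmount (or effect re-run),
    // preventing state updates on an unmounted component
    return () => controller.abort();
  }, []);
  return <>...</>;
};
```

This is exactly the kind of bookkeeping that libraries like axios or swr take off your hands.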
Performance of a React Application
Performance is not just render time; for asynchronous data fetching it involves perceived latency. The article uses an issue‑tracking app example with a sidebar, main issue view, and comments section. Three implementation strategies are compared:
1. Show a loading spinner until all data is ready, then render everything (≈3 s total).
2. Render the sidebar after its data loads (1 s) while keeping the rest loading (≈4 s total).
3. Render the main issue first (2 s), then the sidebar (1 s), then comments (2 s) (≈5 s total).
The fastest total time is the first approach, but it leaves the user with a blank screen. The second approach shows content sooner but delays the main area. The third approach respects natural reading order but has the longest overall time. Choosing a strategy depends on storytelling priorities and user‑perceived performance.
React Lifecycle and Data Fetching
Component mounting order matters. In the example below, the Child component’s useEffect will not run until the Parent renders it:
<code>const Child = () => {
  useEffect(() => {
    // fetch data for Child
  }, []);
  return <div>Some child</div>;
};

const Parent = () => {
  const [isLoading, setIsLoading] = useState(true);
  // Child's effect runs only after isLoading flips to false
  // and Child is actually rendered
  if (isLoading) return 'loading';
  return <Child />;
};</code>Even if you create a child variable before the conditional return, the effect is not triggered because the element is never rendered. Understanding when React actually mounts a component is key to avoiding hidden waterfalls.
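To make that point concrete, here is a sketch in which the element exists as a variable but its effect still never fires:

```jsx
const Parent = () => {
  // the element object is created here, but that alone does not
  // mount the component: useEffect inside Child never runs
  const child = <Child />;
  const [isLoading] = useState(true);
  if (isLoading) return 'loading'; // Child is never rendered
  return child;
};
```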
Browser Limits and Data Fetching
Browsers limit parallel requests per host (≈6 in Chrome). Issuing many simultaneous requests can saturate this limit, causing queuing and slower perceived performance. For example, firing six 10‑second requests before the main app adds a 10‑second delay even if the app itself is fast.
<code>fetch('https://some-url.com/url1');
fetch('https://some-url.com/url2');
fetch('https://some-url.com/url3');
fetch('https://some-url.com/url4');
fetch('https://some-url.com/url5');
fetch('https://some-url.com/url6');
</code>Removing any one of these requests reduces total load time.
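When many requests are unavoidable, their concurrency can be capped by hand with a shared queue and a fixed number of workers. A minimal, framework‑free sketch (fetchWithLimit is an illustrative helper, not a real API; the limit of 6 is a rough HTTP/1.1‑era figure):

```javascript
// Fetch `urls` with at most `limit` requests in flight at once.
// Results come back in the same order as the input URLs.
const fetchWithLimit = (urls, limit = 6) => {
  const results = new Array(urls.length);
  let next = 0; // index of the next URL to claim
  const worker = async () => {
    while (next < urls.length) {
      const i = next++; // claim one URL from the shared queue
      results[i] = await fetch(urls[i]).then((r) => r.json());
    }
  };
  // start `limit` workers; each pulls the next URL as it finishes one
  const workers = Array.from(
    { length: Math.min(limit, urls.length) },
    () => worker()
  );
  return Promise.all(workers).then(() => results);
};
```

Keeping the limit below the browser’s cap leaves a connection slot free for requests the rest of the page needs.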
Causes of Request Waterfall
When each component fetches its own data after being rendered, a classic waterfall appears: parent fetches, renders child, child fetches, etc. This pattern is demonstrated with the issue‑tracker components.
<code>const App = () => {
  return (
    <>
      <Sidebar />
      <Issue />
    </>
  );
};
</code>Solutions to Request Waterfall
Promise.all Approach
Trigger all requests at the top level and await them in parallel. Using
Promise.allreduces the total waiting time to the longest individual request.
<code>useEffect(() => {
  // the effect callback itself must not be async,
  // so the async work lives in an inner function
  const dataFetch = async () => {
    const [sidebar, issue, comments] = await Promise.all([
      fetch('/get-sidebar'),
      fetch('/get-issue'),
      fetch('/get-comments')
    ]);
  };
  dataFetch();
}, []);
</code>After fetching, store each result in state and pass them down as props. This improves performance but may cause multiple top‑level state updates and re‑renders.
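Put together, the top‑level component might look like the sketch below (component and endpoint names follow the earlier examples; a single state object keeps it to one update and one re‑render):

```jsx
const App = () => {
  const [data, setData] = useState(null);
  useEffect(() => {
    const dataFetch = async () => {
      const [sidebar, issue, comments] = await Promise.all(
        ['/get-sidebar', '/get-issue', '/get-comments'].map((url) =>
          fetch(url).then((r) => r.json())
        )
      );
      // one setState call, one re-render for all three results
      setData({ sidebar, issue, comments });
    };
    dataFetch();
  }, []);
  if (!data) return 'loading';
  return (
    <>
      <Sidebar data={data.sidebar} />
      <Issue issue={data.issue} comments={data.comments} />
    </>
  );
};
```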
Parallel Promise Approach
If you don’t need to wait for all data, fire each fetch with .then and update state independently. The UI can render parts as soon as their data arrives.
<code>fetch('/get-sidebar').then(r => r.json()).then(data => setSidebar(data));
fetch('/get-issue').then(r => r.json()).then(data => setIssue(data));
fetch('/get-comments').then(r => r.json()).then(data => setComments(data));
</code>The app can render the sidebar once its data is ready while showing loading placeholders for the issue and comments.
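With each result in its own state, rendering can follow each piece independently; a sketch:

```jsx
const App = () => {
  const [sidebar, setSidebar] = useState();
  const [issue, setIssue] = useState();
  useEffect(() => {
    fetch('/get-sidebar').then((r) => r.json()).then(setSidebar);
    fetch('/get-issue').then((r) => r.json()).then(setIssue);
  }, []);
  return (
    <>
      {/* each area appears as soon as its own data arrives */}
      {sidebar ? <Sidebar data={sidebar} /> : 'loading'}
      {issue ? <Issue issue={issue} /> : 'loading'}
    </>
  );
};
```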
Data Providers (Context) Abstraction
Encapsulate each request in a React context provider so that any component can consume the data without prop‑drilling.
<code>const CommentsContext = React.createContext();

export const CommentsDataProvider = ({ children }) => {
  const [comments, setComments] = useState();
  useEffect(() => {
    fetch('/get-comments').then((r) => r.json()).then(setComments);
  }, []);
  return (
    <CommentsContext.Provider value={comments}>
      {children}
    </CommentsContext.Provider>
  );
};

export const useComments = () => useContext(CommentsContext);
</code>Wrap the app with the three providers (sidebar, issue, comments) so that each request starts as soon as the provider mounts, eliminating prop‑drilling and keeping components focused on rendering.
<code>export const VeryRootApp = () => (
  <SidebarDataProvider>
    <IssueDataProvider>
      <CommentsDataProvider>
        <App />
      </CommentsDataProvider>
    </IssueDataProvider>
  </SidebarDataProvider>
);
</code>Fetching Data Before React
Moving a fetch call outside of a component makes the request start as soon as the JavaScript bundle loads, before any React lifecycle runs. This can eliminate waterfalls but removes control: the request is no longer tied to component visibility and may consume one of the limited parallel slots.
<code>const commentsPromise = fetch('/get-comments');

const Comments = () => {
  const [comments, setComments] = useState();
  useEffect(() => {
    const dataFetch = async () => {
      // the request is already in flight; we only await its result
      const data = await (await commentsPromise).json();
      setComments(data);
    };
    dataFetch();
  }, []);
  return <>...</>;
};
</code>Such “pre‑React” fetching is useful for route pre‑loading or lazy‑loaded components, but should be used sparingly.
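One place pre‑React fetching pairs naturally is with code splitting: the data request and the lazy chunk can start downloading together. A sketch (the './Comments' module path is hypothetical):

```jsx
// both the data request and the component chunk start loading
// as soon as this module is evaluated
const commentsPromise = fetch('/get-comments');
const Comments = React.lazy(() => import('./Comments'));
```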
Using Third‑Party Libraries
Libraries like axios provide richer APIs (cancellation, interceptors) while still behaving like fetch. React‑specific libraries such as swr wrap the whole fetching‑state‑caching cycle into a hook, simplifying component code.
<code>const fetcher = (url) => fetch(url).then((r) => r.json());
const { data } = useSWR('/get-comments', fetcher);
</code>About Suspense
Suspense is still experimental for data fetching. It mainly replaces manual loading UI with a declarative <Suspense fallback="loading"> wrapper. The underlying performance considerations—browser limits, lifecycle timing, and request ordering—remain unchanged.
<code>const Issue = () => (
  <>
    {/* issue data */}
    <Suspense fallback="loading">
      <Comments />
    </Suspense>
  </>
);
</code>Conclusion
Fetching data in React does not require third‑party libraries, though they can be helpful.
Performance is subjective and always tied to user experience.
Browsers limit parallel requests (≈6); avoid excessive pre‑fetching.
useEffect itself does not cause waterfalls; component composition and loading strategies do.
KooFE Frontend Team