How I Improved App Performance by 40% with React Rendering Optimization


The Performance Problem

A client's analytics dashboard had become painfully slow. Users reported noticeable lag when switching between views, and the interactive charts took several seconds to respond to filter changes. The React DevTools Profiler revealed that a single filter change was triggering re-renders across 847 components, most of which had no reason to update.

The application had grown organically over eighteen months. What started as a clean component tree had accumulated layers of context providers, inline callbacks, and unoptimized list renders. Performance degraded gradually enough that no single commit was the culprit, which made the investigation harder.

Diagnosing the Root Causes

The three biggest culprits were inline object creation in props, a context provider wrapping the entire app with frequently changing values, and large list components rendering without virtualization. Each problem compounded the others: the context change triggered a tree-wide re-render, and every list item re-created its inline style objects on each render.

// Before: new object reference every render
<ChartCard style={{ padding: 16, margin: 8 }} data={data} />

// After: stable reference
const chartCardStyle = { padding: 16, margin: 8 } as const;
<ChartCard style={chartCardStyle} data={data} />

I used React DevTools Profiler's "Why did this render?" feature extensively. It pointed to three patterns that accounted for roughly 90% of the unnecessary renders: new object references in props, unstable callback references, and context consumers that subscribed to values they did not use.
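The same check can be approximated outside DevTools with a small helper that shallow-compares two props objects and reports which keys changed by reference. This is a sketch for illustration, not code from the dashboard: keys whose contents are identical but whose references are fresh (inline objects, inline callbacks) still show up as "changed."

```typescript
// Reports which props changed by reference between two renders.
// Keys that are deep-equal but referentially new (inline objects,
// inline callbacks) are flagged even though their contents match.
function changedProps(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): string[] {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter((key) => !Object.is(prev[key], next[key]));
}

// An inline style object is a new reference on every render,
// so it is flagged even though its contents are identical.
const renderA = { data: [1, 2], style: { padding: 16 } };
const renderB = { data: renderA.data, style: { padding: 16 } };
console.log(changedProps(renderA, renderB)); // ["style"]
```

Running a helper like this inside a `useEffect` during development is a quick way to confirm which prop is defeating a memoized child.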

Applying React.memo Strategically

The first instinct when encountering re-render issues is to wrap everything in React.memo. This is usually a mistake. Memoization has a cost: React must perform a shallow comparison of all props on every render. If the component is cheap to render or its props change frequently, React.memo adds overhead without benefit.

I profiled each component to identify the ones where memoization would have the highest impact: components that are expensive to render and receive stable props most of the time. The data table rows and chart components were the clear winners.

interface MetricCardProps {
  title: string;
  value: number;
  change: number; // period-over-period change, in percent
  trend: "up" | "down" | "flat";
  formatFn: (val: number) => string;
}

const MetricCard = React.memo<MetricCardProps>(
  ({ title, value, change, trend, formatFn }) => {
    const trendIcon = trend === "up" ? "arrow-up" : trend === "down" ? "arrow-down" : "minus";
    const trendColor = trend === "up" ? "text-green-600" : trend === "down" ? "text-red-600" : "text-gray-500";

    return (
      <div className="rounded-lg border bg-white p-6 shadow-sm">
        <p className="text-sm font-medium text-gray-500">{title}</p>
        <p className="mt-2 text-3xl font-semibold">{formatFn(value)}</p>
        <div className={`mt-2 flex items-center ${trendColor}`}>
          <Icon name={trendIcon} size={16} />
          <span className="ml-1 text-sm">{Math.abs(change)}%</span>
        </div>
      </div>
    );
  }
);

MetricCard.displayName = "MetricCard";

The key insight is that React.memo only works when the props are actually stable between renders. If you pass a new callback function or a new object on every render, the shallow comparison always fails and the memoization does nothing. This is where useMemo and useCallback become essential.
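React.memo's default comparison is essentially a shallow equality check over the props object, and sketching it in plain TypeScript makes the failure mode obvious. This mirrors React's behavior for illustration; it is not React's actual source:

```typescript
// Approximation of the shallow comparison React.memo performs by default.
function shallowEqual(
  prev: Record<string, unknown>,
  next: Record<string, unknown>
): boolean {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

const stableStyle = { padding: 16 };
const stableHandler = () => {};

// Stable references: comparison passes, the memoized component can skip rendering.
console.log(
  shallowEqual(
    { style: stableStyle, onClick: stableHandler },
    { style: stableStyle, onClick: stableHandler }
  )
); // true

// Fresh inline object and arrow function: comparison fails on every render.
console.log(
  shallowEqual(
    { style: { padding: 16 }, onClick: () => {} },
    { style: { padding: 16 }, onClick: () => {} }
  )
); // false
```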

Stabilizing References with useMemo and useCallback

The dashboard had a filter panel that passed callback functions down to child components. Every time the parent re-rendered, these callbacks were recreated, which invalidated the memoization on the children.

function DashboardFilters({ onFilterChange, filters }: FilterProps) {
  // useCallback keeps these handler references stable between renders.
  // Without it, each parent render would create new functions and force
  // every memoized child that receives them to re-render.
  const handleDateChange = useCallback(
    (start: Date, end: Date) => {
      onFilterChange((prev: FilterState) => ({ ...prev, start, end }));
    },
    [onFilterChange]
  );

  const handleMetricToggle = useCallback(
    (metricId: string) => {
      onFilterChange((prev: FilterState) => ({
        ...prev,
        activeMetrics: prev.activeMetrics.includes(metricId)
          ? prev.activeMetrics.filter((id) => id !== metricId)
          : [...prev.activeMetrics, metricId],
      }));
    },
    [onFilterChange]
  );

  // Memoize derived data so the array reference stays stable
  // unless its inputs actually change
  const availableMetrics = useMemo(
    () =>
      ALL_METRICS.filter((m) => m.category === filters.category).map((m) => ({
        id: m.id,
        label: m.displayName,
        active: filters.activeMetrics.includes(m.id),
      })),
    [filters.category, filters.activeMetrics]
  );

  return (
    <div className="flex gap-4">
      <DateRangePicker onChange={handleDateChange} value={filters} />
      <MetricSelector metrics={availableMetrics} onToggle={handleMetricToggle} />
    </div>
  );
}

The useMemo for availableMetrics was particularly impactful. The filtering and mapping operation was not expensive on its own, but its result was passed as a prop to a memoized MetricSelector. Without useMemo, the new array reference on every render defeated the React.memo wrapper on the selector.

Virtualizing Large Lists

The data tables were the single biggest performance bottleneck. One view rendered a table with 2,400 rows, each containing six columns with formatted numbers and interactive elements. The browser was creating and maintaining DOM nodes for every row, even though only about 15 were visible at any time.

I replaced the naive .map() rendering with react-window, which only renders the rows currently in the viewport plus a small overscan buffer.

import { FixedSizeList as List } from "react-window";
import AutoSizer from "react-virtualized-auto-sizer";

interface DataRow {
  id: string;
  name: string;
  revenue: number;
  growth: number;
  status: string;
  lastUpdated: Date;
}

function VirtualizedDataTable({ data }: { data: DataRow[] }) {
  const Row = useCallback(
    ({ index, style }: { index: number; style: React.CSSProperties }) => {
      const row = data[index];
      return (
        <div style={style} className="flex items-center border-b px-4 hover:bg-gray-50">
          <span className="w-1/4 truncate font-medium">{row.name}</span>
          <span className="w-1/6 text-right">${row.revenue.toLocaleString()}</span>
          <span className={`w-1/6 text-right ${row.growth >= 0 ? "text-green-600" : "text-red-600"}`}>
            {row.growth >= 0 ? "+" : ""}{row.growth.toFixed(1)}%
          </span>
          <span className="w-1/6">{row.status}</span>
          <span className="w-1/4 text-gray-500 text-sm">
            {row.lastUpdated.toLocaleDateString()}
          </span>
        </div>
      );
    },
    [data]
  );

  return (
    <AutoSizer>
      {({ height, width }) => (
        <List
          height={height}
          width={width}
          itemCount={data.length}
          itemSize={48}
          overscanCount={5}
        >
          {Row}
        </List>
      )}
    </AutoSizer>
  );
}

The overscan count of 5 means react-window renders 5 extra rows above and below the viewport. This prevents flashing when scrolling at moderate speeds. For fast scrolling, we added a placeholder skeleton row that displays during the brief moment before the real content renders.
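The windowing arithmetic behind a fixed-size virtual list is easy to sketch: from the scroll offset, item height, and viewport height, compute the index range to mount. This illustrates the technique, not react-window's internals:

```typescript
// Given a scroll position, compute which fixed-height rows to mount.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemSize: number,
  itemCount: number,
  overscan: number
): { start: number; end: number } {
  const first = Math.floor(scrollTop / itemSize);
  const visible = Math.ceil(viewportHeight / itemSize);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount - 1, first + visible + overscan),
  };
}

// 2,400 rows of 48px in a 720px viewport: scrolled to row 100,
// only 26 rows (15 visible + overscan on each side) are mounted.
const range = visibleRange(4800, 720, 48, 2400, 5);
console.log(range); // { start: 95, end: 120 }
```

Everything outside that range is simply not in the DOM, which is where the node-count and heap savings come from.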

Splitting the Monolithic Context

The application had a single AppContext that held user preferences, active filters, cached data, sidebar state, and notification counts. Any change to any of these values re-rendered every component that consumed the context, regardless of which value they actually used.

I split this into focused providers with clear boundaries:

// Before: one context for everything
const AppContext = createContext<{
  user: User;
  filters: FilterState;
  cache: DataCache;
  sidebarOpen: boolean;
  notifications: Notification[];
}>(/* ... */);

// After: focused contexts that change independently
const UserContext = createContext<UserContextValue>(/* ... */);
const FilterContext = createContext<FilterContextValue>(/* ... */);
const DataCacheContext = createContext<DataCacheContextValue>(/* ... */);
const UIContext = createContext<UIContextValue>(/* ... */);

Components now subscribe only to the context they need. The sidebar subscribes to UIContext and UserContext, but not FilterContext. The chart components subscribe to FilterContext and DataCacheContext, but not UIContext. A filter change no longer re-renders the sidebar, header, or notification panel.

Measuring the Impact

After the optimizations, the numbers told the story clearly:

| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Lighthouse Performance Score | 54 | 89 | +65% |
| Interaction to Next Paint | 380ms | 120ms | -68% |
| Filter-to-chart update | 1.8s | 390ms | -78% |
| Component re-renders per filter change | 847 | 38 | -95% |
| DOM nodes (data table view) | 2,400+ | ~60 | -97% |
| JS heap size (data table) | 48MB | 12MB | -75% |

The total time investment was about three days of focused profiling and refactoring. The Lighthouse score improvement from 54 to 89 was the headline number for stakeholders, but the metric that mattered most to users was the filter-to-chart-update cycle dropping from 1.8 seconds to under 400 milliseconds. That was the interaction they performed dozens of times per session.

Common Pitfalls

Memoizing everything. Wrapping every component in React.memo adds comparison overhead. Profile first, then memoize the components where the cost of re-rendering exceeds the cost of comparing props. In our case, only about 15% of components benefited from memoization.

Unstable dependency arrays. A useCallback with an inline object in its dependency array recreates on every render, defeating the purpose entirely. Ensure every dependency is a stable reference or a primitive value.

Forgetting displayName. When you wrap components in React.memo or forwardRef, the React DevTools display them as "Anonymous." Adding displayName seems trivial, but it makes profiling sessions dramatically more productive when you are trying to find the expensive component in a tree of hundreds.

Virtualizing too aggressively. We initially virtualized a list of 30 items, which added complexity without meaningful performance gain. Virtualization pays off when the list exceeds roughly 100 items. Below that threshold, the overhead of the virtualization library and the dynamic measurement logic is not worth it.

Ignoring the network waterfall. Rendering optimization means nothing if the data takes three seconds to arrive. We combined our rendering work with API response optimization, adding pagination and field selection to the endpoints that powered the heaviest views. The combined effect was greater than either change in isolation.
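Pagination and field selection can be as simple as shaping the request URL so the server returns only the columns a view actually renders. The endpoint and parameter names below are illustrative, not the client's actual API:

```typescript
// Build a paginated, field-selected request URL. Hypothetical endpoint
// and parameter names, for illustration only.
function buildMetricsUrl(
  base: string,
  opts: { page: number; pageSize: number; fields: string[] }
): string {
  const params = new URLSearchParams({
    page: String(opts.page),
    pageSize: String(opts.pageSize),
    fields: opts.fields.join(","),
  });
  return `${base}?${params.toString()}`;
}

const url = buildMetricsUrl("/api/metrics", {
  page: 1,
  pageSize: 50,
  fields: ["name", "revenue", "growth"],
});
console.log(url); // /api/metrics?page=1&pageSize=50&fields=name%2Crevenue%2Cgrowth
```

Fetching 50 rows with three fields instead of 2,400 rows with every field shrinks both the payload and the amount of data the client has to hold in memory.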

Performance optimization in React is not about applying every technique you know. It is about measuring, identifying the actual bottlenecks, and applying the right fix to each one. The React DevTools Profiler is the most important tool in this process, and spending an hour learning its features will save days of guesswork.
