A practical, developer-friendly process to audit website speed using Lighthouse, PageSpeed Insights, and real-user data (CrUX). Includes tools, time/cost estimates, what to screenshot, and how to turn results into a fix backlog.
WebHouz
Most people "audit" their website performance by running one Lighthouse test, feeling vaguely bad about the score, and then not knowing what to do next. The result is usually a vague to-do list, a confused developer, or an expensive agency engagement that doesn't solve the right problem.
This guide gives you a repeatable workflow: capture the right data in the right order, interpret it correctly, and convert findings into a prioritised fix backlog that a developer can actually work from.
Before opening any tool, understand the difference between the two data sources you'll use:
Field data (real-user data): Collected from actual Chrome browser users who have visited your site. Available via PageSpeed Insights and the Chrome UX Report (CrUX). This is what Google uses for Core Web Vitals ranking signals. It reflects what your actual visitors experience.
Lab data (simulated data): Generated by tools like Lighthouse using a simulated device and network connection. Useful for debugging and reproducing issues in a controlled environment, but it does not reflect what real users experience.
The correct approach: use field data to understand your real problem, then use lab data to find and fix the cause.
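The field-data half of this workflow can also be queried programmatically via the CrUX API, the same data source PageSpeed Insights displays. As a minimal sketch (it assumes you have a Google Cloud API key provisioned for the Chrome UX Report API; the key itself is not shown), building the request for one URL looks like this:

```python
import json

# Endpoint for the Chrome UX Report (CrUX) API's queryRecord method.
CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_request(page_url: str, form_factor: str = "PHONE") -> tuple[str, str]:
    """Build the endpoint URL and JSON body for a CrUX queryRecord call.

    The actual call is an HTTP POST with ?key=YOUR_API_KEY appended to the
    endpoint; provisioning that key is assumed to happen separately.
    """
    body = json.dumps({"url": page_url, "formFactor": form_factor})
    return CRUX_ENDPOINT, body

endpoint, body = build_crux_request("https://example.com/pricing")
```

The response contains histograms and percentiles for LCP, INP, and CLS for that URL (falling back to origin-level data if the URL has too little traffic).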
For Australian market cost benchmarks: Website speed optimisation cost in Australia (2026).
Don't start with the homepage. Start with the pages that matter most to your business:
Testing five URLs is enough to identify patterns without drowning in data.
Run PageSpeed Insights (pagespeed.web.dev) for each of your five URLs.
For each URL, record:
Field data is the most important number in this entire audit. If your LCP field data is 4.2 seconds, that's what your real visitors are experiencing — not what you see on your fast broadband connection on a desktop machine.
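To make "what your real visitors are experiencing" concrete, the published Core Web Vitals thresholds can be encoded directly. This small helper (illustrative, not part of any tool above) classifies a field value the same way PageSpeed Insights colour-codes it:

```python
# Google's published Core Web Vitals thresholds:
# (good_limit, needs_improvement_limit); anything above the second is "poor".
THRESHOLDS = {
    "LCP": (2500, 4000),  # milliseconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.1, 0.25),   # unitless layout-shift score
}

def classify(metric: str, value: float) -> str:
    good, needs_improvement = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= needs_improvement:
        return "needs improvement"
    return "poor"

classify("LCP", 4200)  # the 4.2-second example above -> "poor"
```

Running your five URLs' field metrics through a check like this gives you an at-a-glance pass/fail grid before you open Lighthouse at all.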
In Chrome, open DevTools (F12 or right-click → Inspect), navigate to the Lighthouse tab, select Mobile, and run the report.
For each URL, record:
Take screenshots of the Opportunities and Diagnostics sections. These become the evidence base for your fix list.
Run each test two or three times and use the median score — Lighthouse results vary between runs due to network and CPU conditions.
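Taking the median across runs is a one-liner with the standard library; the scores below are hypothetical:

```python
from statistics import median

# Performance scores from three Lighthouse runs of the same URL
# (hypothetical numbers; yours will vary between runs).
runs = [62, 58, 71]
median_score = median(runs)  # -> 62
```

The median is preferable to the mean here because a single throttled or cache-cold run won't drag the recorded score.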
The LCP element is your first fix target on every page. Lighthouse will label it in the report — it is almost always one of:
For each LCP element, ask:
Does `font-display: swap` need to be set?

For the full definitions of LCP, INP, and CLS: What are Core Web Vitals? LCP, INP, CLS explained.
Open your site in Chrome, open DevTools → Network tab, filter by "Third-party" or look at the domain column. Make a list of every external domain loading scripts or resources.
Common categories:
For each script, decide:
Third-party scripts are the most common cause of poor INP scores. Removing even one or two heavy scripts can produce significant improvements.
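The inventory step above can be scripted if you export the Network panel as a HAR file ("Save all as HAR"). A minimal sketch, where the first-party domain is a placeholder you would swap for your own, counts requests per external domain:

```python
from collections import Counter
from urllib.parse import urlparse

def third_party_domains(har: dict, first_party: str) -> Counter:
    """Count requests per external domain in a parsed DevTools HAR export.

    `har` is the dict produced by json.load() on a HAR file saved from
    the Network panel.
    """
    counts = Counter()
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).netloc
        if not host.endswith(first_party):  # skip first-party requests
            counts[host] += 1
    return counts
```

Sorting the resulting counter by count gives you the "make a list of every external domain" output automatically, ready to paste into your audit notes.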
Put every finding from Lighthouse into one of these buckets:
| Bucket | Examples |
|--------|----------|
| Images | Format, size, responsive sizes, lazy loading, LCP preload |
| JavaScript | Bundle size, unused code, third-party scripts, long tasks |
| CSS and fonts | Render-blocking stylesheets, font-display, unused CSS |
| Server and hosting | TTFB, caching, CDN, slow database queries |
| Layout stability | Missing dimensions, late-loading banners, font swaps |
This classification step converts Lighthouse noise into a coherent plan.
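The bucketing itself can be a simple keyword heuristic over Lighthouse audit titles. A sketch (the keyword lists are illustrative; extend them to match the audits your reports actually surface):

```python
# Keyword heuristic mapping Lighthouse audit titles to the buckets above.
BUCKETS = {
    "Images": ["image", "lazy", "webp", "responsive"],
    "JavaScript": ["javascript", "unused code", "third-party", "long task", "bundle"],
    "CSS and fonts": ["css", "font", "render-blocking", "stylesheet"],
    "Server and hosting": ["ttfb", "server", "cache", "cdn"],
    "Layout stability": ["layout shift", "dimensions", "cls"],
}

def bucket_for(audit_title: str) -> str:
    title = audit_title.lower()
    for bucket, keywords in BUCKETS.items():
        if any(keyword in title for keyword in keywords):
            return bucket
    return "Unclassified"
```

Anything landing in "Unclassified" is a prompt to read the audit properly rather than a reason to ignore it.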
Assign every issue a priority:
P1 — Fix first: Issues affecting LCP, INP, or CLS on your high-value pages. These directly impact rankings and conversions.
P2 — Fix next sprint: Template-level fixes that improve multiple pages at once (font loading, caching headers, image pipeline). Medium effort, high leverage.
P3 — Later: Secondary Lighthouse recommendations that won't meaningfully move your Core Web Vitals. Tidying up, not solving problems.
Assign each task an owner (content team, marketing team, developer, agency) and a rough effort estimate (hours or days).
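Once every issue has a priority, an owner, and an effort estimate, the backlog orders itself. A sketch with a hypothetical task list:

```python
# Hypothetical backlog entries; the task names are examples only.
backlog = [
    {"task": "Preload LCP hero image", "priority": "P1", "effort_hours": 2, "owner": "developer"},
    {"task": "Remove unused chat widget script", "priority": "P1", "effort_hours": 1, "owner": "marketing"},
    {"task": "Add font-display: swap", "priority": "P2", "effort_hours": 3, "owner": "developer"},
    {"task": "Minify legacy CSS", "priority": "P3", "effort_hours": 4, "owner": "developer"},
]

# Sort by priority first ("P1" < "P2" < "P3" lexicographically),
# then by effort so quick wins surface within each tier.
ordered = sorted(backlog, key=lambda t: (t["priority"], t["effort_hours"]))
```

Sorting by effort within each priority tier means the one-hour P1 tasks get done before the two-day P1 tasks, which is usually the fastest route to a visible improvement.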
For a copy-paste version of this backlog template: Website performance audit checklist + report template.
This is normal for lab data. Mitigate it by:
Mobile devices have less CPU power, slower networks, and tighter performance budgets. Heavy JavaScript, large images, and complex animations are far more punishing on mobile — which is where the majority of Australian business website traffic arrives.
It usually means one of three things: you're loading a large framework or library and only using a small part of it; you have third-party scripts adding code that isn't needed on that page; or your CMS or page builder is shipping JavaScript for features that don't exist on that specific page.
If your platform is generating the problem, optimisation has a ceiling. A rebuild on a modern framework may be the practical solution. Start here: Website Redesign.
WordPress can perform well on a lean setup. If you're using a page builder (Elementor, Divi, WPBakery) and many plugins, you're likely hitting an architectural ceiling. Two options: a lean WordPress rebuild with a custom theme and no page builder, or a migration to a modern framework. Next.js vs WordPress vs Webflow helps you decide which direction makes sense.
A good performance brief for a developer includes:
This is the difference between "make it faster" (vague) and a scoped engagement that a developer can quote and deliver.
CrUX (Chrome UX Report) is aggregated real-user field data. Lighthouse is a lab test for debugging. Use CrUX to understand reality; use Lighthouse to find and fix the cause. Always validate improvements with field data, not just an improved Lighthouse score.
Five is the right starting point. It's enough to identify patterns (e.g., every page has the same slow third-party script) without overwhelming you with data. If patterns vary significantly across pages, expand the audit.
Quarterly is a solid default for most business sites. Run one after any major site change, redesign, or significant addition to your marketing tag stack — these are the most common causes of performance regressions.
For image optimisation and script removal, many businesses can handle this in-house. For caching, code-level changes, and template fixes, developer support is typically needed. If you want professional audit and implementation end-to-end: Website Performance service.
Start with PageSpeed Insights on your top five pages, note your weakest metric, and build your first fix list using the process above. Then use the checklist to structure it: Website performance audit checklist + report template.
If you want the full performance playbook: Website Performance & Core Web Vitals: the Australian business guide. Or if you'd like a professional audit with a developer-ready backlog, get in touch.
Let's talk about your project and how we can help you build a website that actually performs.