Frontend Developer Interview Questions
32 questions — 8 easy · 17 medium · 7 hard
HTML & CSS (8)
The CSS box model describes how elements are sized:
- Content — the actual content area
- Padding — space between content and border
- Border — the element's border
- Margin — space outside the border
content-box (default):
- width/height apply to the content area only
- Total size = width + padding + border
border-box:
- width/height include content + padding + border
- Total size = width (padding and border are included)
*, *::before, *::after {
box-sizing: border-box;
}
Most developers prefer border-box globally because it makes sizing more predictable — the element never exceeds the declared width.
Follow-up: Which box-sizing do you prefer to use globally and why?
Flexbox — one-dimensional layout (row OR column):
- Best for distributing items along a single axis
- Great for navigation bars, card rows, centering content
- Items can wrap but layout logic is per-line
CSS Grid — two-dimensional layout (rows AND columns):
- Best for complex page layouts with rows and columns
- Great for page skeletons, dashboards, image galleries
- Explicit control over both axes simultaneously
/* Flexbox — single row of cards */
.card-row {
display: flex;
gap: 1rem;
flex-wrap: wrap;
}
/* Grid — full page layout */
.page {
display: grid;
grid-template-columns: 250px 1fr;
grid-template-rows: auto 1fr auto;
}
Yes, they combine well — use Grid for the overall page structure and Flexbox for component-level alignment within grid cells.
Follow-up: Can you combine them in a single layout?
Core techniques:
- Viewport meta tag:
<meta name="viewport" content="width=device-width, initial-scale=1.0">
- Media queries:
.container { padding: 1rem; }
@media (min-width: 768px) {
.container { padding: 2rem; max-width: 720px; }
}
- Fluid units: %, rem, em, vw, vh, clamp()
- Flexible layouts: Flexbox and CSS Grid with fr units
- Responsive images: srcset, <picture>, max-width: 100%
Mobile-first approach:
- Write base styles for mobile, then add complexity with min-width queries
- Benefits: simpler base CSS, progressive enhancement, better mobile performance
- Mobile styles load first — no unnecessary overrides on small screens
Other considerations:
- Touch targets minimum 44x44px
- Readable font sizes (minimum 16px base)
- Test on real devices, not just browser resize
Follow-up: What is the mobile-first approach and why is it recommended?
WCAG (Web Content Accessibility Guidelines) defines how to make web content accessible to people with disabilities. Built on four principles — POUR:
- Perceivable — content must be presentable in ways users can perceive
- Operable — UI must be navigable and usable
- Understandable — content and UI must be understandable
- Robust — content must work with assistive technologies
Key practices:
- Semantic HTML (<nav>, <main>, <button>, <h1>–<h6>)
- Alt text for images
- Sufficient color contrast (4.5:1 for normal text)
- Keyboard navigation for all interactive elements
- ARIA attributes when semantic HTML is insufficient
- Focus management and visible focus indicators
- Form labels associated with inputs
- Skip navigation links
Testing tools:
- Browser DevTools accessibility panel
- axe, Lighthouse, WAVE browser extensions
- Screen reader testing (VoiceOver, NVDA)
- Keyboard-only navigation testing
Follow-up: How do you test for accessibility issues?
CSS specificity determines which rule wins when multiple selectors target the same element. It is calculated as a four-part value (inline, IDs, classes/attributes/pseudo-classes, elements/pseudo-elements):
| Selector | Specificity |
|---|---|
| * | 0,0,0,0 |
| p | 0,0,0,1 |
| .card | 0,0,1,0 |
| #header | 0,1,0,0 |
| style="" | 1,0,0,0 |
/* Specificity: 0,0,1,1 */
p.intro { color: blue; }
/* Specificity: 0,0,2,0 — wins */
.section .intro { color: red; }
When rules have equal specificity, the last declared rule wins (source order).
Resolving conflicts:
- Avoid deep selector nesting — it inflates specificity unnecessarily
- Use BEM or CSS Modules to keep all selectors at a single class level
- Prefer adding a class over increasing specificity with IDs or nesting
!important overrides all specificity. Use it only for utility classes (e.g., Tailwind's ! prefix) or to override third-party styles. Avoid it in your own component styles — it makes future overrides impossible without another !important.
Follow-up: What is the !important rule and when is it acceptable to use it?
CSS custom properties (also called CSS variables) are properties prefixed with -- that can be reused throughout a stylesheet:
:root {
--color-primary: #3b82f6;
--spacing-md: 1rem;
}
.button {
background: var(--color-primary);
padding: var(--spacing-md);
}
Key differences from Sass/Less variables:
| Feature | CSS Variables | Sass/Less Variables |
|---|---|---|
| Runtime | Live in browser | Compiled away |
| Cascade | Yes — inherit through DOM | No |
| JS access | getComputedStyle() / setProperty() | Not accessible |
| Scope | Element-scoped | File-scoped |
Theming with CSS variables:
:root { --bg: #ffffff; --text: #111111; }
[data-theme='dark'] { --bg: #111111; --text: #ffffff; }
body { background: var(--bg); color: var(--text); }
Switching themes is as simple as toggling the data-theme attribute — no JavaScript style manipulation needed. CSS variables can also be updated via JavaScript:
document.documentElement.style.setProperty('--color-primary', '#10b981');
Follow-up: How would you use CSS custom properties to implement a theming system?
Responsive images ensure users download an appropriately sized image for their device, saving bandwidth and improving performance.
srcset for resolution switching — same image, different sizes:
<img
src="hero-800.jpg"
srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
sizes="(max-width: 600px) 100vw, 800px"
alt="Hero image"
/>
- srcset lists candidate images with their intrinsic widths (w descriptor)
- sizes tells the browser how wide the image will be rendered at each breakpoint
- The browser picks the best candidate based on device pixel ratio and layout width
<picture> for art direction — different images for different viewports:
<picture>
<source media="(max-width: 600px)" srcset="hero-portrait.webp">
<source srcset="hero-landscape.webp">
<img src="hero-landscape.jpg" alt="Hero">
</picture>
Format selection with <picture>:
<picture>
<source type="image/avif" srcset="photo.avif">
<source type="image/webp" srcset="photo.webp">
<img src="photo.jpg" alt="Photo">
</picture>
The browser picks the first <source> it supports. Always include a fallback <img> for browsers that do not support <picture>.
Follow-up: What is the difference between resolution switching and art direction?
CSS Transitions — animate between two states when a property changes:
.button {
background: blue;
transition: background 200ms ease, transform 200ms ease;
}
.button:hover {
background: darkblue;
transform: scale(1.05);
}
Use transitions for: hover effects, state changes (open/closed), simple A-to-B animations.
CSS Animations — define keyframes for complex, multi-step, looping animations:
@keyframes spin {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
.loader {
animation: spin 1s linear infinite;
}
Use animations for: loaders, attention-grabbing effects, complex sequences.
Performance: Animate only transform and opacity — they are composited on the GPU and skip layout and paint steps.
Accessibility — prefers-reduced-motion:
@media (prefers-reduced-motion: reduce) {
*, *::before, *::after {
animation-duration: 0.01ms !important;
transition-duration: 0.01ms !important;
}
}
Users who have enabled "reduce motion" in their OS settings should not see spinning, bouncing, or parallax animations, which can trigger vestibular disorders.
Follow-up: How do you ensure animations are accessible for users with vestibular disorders?
JavaScript (7)
Event delegation uses event bubbling to handle events on a parent element instead of attaching listeners to each child.
const list = document.getElementById('todo-list');
list.addEventListener('click', (event) => {
if (event.target.matches('.delete-btn')) {
const item = event.target.closest('.todo-item');
item.remove();
}
});
How it works:
- An event fires on the target element (e.g., a button)
- The event bubbles up through the DOM tree
- A parent listener catches it and checks event.target
Benefits:
- One listener instead of N listeners — less memory
- Automatically works for dynamically added elements
- Simpler cleanup — remove one listener instead of many
Key methods: event.target (clicked element), event.target.matches() (check selector), event.target.closest() (find ancestor).
Follow-up: What are the performance benefits of event delegation in a large list?
| Feature | var | let | const |
|---|---|---|---|
| Scope | Function | Block | Block |
| Hoisting | Yes (undefined) | Yes (TDZ) | Yes (TDZ) |
| Re-declaration | Yes | No | No |
| Re-assignment | Yes | Yes | No |
if (true) {
var a = 1;
let b = 2;
const c = 3;
}
console.log(a); // 1 — var leaks out of block
console.log(b); // ReferenceError — let is block-scoped
console.log(c); // ReferenceError — const is block-scoped
TDZ (Temporal Dead Zone): let and const are hoisted but cannot be accessed before declaration — accessing them throws a ReferenceError.
Best practice: Use const by default, let when reassignment is needed, avoid var.
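The scoping difference is easiest to see in a loop: var shares one binding across all iterations, while let creates a fresh binding per iteration. A small sketch:

```javascript
// var is function-scoped: every callback closes over the SAME variable,
// which holds its final value (3) by the time the callbacks run.
const withVar = [];
for (var i = 0; i < 3; i++) withVar.push(() => i);

// let is block-scoped: each iteration gets its OWN binding of j.
const withLet = [];
for (let j = 0; j < 3; j++) withLet.push(() => j);

console.log(withVar.map(fn => fn())); // [3, 3, 3]
console.log(withLet.map(fn => fn())); // [0, 1, 2]
```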
A Promise represents a value that may be available now, later, or never. It has three states: pending, fulfilled, rejected.
const promise = new Promise((resolve, reject) => {
setTimeout(() => resolve('done'), 1000);
});
promise
.then(result => console.log(result))
.catch(error => console.error(error));
async/await is syntactic sugar over Promises — it makes asynchronous code look synchronous:
async function fetchUser(id) {
try {
const response = await fetch(`/api/users/${id}`);
const user = await response.json();
return user;
} catch (error) {
console.error('Failed to fetch user:', error);
throw error;
}
}
Key points:
- async functions always return a Promise
- await pauses execution until the Promise settles
- Error handling uses try/catch instead of .catch()
- Promise.all() runs multiple promises in parallel
Follow-up: How do you handle errors with async/await?
The DOM (Document Object Model) is a tree-structured representation of an HTML document that JavaScript can read and modify.
Browser rendering pipeline:
- Parse HTML → build the DOM tree
- Parse CSS → build the CSSOM (CSS Object Model)
- Combine DOM + CSSOM → Render Tree (only visible elements)
- Layout (reflow) — calculate position and size of each element
- Paint — fill in pixels (colors, borders, shadows, text)
- Composite — combine layers and display on screen
Reflow vs Repaint:
- Reflow — layout recalculation (triggered by size/position changes). Expensive.
- Repaint — visual update without layout change (e.g., color change). Cheaper.
Performance tips:
- Batch DOM reads and writes separately
- Use transform and opacity for animations (GPU-accelerated, skip layout)
- Avoid reading layout properties (e.g., offsetHeight) between writes
Follow-up: What is the difference between reflow and repaint?
Map and Set hold strong references to their keys/values — the garbage collector cannot reclaim those objects as long as the collection exists.
WeakMap and WeakSet hold weak references — if the only reference to an object is inside a WeakMap/WeakSet, it can be garbage collected.
Key differences:
| Feature | Map | WeakMap |
|---|---|---|
| Key types | Any | Objects only |
| Garbage collection | Strong reference | Weak reference |
| Iterable | Yes | No |
| .size property | Yes | No |
Real-world use case — private data per instance:
const privateData = new WeakMap();
class Controller {
constructor(element) {
privateData.set(this, { clickCount: 0, element });
}
handleClick() {
const data = privateData.get(this);
data.clickCount++;
}
}
When the Controller instance is discarded, the associated data in privateData is automatically garbage collected — no memory leak.
WeakSet use case — tracking visited nodes during a DOM traversal without preventing garbage collection when nodes are removed from the document.
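That WeakSet pattern can be sketched as follows; visitOnce is a hypothetical helper, with plain objects standing in for DOM nodes:

```javascript
// Track which nodes we have already processed without keeping
// them alive: entries for unreachable nodes can be collected.
const visited = new WeakSet();

function visitOnce(node) {
  if (visited.has(node)) return false; // already seen, skip
  visited.add(node);
  return true;
}

const node = { id: 'a' };
console.log(visitOnce(node)); // true  (first visit)
console.log(visitOnce(node)); // false (already visited)
// A plain Set would pin `node` in memory even after it is discarded.
```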
Follow-up: Give a real-world use case where WeakMap is the right choice.
A generator function is a function that can pause its execution and resume later, yielding values one at a time.
function* counter(start = 0) {
while (true) {
yield start++;
}
}
const gen = counter(1);
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
How they work:
- function* declares a generator
- yield pauses execution and returns a value to the caller
- .next() resumes from where it paused
- Returns an iterator with { value, done } objects
Problems generators solve:
- Lazy evaluation — produce values on demand, not all at once (infinite sequences, large datasets)
- Custom iterators — implement the iterator protocol without boilerplate
- Control flow — libraries like redux-saga use generators to manage complex async flows
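As a sketch of the custom-iterator point: a generator used as Symbol.iterator implements the protocol without any next()/done boilerplate. Range here is a made-up class for illustration:

```javascript
class Range {
  constructor(from, to) {
    this.from = from;
    this.to = to;
  }
  // Any object with a [Symbol.iterator] generator works with
  // for...of, spread, and Array.from.
  *[Symbol.iterator]() {
    for (let value = this.from; value <= this.to; value++) {
      yield value;
    }
  }
}

console.log([...new Range(1, 4)]); // [1, 2, 3, 4]
```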
Async generators combine generators with async/await for streaming data:
async function* streamLines(url) {
const response = await fetch(url);
const reader = response.body.getReader();
// yield decoded chunks...
}
for await (const line of streamLines('/api/logs')) {
console.log(line);
}
Follow-up: How do generators relate to async iterators?
CommonJS (CJS) — Node.js original module system:
// Export
module.exports = { add, subtract };
// Import
const { add } = require('./math');
- require() is synchronous and dynamic — can be called anywhere, conditionally
- Module graph is resolved at runtime
ES Modules (ESM) — the standard JavaScript module system:
// Export
export function add(a, b) { return a + b; }
export default class Calculator {}
// Import
import { add } from './math.js';
import Calculator from './math.js';
- import/export are static — must be at the top level
- Module graph is resolved at parse time (before execution)
- Supports import() for dynamic/lazy loading
- Works natively in browsers
Tree-shaking removes unused exports from the final bundle. It requires static analysis of the import graph — the bundler must know at build time which exports are used. ESM's static import/export makes this possible. CommonJS's dynamic require() makes it impossible — the bundler cannot know which exports are needed until runtime.
Follow-up: What is tree-shaking and why does it only work with ES modules?
Frontend Architecture (5)
REST:
- Multiple endpoints (e.g., /users, /users/1/posts)
- Server defines the response shape
- Uses HTTP methods (GET, POST, PUT, DELETE)
- Easy to cache (HTTP caching, CDN)
- Can lead to over-fetching or under-fetching
GraphQL:
- Single endpoint (/graphql)
- Client specifies exactly what data it needs
- Typically uses POST for all operations
- Solves over-fetching and under-fetching
- Built-in type system and introspection
Choose REST when:
- Simple CRUD operations
- Heavy caching requirements
- File uploads/downloads
- Team is more familiar with REST
Choose GraphQL when:
- Complex data relationships
- Multiple clients need different data shapes
- Reducing number of network requests matters
- Rapid frontend iteration without backend changes
Caching challenge: REST uses URL-based HTTP caching. GraphQL uses a single endpoint, so HTTP caching does not work — you need client-side caching libraries (Apollo, urql).
Follow-up: What are the challenges of caching with GraphQL compared to REST?
Browser DevTools panels:
- Elements — inspect and modify DOM/CSS in real time
- Console — run JavaScript, view errors and logs
- Network — monitor requests, check response times, find failed calls
- Sources — set breakpoints, step through code, watch variables
- Performance — record and analyze runtime performance, find long tasks
- Application — inspect localStorage, sessionStorage, cookies, service workers
- Lighthouse — audit performance, accessibility, SEO, best practices
Debugging workflow for a slow page:
- Open Performance tab, record the interaction
- Look for long tasks (>50ms) in the flame chart
- Check Network tab for slow or blocking requests
- Use Coverage tab to find unused CSS/JS
- Check for layout thrashing in the Performance panel
- Use console.time() / console.timeEnd() to measure specific operations
Other tools:
- React DevTools / Vue DevTools for component state
- Redux DevTools for state management debugging
- debugger statement for programmatic breakpoints
Follow-up: How would you debug a performance issue where the page feels slow?
Micro-frontends apply the microservices idea to the frontend — the UI is split into independently deployable pieces owned by separate teams.
Common integration approaches:
- Build-time integration — publish packages to npm, compose at build (tight coupling)
- Runtime integration — load remote modules at runtime (Module Federation, <script> tags)
- iframe isolation — strong isolation but poor UX and communication overhead
- Web Components — framework-agnostic, encapsulated custom elements
Benefits:
- Independent deployments per team
- Technology diversity (different frameworks per team)
- Isolated failure domains
- Scales organizational autonomy
Trade-offs:
- Bundle duplication (React shipped by multiple remotes)
- Increased operational complexity
- Shared state and routing become cross-team concerns
- Performance overhead from multiple network requests
- Styling conflicts without strict isolation
Module Federation (Webpack 5 / Vite plugin) allows one build to expose modules that another build consumes at runtime:
// In remote app's vite.config.ts
federation({
name: 'remoteApp',
exposes: { './Button': './src/Button.tsx' },
shared: ['react', 'react-dom'],
})
// In host app
import Button from 'remoteApp/Button';
Shared dependencies (like React) are loaded once even if multiple remotes depend on them.
Follow-up: How does Module Federation in Webpack/Vite enable micro-frontends?
State management choice depends on the type and scope of state:
Categories of state:
- UI state — open/closed, selected tab, form values → useState, useReducer
- Server state — data fetched from an API → TanStack Query, SWR
- Global client state — user session, theme, cart → Zustand, Jotai, Redux Toolkit
- URL state — filters, pagination → router search params
When built-in state is enough:
- State is local to a component or a small subtree
- No need to share state across distant components
- Application is small with a simple data model
Reaching for external libraries:
- Many components need the same data and prop drilling becomes unwieldy
- Complex update logic with many actions (reducers help organize this)
- Need time-travel debugging, middleware, or devtools
- Server data with caching, invalidation, and background refetch (use TanStack Query instead of Redux for this)
Modern recommendation:
- TanStack Query for server state — handles caching, loading, error states automatically
- Zustand for simple global client state — minimal boilerplate
- Redux Toolkit for large teams needing strict patterns and excellent devtools
- Avoid putting server data in Redux — it duplicates what TanStack Query does better
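For intuition, here is a minimal sketch of the external-store pattern that libraries such as Zustand formalize. This is not Zustand's actual API, just the underlying idea: state held in a closure plus subscribe/notify so many components can observe it without prop drilling.

```javascript
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      // Shallow-merge the update, then notify every subscriber
      state = { ...state, ...partial };
      listeners.forEach(listener => listener(state));
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe function
    },
  };
}

const store = createStore({ theme: 'light' });
store.subscribe(s => console.log('theme is now', s.theme));
store.setState({ theme: 'dark' }); // logs: theme is now dark
```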
Follow-up: When is React's built-in state (useState/useContext) enough, and when do you reach for an external library?
A monorepo is a single repository containing multiple projects (apps, libraries, services). It is the opposite of polyrepo, where each project has its own repository.
Benefits:
- Atomic commits — a single commit can change a shared library and all consumers simultaneously
- Code sharing — internal packages (UI library, utilities) without npm publishing
- Unified tooling — one lint config, one CI pipeline, consistent developer experience
- Easier refactoring — rename across all projects in one operation
- Dependency deduplication — shared node_modules reduces disk usage
Challenges:
- CI/CD performance — running all tests on every change is slow without incremental builds
- Access control — harder to restrict who can see/modify which parts
- Tooling complexity — requires monorepo-aware tools
- Git performance — large repos with long history can slow down git operations
Tools that make monorepos practical:
- Turborepo — task orchestration with remote caching; only rebuilds what changed
- Nx — more opinionated; built-in generators, dependency graph visualization, affected commands
- pnpm workspaces — fast installs, strict hoisting, efficient disk usage
- Changesets — manages versioning and changelogs for publishable packages
Affected builds — a key concept: only run tests/builds for packages that changed or depend on what changed. Turborepo and Nx both support this via dependency graph analysis.
Follow-up: What tools make monorepos practical at scale?
Accessibility (4)
Semantic HTML uses elements that convey meaning about the content they contain, rather than just defining presentation.
Non-semantic vs semantic:
<!-- Non-semantic -->
<div class="header">
<div class="nav">
<div onclick="goHome()">Home</div>
</div>
</div>
<!-- Semantic -->
<header>
<nav>
<a href="/">Home</a>
</nav>
</header>
Why it matters:
- Screen readers — announce elements by role. A <button> is announced as "button"; a <div> with onclick is announced as nothing useful.
- Keyboard navigation — <button>, <a>, <input> are natively focusable and respond to Enter/Space. <div> is not.
- Browser features — <form> enables submit-on-Enter; <details> gives a native disclosure widget.
- SEO — search engines use semantic structure to understand content hierarchy.
- Maintainability — code communicates intent to developers.
Common semantic elements:
- <header>, <footer>, <main>, <nav>, <aside> — landmarks
- <article>, <section> — content grouping
- <h1>–<h6> — heading hierarchy
- <button> vs <div onclick> — interactive controls
- <time datetime="2026-04-01"> — dates
- <figure> + <figcaption> — media with caption
Follow-up: Give an example of replacing a non-semantic element with a semantic one and explain the difference.
ARIA (Accessible Rich Internet Applications) is a set of HTML attributes that add semantic information for assistive technologies when native HTML cannot express it.
Three categories of ARIA attributes:
- Roles — what the element is: role="dialog", role="tablist", role="alert"
- Properties — describe characteristics: aria-label, aria-labelledby, aria-required
- States — reflect current state: aria-expanded, aria-checked, aria-disabled
When to use ARIA:
<!-- Custom toggle button — no native element for this pattern -->
<div
role="button"
tabindex="0"
aria-pressed="true"
aria-label="Toggle dark mode"
>
🌙
</div>
<!-- Live region — announce dynamic content -->
<div aria-live="polite" aria-atomic="true">
Item added to cart.
</div>
First rule of ARIA: Use native HTML elements when possible. <button> is better than <div role="button"> because you get keyboard support, focus management, and click handling for free.
"No ARIA is better than bad ARIA" — incorrect ARIA actively harms screen reader users. Examples of bad ARIA:
- role="button" without tabindex="0" — not keyboard accessible
- aria-label that contradicts visible text — confusing
- aria-hidden="true" on focused elements — traps focus in an invisible location
Only add ARIA when you fully understand what it communicates to assistive technology.
Follow-up: What does "no ARIA is better than bad ARIA" mean?
Proper focus management is critical for keyboard and screen reader users. When content appears dynamically, focus must be moved to the new content and returned when it closes.
Modal focus management pattern:
function openModal(modal) {
const trigger = document.activeElement;
modal.removeAttribute('hidden');
// Move focus to modal
const firstFocusable = modal.querySelector(
'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
);
firstFocusable?.focus();
// Return focus on close
modal.addEventListener('close', () => trigger.focus(), { once: true });
}
Focus trap — keeping focus inside the modal while it is open:
modal.addEventListener('keydown', (e) => {
if (e.key !== 'Tab') return;
const focusable = [...modal.querySelectorAll('button, [href], input, [tabindex]:not([tabindex="-1"])')].filter(el => !el.disabled);
const first = focusable[0];
const last = focusable[focusable.length - 1];
if (e.shiftKey && document.activeElement === first) {
e.preventDefault();
last.focus();
} else if (!e.shiftKey && document.activeElement === last) {
e.preventDefault();
first.focus();
}
});
Simpler approach: Use the native HTML <dialog> element — it handles focus trapping and focus restoration automatically in modern browsers.
Additional patterns:
- aria-modal="true" signals to screen readers that content outside is inert
- Use the inert attribute on background content to prevent interaction
- Close on Escape key and clicking the backdrop
Follow-up: What is a focus trap and how do you implement one?
WCAG color contrast requirements:
- AA (minimum): 4.5:1 for normal text, 3:1 for large text (18px+ or 14px+ bold)
- AAA (enhanced): 7:1 for normal text, 4.5:1 for large text
- UI components and graphics: 3:1 against adjacent colors
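These ratios come from the WCAG relative-luminance formula, which is simple enough to compute directly. A sketch handling 6-digit hex colors only:

```javascript
// WCAG relative luminance: linearize each sRGB channel, then
// weight by the eye's sensitivity to red, green, and blue.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map(i => {
    const channel = parseInt(hex.slice(i, i + 2), 16) / 255;
    return channel <= 0.03928
      ? channel / 12.92
      : Math.pow((channel + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1 to 21.
function contrastRatio(hexA, hexB) {
  const [hi, lo] = [luminance(hexA), luminance(hexB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio('#ffffff', '#000000')); // 21 (the maximum)
```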
Checking contrast:
/* Check: does this combination pass? */
.button {
background: #3b82f6; /* blue-500 */
color: #ffffff; /* white — ≈3.7:1, fails 4.5:1 for normal text ✗ (passes 3:1 for large text) */
}
.muted-text {
color: #9ca3af; /* gray-400 on white — FAILS 4.5:1 ✗ */
}
Tools for checking:
- Chrome DevTools — hover over color in Elements panel to see contrast ratio
- axe browser extension — flags failing contrast in one click
- Figma plugins (Stark, A11y Annotation Kit) — check contrast in design
- color-contrast() CSS function (experimental) — future native solution
Testing multi-theme contrast:
- Document all color token combinations that appear together (text on background, icon on button)
- Write automated tests using the axe-core library against each theme variant:
test('dark theme passes contrast', async () => {
document.body.setAttribute('data-theme', 'dark');
const results = await axe.run(document.body);
expect(results.violations.filter(v => v.id === 'color-contrast')).toHaveLength(0);
});
- Include contrast ratios as acceptance criteria in design tokens documentation
- Use Storybook with accessibility addon to surface violations per component story
Follow-up: How do you test color contrast in a design system with multiple themes?
Performance (4)
Core Web Vitals are Google's metrics for measuring real-world page experience. They directly affect search ranking.
The three metrics:
| Metric | Measures | Good threshold |
|---|---|---|
| LCP (Largest Contentful Paint) | Loading performance — when the largest visible element renders | ≤ 2.5s |
| INP (Interaction to Next Paint) | Responsiveness — delay from user interaction to visual update | ≤ 200ms |
| CLS (Cumulative Layout Shift) | Visual stability — unexpected layout shifts | ≤ 0.1 |
Measuring:
- Chrome DevTools → Lighthouse
- PageSpeed Insights (real user data via CrUX)
- web-vitals npm package for in-app measurement
- Google Search Console for field data
Improving LCP:
- Preload the hero image: <link rel="preload" as="image" href="hero.webp">
- Use fetchpriority="high" on the LCP image
- Use a CDN to serve static assets from edge locations
- Eliminate render-blocking resources
Improving INP:
- Break long JavaScript tasks with scheduler.yield() or setTimeout
- Defer non-critical scripts
- Avoid heavy synchronous work in event handlers
Improving CLS:
- Always set width and height on images and videos
- Reserve space for ads and embeds with min-height
min-height - Avoid inserting content above existing content after load
Follow-up: How does LCP differ from FCP, and what are typical causes of a poor LCP?
Code splitting divides the JavaScript bundle into smaller chunks that are loaded on demand, reducing the initial bundle size and time-to-interactive.
Lazy loading defers loading of a module until it is actually needed.
In React — React.lazy + Suspense:
import { lazy, Suspense } from 'react';
// Component is only loaded when first rendered
const HeavyChart = lazy(() => import('./HeavyChart'));
function Dashboard() {
return (
<Suspense fallback={<Spinner />}>
<HeavyChart />
</Suspense>
);
}
Route-based splitting — split at route boundaries (most impactful):
const Settings = lazy(() => import('./pages/Settings'));
const Profile = lazy(() => import('./pages/Profile'));
// Each route chunk only loads when the user navigates there
Component-based splitting — split individual heavy components (charts, editors, maps):
// Only load the rich text editor when the user clicks 'Edit'
const [showEditor, setShowEditor] = useState(false);
const RichEditor = showEditor ? lazy(() => import('./RichEditor')) : null;
Dynamic import is the underlying mechanism:
button.addEventListener('click', async () => {
const { initMap } = await import('./map.js');
initMap();
});
Bundlers (Vite, Webpack) automatically create separate chunks for each dynamic import.
Follow-up: What is the difference between route-based and component-based code splitting?
HTTP Caching — the browser caches responses based on headers:
Cache-Control: public, max-age=31536000, immutable
- max-age — seconds before the browser re-validates
- public — CDNs can cache it
- immutable — tells the browser never to revalidate (for content-hashed assets)
Typical caching strategy:
- HTML files: Cache-Control: no-cache — always revalidate (check for updates)
- JS/CSS/images with a content hash in the filename: max-age=31536000, immutable — cache forever
no-cache vs no-store:
- no-cache — the browser caches the response but must revalidate with the server before using it (sends a conditional request with If-None-Match / ETag). Good for HTML.
- no-store — nothing is saved to cache at all. Use for sensitive data (bank account pages).
Service Workers — a JavaScript file that runs in a background thread and intercepts network requests:
self.addEventListener('fetch', (event) => {
event.respondWith(
caches.match(event.request).then(cached => cached ?? fetch(event.request))
);
});
Service worker caching strategies:
- Cache-first — serve from cache, fall back to network (fast, for static assets)
- Network-first — try network, fall back to cache (fresh data when online)
- Stale-while-revalidate — serve cached immediately, update in background
Service workers also enable offline support and background sync.
Follow-up: What is the difference between Cache-Control: no-cache and no-store?
Images are typically the largest assets on a web page and a leading cause of poor LCP scores. A systematic approach covers format, size, delivery, and loading.
Modern image formats (smallest to largest at equal quality):
- AVIF — best compression, broad browser support (2023+)
- WebP — good compression, universal browser support
- JPEG — photos; use when AVIF/WebP are not available
- PNG — lossless; use only for images needing transparency without quality loss
- SVG — icons and illustrations (vector, infinitely scalable)
Optimization techniques:
- Compress images (Squoosh, Sharp, imagemin)
- Serve correctly sized images with srcset — do not serve 2000px images in a 300px slot
- Use loading="lazy" for below-the-fold images
- Use fetchpriority="high" for the LCP image
- Serve from a CDN with image transformation (Cloudflare Images, Imgix)
- Use width and height attributes to prevent CLS
<img
src="photo.jpg"
srcset="photo-400.webp 400w, photo-800.webp 800w"
sizes="(max-width: 600px) 100vw, 800px"
width="800"
height="600"
loading="lazy"
alt="Description"
/>
<img> vs CSS background-image:
- Use <img> for content images — accessible (alt text), indexed by search engines, supports srcset/sizes
- Use background-image for decorative images — no need for alt text, easy to swap via CSS, good for patterns/gradients
Follow-up: When would you use CSS background-image versus an img element?
Tooling (4)
A bundler takes your source files (JS, CSS, images, etc.), resolves their dependencies, and produces optimized output files suitable for the browser.
Core bundler tasks:
- Module resolution (follow import statements)
- Transpilation (TypeScript → JS, JSX → JS via Babel/esbuild/SWC)
- Tree-shaking (remove unused exports)
- Code splitting (create async chunks)
- Asset optimization (minification, content hashing)
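The first task — module resolution — can be pictured as a depth-first walk of the import graph, emitting each module's dependencies before the module itself. A toy sketch (not how any real bundler is implemented; `resolveOrder` and the file names are illustrative):

```javascript
// Toy module resolution: `graph` maps each module to the modules it imports.
// A depth-first walk yields a valid execution order (dependencies first).
function resolveOrder(graph, entry, seen = new Set(), order = []) {
  if (seen.has(entry)) return order; // each module is bundled once
  seen.add(entry);
  for (const dep of graph[entry] ?? []) {
    resolveOrder(graph, dep, seen, order); // resolve dependencies first
  }
  order.push(entry); // then the module itself
  return order;
}

const graph = {
  'main.js': ['utils.js', 'app.js'],
  'app.js': ['utils.js'],
  'utils.js': [],
};
console.log(resolveOrder(graph, 'main.js')); // ['utils.js', 'app.js', 'main.js']
```

Real bundlers layer the remaining tasks (transpilation, tree-shaking, splitting) on top of exactly this kind of dependency graph.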
Webpack:
- Mature, highly configurable
- Bundles everything with a custom module graph
- Dev server rebuilds the entire affected bundle on file change
- Plugin ecosystem is enormous but configuration is complex
Vite:
- Dev server uses native ES modules in the browser — no bundling during development
- Only transforms the file you changed; the browser fetches individual modules via HTTP
- Dramatically faster cold start and HMR than Webpack (especially on large projects)
- Uses Rollup for production builds (optimized, tree-shaken bundles)
- Uses esbuild for dependency pre-bundling and TypeScript transpilation (10-100x faster than Babel)
HMR (Hot Module Replacement): Updates a changed module in the running app without a full page reload, preserving application state.
In Vite, HMR is precise — only the changed module and its direct importers are invalidated. The browser fetches just those modules over the existing ES module connection, making updates near-instant even in large apps.
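That invalidation rule — the changed module plus its direct importers — can be modeled with a reverse import map. A sketch of the idea only (function and file names are hypothetical; Vite's actual propagation also stops at modules that accept their own updates):

```javascript
// Model HMR invalidation: `importers` maps a module to the modules
// that import it. An update touches the changed module plus its
// direct importers — everything else keeps its state.
function invalidated(importers, changed) {
  return new Set([changed, ...(importers[changed] ?? [])]);
}

const importers = {
  'Button.jsx': ['Toolbar.jsx', 'Form.jsx'],
  'Toolbar.jsx': ['App.jsx'],
};
console.log([...invalidated(importers, 'Button.jsx')]);
// ['Button.jsx', 'Toolbar.jsx', 'Form.jsx'] — App.jsx is untouched
```

Because the invalidated set stays small, the browser re-fetches only a handful of modules per edit rather than a rebuilt bundle.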
Follow-up
Follow-up: What is HMR (Hot Module Replacement) and how does Vite implement it?
ESLint is a static analysis tool that finds and fixes problems in JavaScript/TypeScript code based on configurable rules.
What ESLint catches:
- Code quality issues (unused variables, unreachable code)
- Style and formatting (with eslint-config-prettier deferring to Prettier)
- Potential bugs (no-implicit-coercion, eqeqeq)
- React-specific issues (react-hooks/rules-of-hooks, react/jsx-key)
- Accessibility issues (jsx-a11y plugin)
Modern flat config (eslint.config.js — ESLint v9+):
import js from '@eslint/js';
import tsParser from '@typescript-eslint/parser';
import tsPlugin from '@typescript-eslint/eslint-plugin';
export default [
js.configs.recommended,
{
files: ['**/*.ts', '**/*.tsx'],
languageOptions: { parser: tsParser },
plugins: { '@typescript-eslint': tsPlugin },
rules: {
'@typescript-eslint/no-explicit-any': 'error',
'@typescript-eslint/no-unused-vars': 'error',
},
},
];
ESLint vs TypeScript:
- TypeScript checks types — it knows the shape and type of every value and catches type errors
- ESLint checks patterns — it finds style issues, potential bugs, and enforces conventions that type checking cannot
They complement each other. TypeScript catches obj.nonExistentProp; ESLint catches if (x == null) (should be ===) or a useEffect with a missing dependency.
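To make the division of labor concrete, here is code that a type checker accepts without complaint but that eqeqeq (without its "smart" option) would flag — the loose comparison is valid and well-typed, ESLint only questions the pattern. The function name is illustrative:

```javascript
// Type-correct code that an ESLint rule still flags:
// `x == null` is legal JS and fine to TypeScript, but eqeqeq
// warns because loose equality hides intent — it matches BOTH
// null and undefined via coercion.
function isMissing(x) {
  return x == null;
}

console.log(isMissing(undefined)); // true — == null also matches undefined
console.log(isMissing(null));      // true
console.log(isMissing(0));         // false — 0 is a real value
```

Whether this particular pattern is a bug or a deliberate null-or-undefined check is a convention question, not a type question — which is exactly why it lives in the linter.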
Follow-up
Follow-up: How does ESLint differ from a type checker like TypeScript?
The testing pyramid describes the ideal distribution of tests: many fast unit tests at the base, fewer integration tests in the middle, and a small number of slow E2E tests at the top.
Unit tests:
- Test a single function, class, or component in isolation
- Dependencies are mocked
- Fast (milliseconds), run on every save
- Tools: Vitest, Jest
test('formats currency', () => {
expect(formatCurrency(1234.5, 'USD')).toBe('$1,234.50');
});
Integration tests:
- Test multiple units working together (component + hooks + store, or API route + database)
- Some real dependencies, some mocked
- Slower than unit tests but faster than E2E
- Tools: Testing Library + Vitest, Supertest for APIs
test('submitting the form saves the user', async () => {
render(<UserForm />);
await userEvent.type(screen.getByLabelText('Name'), 'Alice');
await userEvent.click(screen.getByRole('button', { name: 'Save' }));
expect(await screen.findByText('User saved')).toBeInTheDocument();
});
End-to-end (E2E) tests:
- Test a full user flow through the real browser against a real (or test) backend
- Slowest and most brittle — reserve for critical paths
- Tools: Playwright, Cypress
What to test where:
- Complex business logic → unit tests
- Component behavior with user interaction → integration tests
- Critical user journeys (signup, checkout, core workflow) → E2E tests
- Do not duplicate tests across levels — test each behavior at the lowest level that gives confidence
Follow-up
Follow-up: What is the testing pyramid and how does it guide test distribution?
A CI/CD pipeline automates the process of validating and deploying code changes. For frontend projects it typically runs on every pull request and on merges to the main branch.
Typical pipeline stages:
Push / PR
↓
[Install] — npm ci (fast, reproducible)
↓
[Quality] — type check + lint (run in parallel)
↓
[Test] — unit tests + integration tests
↓
[Build] — production bundle
↓
[Preview deploy] — deploy PR to staging URL (Vercel, Cloudflare Pages)
↓
[E2E tests] — Playwright against staging URL
↓
[Merge to main]
↓
[Production deploy]
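The first few stages of that pipeline can be sketched as a GitHub Actions workflow. This is a minimal illustration only — the script names (npm run typecheck, npm run lint) are placeholders that assume matching entries in package.json:

```yaml
# .github/workflows/ci.yml — sketch of the Install/Quality/Test stages
name: ci
on: [pull_request]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci                # fast, reproducible install
      - run: npm run typecheck     # placeholder script name
      - run: npm run lint          # placeholder script name
  test:
    runs-on: ubuntu-latest        # runs in parallel with `quality`
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```

Build, preview deploy, and E2E stages would follow as further jobs gated on these via `needs:`.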
Preventing broken builds from reaching production:
- Branch protection rules — require all CI checks to pass before merge
- Required reviewers — at least one approval before merge
- Preview deployments — test in a production-like environment before merging
- Deployment gates — E2E tests run against the preview; merge is blocked if they fail
- Automatic rollback — if error rate spikes post-deploy, revert to the previous deployment automatically
Performance budgets in CI:
// package.json — fail the build if the bundle exceeds the limit
"bundlesize": [
  { "path": "./dist/index.js", "maxSize": "150kb" }
]
Tools: GitHub Actions (most common), Vercel/Netlify for automatic preview deployments, Playwright for E2E in CI, Turborepo for affected-only builds in monorepos.
Follow-up
Follow-up: How would you prevent a broken build from reaching production?