JavaScript Interview Questions
45 questions — 14 easy · 18 medium · 13 hard
Fundamentals (9)
A closure is a function that retains access to variables from its lexical scope even after the outer function has returned. This happens because JavaScript functions carry a reference to their surrounding environment, not a snapshot of values.
Closures are essential for data privacy, factory functions, and callback patterns. They let you create private state without classes:
function createCounter() {
  let count = 0;
  return {
    increment: () => ++count,
    getCount: () => count,
  };
}
const counter = createCounter();
counter.increment();
counter.getCount(); // 1
The classic loop bug occurs with var because all iterations share the same variable. Using let (block-scoped) or an IIFE fixes it by creating a new binding per iteration.
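The difference can be seen without timers by collecting closures in an array (a minimal sketch):

```javascript
// With var, every closure captures the same function-scoped variable
const withVar = [];
for (var i = 0; i < 3; i++) withVar.push(() => i);
console.log(withVar.map(f => f())); // [3, 3, 3]

// With let, each iteration gets a fresh binding
const withLet = [];
for (let j = 0; j < 3; j++) withLet.push(() => j);
console.log(withLet.map(f => f())); // [0, 1, 2]
```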
var is function-scoped and hoisted with an initial value of undefined. let and const are block-scoped and hoisted but not initialized, creating a Temporal Dead Zone (TDZ) where accessing them before declaration throws a ReferenceError.
const prevents reassignment of the binding, but does not make the value immutable — object properties can still be mutated.
const obj = { a: 1 };
obj.a = 2; // works fine
obj = {}; // TypeError: Assignment to constant variable
In modern codebases, there is almost no reason to use var. The only edge case is when you intentionally want function-scoped hoisting behavior, but even then let with explicit scoping is clearer. Default to const, use let when reassignment is needed, avoid var.
In regular functions, this is determined by how the function is called: as a method it refers to the object, as a standalone call it's undefined in strict mode (or globalThis in sloppy mode), and with new it refers to the newly created instance.
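A small sketch of these call-site rules (names are illustrative):

```javascript
'use strict';
const user = {
  name: 'Ada',
  who() { return this ? this.name : undefined; },
};

console.log(user.who()); // 'Ada' (method call: this is the object)
const detached = user.who;
console.log(detached()); // undefined (standalone call in strict mode)
console.log(detached.call({ name: 'Bob' })); // 'Bob' (explicit binding)

function Person(name) { this.name = name; }
console.log(new Person('Eve').name); // 'Eve' (new binds this to the instance)
```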
Arrow functions do not have their own this. They capture this from the enclosing lexical scope at the time they are defined. This makes them ideal for callbacks where you want to preserve the outer this:
class Timer {
  constructor() {
    this.seconds = 0;
    setInterval(() => {
      this.seconds++; // "this" refers to the Timer instance
    }, 1000);
  }
}
Calling call, apply, or bind on an arrow function has no effect on its this value — it remains lexically bound. This is why arrow functions should not be used as methods on objects or constructors.
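A minimal sketch of this lexical binding surviving call and bind (makeArrow is illustrative):

```javascript
function makeArrow() {
  // the arrow captures `this` from makeArrow's own call
  return () => this.label;
}
const arrow = makeArrow.call({ label: 'lexical' });

console.log(arrow());                          // 'lexical'
console.log(arrow.call({ label: 'other' }));   // still 'lexical': call is ignored
console.log(arrow.bind({ label: 'bound' })()); // still 'lexical': bind is ignored
```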
In classical inheritance (Java, C#), classes are blueprints and instances are created from them in a rigid hierarchy. In JavaScript, objects inherit directly from other objects through the prototype chain. Every object has an internal [[Prototype]] link to another object, and property lookups walk this chain until they find the property or reach null.
const animal = { speak() { return 'sound'; } };
const dog = Object.create(animal);
dog.bark = function() { return 'woof'; };
dog.bark(); // 'woof' (own property)
dog.speak(); // 'sound' (found on prototype)
ES6 class syntax is syntactic sugar over prototypal inheritance — it doesn't change the underlying mechanism. Object.create(null) creates an object with no prototype, useful for clean dictionaries without inherited properties like toString or hasOwnProperty interfering with key lookups.
[] == false evaluates to true. The == operator applies the Abstract Equality Comparison algorithm: both sides are coerced to numbers. false becomes 0. The array [] is first converted to a primitive via ToPrimitive, which calls [].toString() producing "", and "" coerced to a number is 0. So 0 == 0 is true.
JavaScript has two kinds of coercion: implicit (triggered by operators like ==, +, if) and explicit (using Number(), String(), Boolean()). The rules for implicit coercion are complex and often surprising:
[] == false // true
[] == ![] // true (! coerces [] to boolean true, negates to false)
'' == 0 // true
null == undefined // true
null == 0 // false
Use === (strict equality) by default to avoid coercion. The only commonly accepted use of == is value == null to check for both null and undefined in one expression.
JavaScript offers several ways to create objects, each suited to different situations.
Object literal is the most common for simple one-off objects:
const user = { name: 'Alice', age: 30 };
Constructor function or ES6 class when you need multiple instances with shared behavior:
class User {
  constructor(name) { this.name = name; }
  greet() { return `Hi, ${this.name}`; }
}
Object.create(proto) when you want explicit control over the prototype chain, especially useful for inheritance without constructors.
Factory function when you want to avoid new and this, often preferred in functional styles:
function createUser(name) {
  return { name, greet: () => `Hi, ${name}` };
}
Object.assign or spread for composing objects from multiple sources. Use Object.create(null) for plain dictionaries without prototype pollution. In practice, object literals and classes cover 90% of use cases.
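A quick sketch of composing defaults with spread (names are illustrative):

```javascript
const defaults = { theme: 'light', fontSize: 14, sidebar: true };
const userPrefs = { theme: 'dark' };

// Later properties win, so user preferences override defaults
const settings = { ...defaults, ...userPrefs };
console.log(settings); // { theme: 'dark', fontSize: 14, sidebar: true }

// Object.assign does the same, mutating its first argument
const same = Object.assign({}, defaults, userPrefs);
```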
JavaScript uses automatic memory management with a garbage collector. The primary algorithm in modern engines (V8, SpiderMonkey) is mark-and-sweep: the GC starts from root references (global object, stack variables), marks all reachable objects, and sweeps (frees) unreachable ones. V8 also uses a generational approach — a fast "minor GC" for short-lived objects in the young generation and a slower "major GC" for long-lived objects in the old generation.
Common causes of memory leaks in SPAs include:
// Forgotten event listeners
element.addEventListener('click', handler);
// If element is removed from DOM but handler still references it

// Closures holding large scopes
function process(hugeData) {
  return function() {
    // hugeData is retained even if unused
  };
}

// Detached DOM nodes stored in variables
const cache = document.getElementById('temp');
document.body.removeChild(cache); // still referenced by 'cache'
Other sources include uncleared timers (setInterval), growing Maps/Sets, and global variable accumulation. Use browser DevTools Memory tab with heap snapshots and allocation timelines to identify leaks.
Hoisting is JavaScript's behavior of moving declarations to the top of their scope during the compilation phase, before code execution. However, only the declarations are hoisted, not the initializations.
var declarations are hoisted and initialized to undefined. Function declarations are fully hoisted (both name and body). let and const are hoisted but enter a Temporal Dead Zone until the declaration is reached:
console.log(a); // undefined (var is hoisted)
console.log(b); // ReferenceError: Cannot access 'b' before initialization
console.log(greet()); // 'hello' (function declaration fully hoisted)
console.log(farewell); // undefined (var hoisted, but function not assigned yet)
var a = 1;
let b = 2;
function greet() { return 'hello'; }
var farewell = function() { return 'bye'; };
Function declarations are fully hoisted because the engine processes them during compilation and binds the entire function object. Function expressions assigned to var only hoist the variable declaration (as undefined), and the function assignment happens at runtime. This is why function declarations can be called before they appear in code, but function expressions cannot.
A shallow copy duplicates only the top-level properties. Nested objects and arrays are still shared references between the original and the copy. A deep copy recursively duplicates everything, creating fully independent data.
const original = { a: 1, nested: { b: 2 } };
// Shallow copies
const spread = { ...original };
const assign = Object.assign({}, original);
spread.nested.b = 99;
console.log(original.nested.b); // 99 (shared reference!)
// Deep copy with structuredClone (modern)
const deep = structuredClone(original);
deep.nested.b = 99;
console.log(original.nested.b); // 2 (independent)
structuredClone() handles most types: nested objects, arrays, Dates, RegExp, Map, Set, ArrayBuffer, Error objects, and even circular references. However, it throws on functions and DOM nodes, and class instances lose their prototype (they come back as plain objects). The older JSON.parse(JSON.stringify()) trick is worse: it silently drops undefined and functions, turns Infinity and NaN into null, converts Date objects to strings, and throws on circular references.
Use a library like Lodash's cloneDeep when you need to clone class instances or handle edge cases structuredClone doesn't support.
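The class-instance limitation can be seen directly (a minimal sketch; structuredClone is available in modern browsers and Node.js 17+):

```javascript
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  norm() { return Math.hypot(this.x, this.y); }
}

const p = new Point(3, 4);
const clone = structuredClone(p);

console.log(clone.x, clone.y);       // 3 4: the data survives
console.log(clone instanceof Point); // false: the prototype is lost
console.log(typeof clone.norm);      // 'undefined': methods are gone
```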
Async (8)
The event loop is the mechanism that allows JavaScript to perform non-blocking I/O despite being single-threaded. It continuously checks the call stack: when the stack is empty, it picks the next task from the queue.
There are two task queues with different priorities. Macrotasks (or just "tasks") include setTimeout, setInterval, I/O callbacks, and UI rendering. Microtasks include Promise.then/catch/finally, queueMicrotask, and MutationObserver. After each macrotask completes, the engine drains the entire microtask queue before processing the next macrotask or rendering.
console.log('1');
setTimeout(() => console.log('2'), 0);
Promise.resolve().then(() => console.log('3'));
queueMicrotask(() => console.log('4'));
console.log('5');
// Output: 1, 5, 3, 4, 2
Synchronous code runs first (1, 5), then all microtasks are drained (3, 4 — in order of scheduling), and finally the macrotask runs (2). This is why a Promise callback always executes before a setTimeout(..., 0) callback.
These four static methods handle multiple concurrent promises differently:
Promise.all(promises) resolves when all promises resolve. If any one rejects, the whole thing rejects immediately with that error. Use it when you need all results and any failure is fatal.
Promise.allSettled(promises) waits for every promise to settle (resolve or reject) and returns an array of { status, value/reason } objects. Use it when you want to know the outcome of each promise regardless of failures.
Promise.race(promises) settles with the first promise that settles, whether it resolves or rejects. Useful for timeouts.
Promise.any(promises) resolves with the first promise that fulfills. It only rejects if all promises reject, throwing an AggregateError. Useful for redundant sources where you want the fastest success.
// Timeout pattern with Promise.race
const fetchWithTimeout = (url, ms) =>
  Promise.race([
    fetch(url),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('Timeout')), ms)
    ),
  ]);
Choose based on your failure tolerance: all is strict, allSettled is resilient, race is first-wins, any is first-success-wins.
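A quick sketch contrasting allSettled and any on the same inputs (tasks is illustrative):

```javascript
const tasks = [
  Promise.resolve('fast'),
  Promise.reject(new Error('boom')),
];

(async () => {
  // allSettled: one result object per input, never rejects
  const settled = await Promise.allSettled(tasks);
  console.log(settled[0]);        // { status: 'fulfilled', value: 'fast' }
  console.log(settled[1].status); // 'rejected'

  // any: first fulfillment wins; it only rejects if everything fails
  const first = await Promise.any(tasks);
  console.log(first); // 'fast'
})();
```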
An async function always returns a Promise. When the engine encounters await, it wraps the awaited expression in Promise.resolve(), then suspends the function's execution and returns control to the caller. The remainder of the function is scheduled as a microtask continuation that runs when the awaited promise settles.
Conceptually, the engine transforms async/await into a state machine similar to generators:
async function fetchUser(id) {
  const res = await fetch(`/users/${id}`);
  const data = await res.json();
  return data;
}
// Roughly equivalent to:
function fetchUser(id) {
  return fetch(`/users/${id}`)
    .then(res => res.json())
    .then(data => data);
}
Using await in a loop makes requests sequential — each waits for the previous to finish. For independent operations, Promise.all runs them concurrently:
// Sequential - slow
for (const id of ids) { await fetchUser(id); }
// Concurrent - fast
await Promise.all(ids.map(id => fetchUser(id)));
The sequential approach is O(n * latency) while Promise.all is O(max latency).
Several pitfalls regularly catch developers:
Swallowing errors — forgetting to add .catch() at the end of a chain or not using try/catch with await leads to unhandled rejections that silently fail.
Creating but not returning — inside a .then() callback, if you create a new Promise but don't return it, the chain doesn't wait for it.
// Bug: inner promise is fire-and-forget
fetch('/api').then(res => {
  res.json().then(data => save(data)); // not returned!
});
// Fix: return the inner promise
fetch('/api')
  .then(res => res.json())
  .then(data => save(data));
Mixing callbacks and promises without proper wrapping leads to errors that escape the Promise chain entirely.
Sequential await in loops when operations are independent (use Promise.all instead).
For cleanup, use .finally() which runs regardless of outcome:
const conn = await openConnection();
try {
  await doWork(conn);
} finally {
  await conn.close();
}
Callback hell (or the "pyramid of doom") occurs when asynchronous operations are nested inside each other's callbacks, creating deeply indented, hard-to-read code that's difficult to debug and maintain:
getUser(id, (err, user) => {
  getOrders(user.id, (err, orders) => {
    getOrderDetails(orders[0].id, (err, details) => {
      // deeply nested, error handling repeated everywhere
    });
  });
});
Modern JavaScript solves this in two ways. Promises flatten the nesting into a chain:
getUser(id)
  .then(user => getOrders(user.id))
  .then(orders => getOrderDetails(orders[0].id))
  .catch(err => handleError(err));
async/await makes asynchronous code read like synchronous code:
const user = await getUser(id);
const orders = await getOrders(user.id);
const details = await getOrderDetails(orders[0].id);
Beyond readability, Promises provide centralized error handling through .catch() or try/catch blocks, replacing the repetitive if (err) checks in every callback.
When you forget to await a Promise, the function continues executing without waiting for the asynchronous operation to complete. The Promise runs in the background, and its result is silently ignored. This leads to several problems.
First, any error thrown inside the forgotten Promise becomes an unhandled rejection rather than being caught by the surrounding try/catch:
async function saveData(data) {
  try {
    validate(data);
    fetch('/api/save', { method: 'POST', body: JSON.stringify(data) });
    // fetch is not awaited — errors won't be caught here
    console.log('Saved!'); // runs immediately, before save completes
  } catch (err) {
    // never catches fetch errors
  }
}
Second, subsequent code that depends on the result will use undefined (or the Promise object itself) instead of the resolved value. Third, race conditions emerge because operations that should be sequential run concurrently.
ESLint rules like @typescript-eslint/no-floating-promises flag Promises that are neither awaited, returned, nor explicitly voided, catching this mistake at lint time.
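A minimal configuration sketch enabling that rule (assumes a TypeScript project; the file name and tsconfig path are illustrative):

```javascript
// .eslintrc.cjs — the rule needs type information to detect floating promises
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: { project: './tsconfig.json' },
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-floating-promises': 'error',
  },
};
```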
Exponential backoff retries a failing operation with progressively longer delays between attempts. This is essential for resilient API clients that handle transient failures (network blips, rate limiting) gracefully.
async function withRetry(fn, { maxRetries = 3, baseDelay = 1000 } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries) throw err;
      const delay = baseDelay * Math.pow(2, attempt);
      const jitter = delay * (0.5 + Math.random() * 0.5);
      await new Promise(resolve => setTimeout(resolve, jitter));
    }
  }
}
// Usage
const data = await withRetry(() => fetch('/api/data').then(r => {
  if (!r.ok) throw new Error(`HTTP ${r.status}`);
  return r.json();
}));
The jitter (random variation) prevents the "thundering herd" problem where many clients retry at the exact same time after an outage. A good implementation also distinguishes between retryable errors (5xx, network) and non-retryable ones (4xx client errors) to avoid wasting attempts on permanent failures.
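One possible sketch of that distinction (isRetryable is a hypothetical helper, not part of any library):

```javascript
// Sketch: classify HTTP status codes before retrying
function isRetryable(status) {
  if (status >= 500) return true; // server errors are usually transient
  if (status === 429) return true; // rate limited: retry after backoff
  return false;                    // other 4xx are permanent client errors
}

console.log(isRetryable(503)); // true
console.log(isRetryable(429)); // true
console.log(isRetryable(404)); // false
```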
AbortController provides a standard way to cancel asynchronous operations. You create a controller, pass its signal to fetch, and call abort() when cancellation is needed:
const controller = new AbortController();
fetch('/api/large-data', { signal: controller.signal })
  .then(res => res.json())
  .catch(err => {
    if (err.name === 'AbortError') {
      console.log('Request was cancelled');
    } else {
      throw err;
    }
  });
// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);
This is particularly important in SPAs where a user navigates away before a request completes. Without cancellation, the response callback might try to update unmounted component state. In React, you wire it into useEffect cleanup. The AbortSignal.timeout(ms) static method is a shorthand for timeout-based cancellation without manual timers. Multiple fetch calls can share the same signal for batch cancellation.
ES6+ (9)
Destructuring is a syntax for extracting values from arrays or properties from objects into distinct variables. It reduces boilerplate and makes code more readable.
With nested objects, you mirror the structure on the left side. Default values kick in when the extracted value is undefined:
const response = {
  data: {
    user: { name: 'Alice', role: 'admin' },
    meta: { page: 1 },
  },
};
const {
  data: {
    user: { name, role = 'viewer' },
    meta: { page, limit = 10 },
  },
} = response;
// name='Alice', role='admin' (exists, default ignored), page=1, limit=10
Array destructuring works by position and supports skipping elements:
const [first, , third] = [1, 2, 3]; // first=1, third=3
You can rename during destructuring with { originalName: newName } and combine with rest: const { id, ...rest } = obj. Destructuring in function parameters is particularly powerful for options objects, making the API self-documenting.
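A sketch of an options-object parameter combining defaults and renaming (connect and its parameters are illustrative):

```javascript
// Defaults and renaming directly in the parameter list;
// the `= {}` fallback makes the whole argument optional
function connect({ host = 'localhost', port = 5432, user: username = 'guest' } = {}) {
  return `${username}@${host}:${port}`;
}

console.log(connect());                            // 'guest@localhost:5432'
console.log(connect({ port: 3306, user: 'ada' })); // 'ada@localhost:3306'
```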
CommonJS (require/module.exports) is the original Node.js module system. Modules are loaded synchronously at runtime, require can appear anywhere in code (conditionally, inside loops), and module.exports is a plain object that can be reassigned.
ES Modules (import/export) are the language standard. Imports are hoisted and statically analyzed at parse time, making them suitable for tree-shaking. They are asynchronous by nature.
// CommonJS
const fs = require('fs');
module.exports = { readFile: fs.readFile };
// ESM
import { readFile } from 'fs';
export { readFile };
Key differences: ESM import statements must be at the top level (no conditional imports without dynamic import()). ESM exports are live bindings — if the exporting module updates a value, importers see the update. CommonJS exports are value copies.
Mixing them is possible but tricky: ESM can import from CommonJS, but CommonJS historically could not require() ESM and had to fall back to dynamic import() (recent Node.js versions can require() ESM modules whose top-level evaluation is synchronous). In Node.js, .mjs files are ESM and .cjs files are CommonJS, or you set "type": "module" in package.json.
An iterator is any object with a next() method that returns { value, done }. An iterable is any object with a [Symbol.iterator]() method that returns an iterator. Arrays, strings, Maps, and Sets are all built-in iterables.
A generator is a special function (declared with function*) that produces an iterator. Execution pauses at each yield and resumes when next() is called:
function* paginate(items, pageSize) {
  for (let i = 0; i < items.length; i += pageSize) {
    yield items.slice(i, i + pageSize);
  }
}
const pages = paginate(largeArray, 10);
pages.next().value; // first 10 items
pages.next().value; // next 10 items
Practical use cases include lazy evaluation of large datasets (only computing values as needed), implementing custom iteration protocols, paginated API consumption, and building streaming data pipelines. yield* delegates to another iterable, letting you compose generators:
function* concat(a, b) {
  yield* a;
  yield* b;
}
Generators are also the foundation of how async/await was originally polyfilled (via libraries like co).
Optional chaining short-circuits a property access chain when it encounters null or undefined, returning undefined instead of throwing a TypeError. It replaces verbose defensive checks:
// Before: manual checks at every level
const city = user && user.address && user.address.city;
// After: clean and readable
const city = user?.address?.city;
It works with property access (obj?.prop), bracket notation (obj?.[expr]), and method calls (obj?.method()). If the left side is null or undefined, the expression short-circuits immediately.
It pairs well with the nullish coalescing operator (??) for defaults:
const theme = config?.appearance?.theme ?? 'light';
Important: optional chaining only checks for null/undefined, not other falsy values. 0?.toString() still returns "0" because 0 is not nullish. Overusing it can hide bugs — if a property should always exist, a TypeError is actually helpful for catching mistakes early.
A Proxy wraps an object and intercepts fundamental operations (get, set, delete, function calls, etc.) through handler "traps". This lets you add custom behavior to object interactions without modifying the target:
const validator = new Proxy({}, {
  set(target, prop, value) {
    if (prop === 'age' && typeof value !== 'number') {
      throw new TypeError('Age must be a number');
    }
    target[prop] = value;
    return true;
  },
  get(target, prop) {
    return prop in target ? target[prop] : `Property ${prop} not found`;
  },
});
validator.age = 25; // works
validator.age = 'old'; // throws TypeError
Practical use cases include: validation layers, reactive systems (Vue 3 uses Proxies for reactivity), logging/debugging property access, implementing negative array indices, default property values, and API response memoization.
Proxies are slower than direct property access because every operation goes through the handler. Avoid them in hot paths (tight loops, frequent reads). Reflect methods are the companion API — they mirror proxy traps and provide the default behavior you often want to delegate to.
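A minimal sketch of delegating a trap's default behavior to Reflect (the logging use case is illustrative):

```javascript
// A logging proxy that records property reads, then delegates to Reflect
const accessLog = [];
const target = { x: 1, y: 2 };

const logged = new Proxy(target, {
  get(obj, prop, receiver) {
    accessLog.push(prop);
    return Reflect.get(obj, prop, receiver); // default [[Get]] behavior
  },
});

console.log(logged.x + logged.y); // 3
console.log(accessLog);           // ['x', 'y']
```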
Symbol is a primitive type introduced in ES6 that creates guaranteed unique identifiers. Every Symbol() call produces a distinct value, even with the same description:
const s1 = Symbol('id');
const s2 = Symbol('id');
s1 === s2; // false
Practical applications include creating private-like object properties (Symbols don't appear in for...in, Object.keys, or JSON.stringify, though Object.getOwnPropertySymbols can still reveal them), preventing property name collisions in shared objects, and defining protocol interfaces.
Well-known Symbols let you customize built-in behavior:
class Range {
  constructor(start, end) { this.start = start; this.end = end; }
  [Symbol.iterator]() {
    let current = this.start;
    const end = this.end;
    return {
      next: () => current <= end
        ? { value: current++, done: false }
        : { done: true },
    };
  }
}
for (const n of new Range(1, 5)) console.log(n); // 1,2,3,4,5
Symbol.for('key') creates globally shared Symbols via a registry, useful for cross-realm communication.
WeakMap and WeakSet hold "weak" references to their keys (WeakMap) or values (WeakSet), meaning the garbage collector can reclaim them if there are no other references. In contrast, Map and Set hold strong references that prevent garbage collection.
let element = document.getElementById('btn');
const metadata = new WeakMap();
metadata.set(element, { clicks: 0 });
// When element is removed from DOM and variable is reassigned:
element = null;
// The WeakMap entry is automatically garbage-collected
Key differences: WeakMap keys must be objects (not primitives). WeakMaps and WeakSets are not iterable — you cannot loop over them or check their size, because entries can disappear at any time.
Practical use cases include: associating metadata with DOM nodes without causing memory leaks, caching computed values for objects without preventing their cleanup, and storing private data for class instances:
const privateData = new WeakMap();
class User {
  constructor(name) {
    privateData.set(this, { name });
  }
  getName() {
    return privateData.get(this).name;
  }
}
Use Map/Set when you need iteration or size tracking; use WeakMap/WeakSet when lifecycle should follow the key object.
Despite using the same ... syntax, spread and rest serve opposite purposes. Spread expands an iterable into individual elements. Rest collects multiple elements into an array or object.
// Spread: expanding
const arr = [1, 2, 3];
const copy = [...arr, 4, 5]; // [1, 2, 3, 4, 5]
const obj = { a: 1, b: 2 };
const merged = { ...obj, c: 3 }; // { a: 1, b: 2, c: 3 }
// Rest: collecting
function sum(...numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}
const { a, ...remaining } = { a: 1, b: 2, c: 3 };
// a = 1, remaining = { b: 2, c: 3 }
Spread creates a shallow copy — nested objects are still references to the originals. Mutating a nested property in the copy affects the original. For deep copying, use structuredClone() (modern) or a library. Spread with objects only copies enumerable own properties, and later properties override earlier ones, making it useful for applying defaults.
Template literals (backtick strings) support embedded expressions and multi-line strings. Tagged templates are a more powerful feature where a function processes the template literal's parts.
A tag function receives an array of string segments and the interpolated values as separate arguments:
function highlight(strings, ...values) {
  return strings.reduce((result, str, i) => {
    const value = i < values.length ? `<mark>${values[i]}</mark>` : '';
    return result + str + value;
  }, '');
}
const name = 'Alice';
const role = 'admin';
highlight`User ${name} has role ${role}`;
// 'User <mark>Alice</mark> has role <mark>admin</mark>'
Practical use cases include SQL query builders that auto-escape parameters to prevent injection, CSS-in-JS libraries (styled-components uses styled.div as a tag), internationalization (i18n) where the tag function handles translations, and HTML sanitization. The built-in String.raw tag preserves raw strings without processing escape sequences, useful for regex patterns and file paths.
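A quick sketch of String.raw in action:

```javascript
// Escape sequences are left untouched in the raw form
console.log(String.raw`C:\temp\new`.length); // 11: backslashes preserved
console.log('C:\temp\new'.length);           // 9: \t and \n collapsed to single chars

// Handy for regex sources, which are full of backslashes
const re = new RegExp(String.raw`\d+\.\d+`);
console.log(re.test('3.14')); // true
```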
DOM & Browser (7)
Event delegation is a pattern where you attach a single event listener to a parent element instead of individual listeners on each child. It works because of event bubbling — events fired on child elements propagate up through the DOM tree.
// Instead of adding listeners to every button:
document.querySelector('.toolbar').addEventListener('click', (e) => {
  const button = e.target.closest('button');
  if (!button) return;
  const action = button.dataset.action;
  if (action === 'save') save();
  if (action === 'delete') remove();
});
Benefits: fewer event listeners means less memory usage, dynamically added elements automatically work without re-attaching listeners, and initialization is simpler.
event.target is the element that originally triggered the event (the deepest child clicked). event.currentTarget is the element the listener is attached to (the parent). Using closest() is essential for delegation because event.target might be a span inside the button rather than the button itself.
Event delegation doesn't work for events that don't bubble (like focus, blur, and scroll), though you can use the capture phase (or the bubbling focusin/focusout variants) as a workaround.
All three store data in the browser, but differ in capacity, lifetime, and server accessibility.
localStorage: ~5-10MB per origin, persists until explicitly cleared, not sent to the server, synchronous API. Best for user preferences, cached data, and offline state.
sessionStorage: same API and capacity as localStorage, but data is scoped to the browser tab and cleared when the tab closes. Best for temporary form state or wizard progress.
Cookies: ~4KB per cookie, sent with every HTTP request to the matching domain (adding bandwidth overhead), can be set from the server via Set-Cookie header. Support expiration dates, Secure, HttpOnly, and SameSite flags for security.
localStorage.setItem('theme', 'dark');
sessionStorage.setItem('formStep', '2');
// HttpOnly can only be set by the server, e.g. Set-Cookie: token=abc; Secure; HttpOnly; SameSite=Strict
document.cookie = 'prefs=compact; Secure; SameSite=Strict; Max-Age=3600';
Use cookies for authentication tokens (with HttpOnly to prevent XSS access), localStorage for persistent client-side data, and sessionStorage for ephemeral tab-specific state. For structured data beyond simple key-value pairs, consider IndexedDB.
The Critical Rendering Path (CRP) is the sequence of steps the browser takes to convert HTML, CSS, and JavaScript into pixels on screen: parse HTML into DOM, parse CSS into CSSOM, combine into Render Tree, compute Layout, and Paint.
CSS is render-blocking — the browser won't paint until CSSOM is built. JavaScript is parser-blocking — <script> tags halt HTML parsing until the script downloads and executes.
Optimization strategies:
<!-- Inline critical CSS -->
<style>/* above-the-fold styles */</style>
<!-- Defer non-critical CSS -->
<link rel="preload" href="full.css" as="style" onload="this.rel='stylesheet'">
<!-- defer: download parallel, execute after HTML parsed -->
<script defer src="app.js"></script>
<!-- async: download parallel, execute immediately when ready -->
<script async src="analytics.js"></script>
defer maintains execution order and runs after parsing — ideal for app code. async executes as soon as downloaded with no order guarantee — ideal for independent scripts like analytics. Other optimizations include minimizing DOM depth, using font-display to prevent font-blocking, lazy-loading below-the-fold images, and using will-change for GPU-composited animations.
requestAnimationFrame (rAF) synchronizes with the browser's repaint cycle (typically 60fps), while setTimeout fires at a fixed interval regardless of the browser's rendering schedule.
// Bad: setTimeout-based animation
function animateWithTimeout(element, target) {
  let pos = 0;
  function step() {
    pos += 2;
    element.style.transform = `translateX(${pos}px)`;
    if (pos < target) setTimeout(step, 16); // ~60fps guess
  }
  step();
}
// Good: rAF-based animation
function animateWithRAF(element, target) {
  let pos = 0;
  function step() {
    pos += 2;
    element.style.transform = `translateX(${pos}px)`;
    if (pos < target) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
Key advantages of rAF: it pauses when the tab is in the background (saving CPU/battery), aligns with the display refresh rate for jank-free rendering, and batches DOM reads/writes efficiently. setTimeout can drift, miss frames, or run when the tab is invisible. rAF also receives a high-resolution timestamp parameter useful for time-based animations independent of frame rate. For simple animations, prefer CSS transitions or the Web Animations API, reserving rAF for complex programmatic animations.
Web Workers run JavaScript in a separate background thread, preventing heavy computation from blocking the main thread (and thus the UI). They communicate with the main thread via message passing.
// main.js
const worker = new Worker('worker.js');
worker.postMessage({ data: largeArray });
worker.onmessage = (e) => {
  console.log('Result:', e.data);
};

// worker.js
self.onmessage = (e) => {
  const result = heavyComputation(e.data.data);
  self.postMessage(result);
};
Use cases include image/video processing, data parsing (large CSV/JSON), encryption, complex calculations, and search indexing — any CPU-intensive task that would cause the UI to freeze.
Limitations: workers have no access to the DOM, window, or document. They communicate only through postMessage (which serializes data via structured cloning) or SharedArrayBuffer for zero-copy sharing. Each worker has overhead (separate thread, memory), so don't spawn them for trivial tasks. For transferring large data efficiently, use Transferable objects which move ownership rather than copying.
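A minimal sketch of transfer semantics; MessageChannel is used so it runs without a separate worker file (available in browsers and Node.js 15+):

```javascript
// Transferring an ArrayBuffer moves ownership: the sender's copy is detached.
const { port1, port2 } = new MessageChannel();
const buffer = new ArrayBuffer(1024);

console.log(buffer.byteLength); // 1024: still owned by this side
port1.postMessage(buffer, [buffer]); // second argument lists transferables
console.log(buffer.byteLength); // 0: detached, ownership moved

port1.close();
port2.close();
```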
IntersectionObserver asynchronously watches for changes in the intersection of a target element with an ancestor element or the viewport. Unlike scroll event listeners, the intersection bookkeeping is handled internally by the browser and callbacks are delivered asynchronously, so you avoid running expensive getBoundingClientRect() calls on every scroll event.
const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src;
        observer.unobserve(img); // stop watching once loaded
      }
    });
  },
  { rootMargin: '200px', threshold: 0 }
);
document.querySelectorAll('img[data-src]').forEach(img => {
  observer.observe(img);
});
This is the standard pattern for lazy-loading images. The rootMargin: '200px' starts loading images 200px before they enter the viewport for a smoother experience.
Other use cases: infinite scroll (observe a sentinel element at the bottom of the list), analytics (track which sections users actually see), sticky header transitions, and fade-in animations on scroll. It's far more performant than scroll listeners because the browser optimizes observation internally and batches callback invocations.
XSS prevention requires defense in depth across multiple layers:
Output encoding is the primary defense. Never insert untrusted data as raw HTML. Use context-appropriate encoding — HTML entity encoding for HTML content, JavaScript encoding for script contexts, URL encoding for URLs:
// Dangerous: innerHTML with user input
element.innerHTML = userInput; // XSS vulnerability
// Safe: textContent escapes HTML automatically
element.textContent = userInput;
// In React, JSX auto-escapes by default
<div>{userInput}</div> // safe
<div dangerouslySetInnerHTML={{ __html: userInput }} /> // dangerous
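If you must assemble HTML strings by hand (for example, server-side templating without a framework), a minimal entity encoder for the HTML text context might look like this. This is an illustrative sketch, not a substitute for textContent or a vetted sanitizer:

```javascript
// Minimal HTML-entity encoder for the HTML *text* context only.
// Attribute, URL, and script contexts each need their own encoding.
const HTML_ESCAPES = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
};

function escapeHtml(str) {
  return String(str).replace(/[&<>"']/g, (ch) => HTML_ESCAPES[ch]);
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// → &lt;img src=x onerror=alert(1)&gt;
```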
Content Security Policy (CSP) headers restrict which scripts can execute, blocking injected inline scripts. Input validation rejects or sanitizes data at the boundary. HttpOnly cookies prevent JavaScript from reading authentication tokens.
The three XSS types: Stored (malicious script saved to database, affects all users viewing it), Reflected (script in URL parameter reflected back in response), and DOM-based (client-side JavaScript inserts untrusted data into the DOM without server involvement). Use libraries like DOMPurify when you must render user-supplied HTML.
Patterns & Architecture (7)
The module pattern uses closures and IIFEs to create private scope, encapsulating implementation details and exposing only a public API. Before ES modules, all scripts shared the global scope, making name collisions and uncontrolled access to internals a constant problem.
const UserService = (function() {
let users = [];
function validate(user) {
return user.name && user.email;
}
return {
add(user) {
if (!validate(user)) throw new Error('Invalid user');
users.push(user);
},
getAll() {
return [...users];
},
};
})();
UserService.add({ name: 'Alice', email: 'a@b.com' });
// UserService.users is undefined (private)
// UserService.validate is undefined (private)
The revealing module pattern is a variation where all functions are defined privately, and the return object maps public names to private functions. This makes it easier to see the public API at a glance. While ES modules have largely replaced this pattern, understanding it is important for maintaining legacy codebases and understanding closure-based encapsulation.
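As a sketch, here is the same UserService written in the revealing style: every function is defined privately, and the return statement at the bottom reveals the public surface in one place.

```javascript
// Revealing module pattern: all functions private, public API listed once.
const UserService = (function () {
  let users = [];

  function validate(user) {
    return Boolean(user.name && user.email);
  }

  function add(user) {
    if (!validate(user)) throw new Error('Invalid user');
    users.push(user);
  }

  function getAll() {
    return [...users];
  }

  // The return object reads like a table of contents for the module
  return { add, getAll };
})();

UserService.add({ name: 'Alice', email: 'a@b.com' });
console.log(UserService.getAll()); // [{ name: 'Alice', email: 'a@b.com' }]
```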
The Observer pattern defines a one-to-many dependency: when a subject changes state, all registered observers are notified automatically. This decouples the source of events from the consumers.
class EventEmitter {
#listeners = new Map();
on(event, callback) {
if (!this.#listeners.has(event)) {
this.#listeners.set(event, new Set());
}
this.#listeners.get(event).add(callback);
return () => this.#listeners.get(event).delete(callback);
}
emit(event, data) {
this.#listeners.get(event)?.forEach(cb => cb(data));
}
}
const bus = new EventEmitter();
const unsub = bus.on('userLogin', user => console.log(`${user} logged in`));
bus.emit('userLogin', 'Alice');
unsub(); // cleanup
The DOM's addEventListener is a built-in Observer implementation. Node.js has EventEmitter in its core. RxJS extends the pattern with Observables that support operators for transforming, filtering, and combining event streams. React's state management (Context, Redux, Zustand) also builds on this pattern — components subscribe to store changes and re-render when relevant state updates.
Inheritance creates an "is-a" relationship through a class hierarchy. A child class extends a parent, inheriting all its behavior. Deep hierarchies tend toward tight coupling and become fragile when requirements change.
Composition creates a "has-a" relationship by combining small, focused pieces of functionality. Objects are assembled from behaviors rather than inheriting from a base class:
// Inheritance: rigid hierarchy
class Animal { move() {} }
class Bird extends Animal { fly() {} }
class Penguin extends Bird {} // Penguins can't fly!
// Composition: flexible assembly
const canWalk = (state) => ({
walk: () => state.position += 1,
});
const canSwim = (state) => ({
swim: () => state.position += 2,
});
function createPenguin() {
const state = { position: 0 };
return { ...canWalk(state), ...canSwim(state) };
}
Composition avoids the "gorilla-banana problem" (wanting a banana but getting the entire gorilla holding it plus the jungle). Inheritance is still appropriate for genuine taxonomic hierarchies where "is-a" truly applies and the hierarchy is unlikely to change (e.g., ReadableStream extends Stream).
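A short usage sketch of the factory above, restated so it is self-contained (the getPosition accessor is added here for illustration; it is not part of the original example):

```javascript
// Each behavior closes over the same shared state object.
const canWalk = (state) => ({ walk: () => (state.position += 1) });
const canSwim = (state) => ({ swim: () => (state.position += 2) });

function createPenguin() {
  const state = { position: 0 }; // private state, captured by closure
  return {
    ...canWalk(state),
    ...canSwim(state),
    getPosition: () => state.position, // hypothetical accessor for illustration
  };
}

const penguin = createPenguin();
penguin.walk(); // position: 1
penguin.swim(); // position: 3
console.log(penguin.getPosition()); // 3
// No fly() anywhere: the penguin only gets the behaviors it actually has.
```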
Robust error handling in a large application requires a layered strategy with custom error types, boundaries, and centralized reporting.
Custom error classes provide semantic meaning:
class AppError extends Error {
constructor(message, code, statusCode = 500) {
super(message);
this.name = 'AppError';
this.code = code;
this.statusCode = statusCode;
}
}
class NotFoundError extends AppError {
constructor(resource) {
super(`${resource} not found`, 'NOT_FOUND', 404);
}
}
Operational errors (network failures, invalid input, missing resources) are expected and recoverable — handle gracefully with user-friendly messages. Programmer errors (null references, type errors) are bugs — log them, report to monitoring, and let the process crash cleanly in server contexts.
Best practices: use try/catch at async boundaries, add global handlers (window.onerror, unhandledrejection, process.on('uncaughtException')), implement React Error Boundaries for UI resilience, never catch errors silently, and send structured error data to monitoring services (Sentry, Datadog). Every catch block should either recover, re-throw, or report.
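A sketch of wiring those global handlers in Node, where reportToMonitoring is a hypothetical stand-in for a real Sentry/Datadog client:

```javascript
// Hypothetical central reporter; a real app would call its monitoring SDK here.
function reportToMonitoring(error, context) {
  console.error(`[${context}]`, error instanceof Error ? error.stack : error);
}

// Promise rejections that no .catch() handled
process.on('unhandledRejection', (reason) => {
  reportToMonitoring(reason, 'unhandledRejection');
});

// Synchronous exceptions that escaped every try/catch: report, then crash cleanly
process.on('uncaughtException', (err) => {
  reportToMonitoring(err, 'uncaughtException');
  process.exitCode = 1;
});

// Browser equivalents, for reference:
// window.addEventListener('error', (e) => reportToMonitoring(e.error, 'error'));
// window.addEventListener('unhandledrejection', (e) => reportToMonitoring(e.reason, 'unhandledrejection'));
```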
Immutability means data cannot be changed after creation. Instead of modifying existing objects, you create new ones with the desired changes. This eliminates a whole class of bugs caused by unexpected mutations and makes state changes explicit and traceable.
// Mutable: hard to track who changed what
const state = { users: [{ name: 'Alice' }] };
state.users.push({ name: 'Bob' }); // mutates in-place
// Immutable: every change produces a new reference
const newState = {
...state,
users: [...state.users, { name: 'Bob' }],
};
Enforcement techniques:
- Object.freeze() for shallow immutability (nested objects remain mutable)
- Spread syntax / Object.assign for immutable updates
- structuredClone() for deep copies when needed
- Libraries like Immer that let you write "mutable" code that produces immutable updates
- TypeScript's readonly and Readonly<T> for compile-time enforcement
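The shallow nature of Object.freeze is worth demonstrating with a quick sketch:

```javascript
const config = Object.freeze({ retries: 3, api: { url: '/v1' } });

// Top level is frozen: the write is ignored (sloppy mode) or throws (strict mode)
try { config.retries = 5; } catch {}
console.log(config.retries); // 3

// ...but freeze is shallow, so nested objects remain mutable
config.api.url = '/v2';
console.log(config.api.url); // '/v2'

// structuredClone produces a genuinely independent deep copy
const snapshot = structuredClone(config);
snapshot.api.url = '/v3';
console.log(config.api.url); // still '/v2'
```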
Immutability is central to React's rendering model (shallow comparison for re-renders), Redux, and functional programming. The performance concern (creating new objects) is largely mitigated by structural sharing in libraries and engine optimizations.
A higher-order function (HOF) is a function that takes a function as an argument, returns a function, or both. They enable abstraction over behavior, not just data.
Beyond the common array methods:
// Debounce: returns a debounced wrapper
function debounce(fn, delay) {
let timer;
return (...args) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), delay);
};
}
// Memoize: caches results of pure functions
function memoize(fn) {
const cache = new Map();
return (...args) => {
const key = JSON.stringify(args);
if (cache.has(key)) return cache.get(key);
const result = fn(...args);
cache.set(key, result);
return result;
};
}
// Pipe: compose left-to-right
const pipe = (...fns) => (x) => fns.reduce((v, f) => f(v), x);
const transform = pipe(
str => str.trim(),
str => str.toLowerCase(),
str => str.replace(/\s+/g, '-'),
);
transform(' Hello World '); // 'hello-world'
Other examples: addEventListener (accepts a callback), setTimeout, Array.from (accepts a mapping function), middleware patterns in Express, and React's HOC pattern. HOFs are the foundation of functional programming in JavaScript.
Both techniques limit how often a function executes, but with different strategies.
Debouncing delays execution until a pause in events. The function only fires after the caller stops invoking it for a specified time. Use for: search-as-you-type, window resize handlers, form validation after typing stops.
Throttling ensures the function fires at most once per time interval, regardless of how many times it's called. Use for: scroll event handlers, mouse move tracking, rate-limiting API calls.
function debounce(fn, delay) {
let timer;
return (...args) => {
clearTimeout(timer);
timer = setTimeout(() => fn(...args), delay);
};
}
function throttle(fn, interval) {
let lastTime = 0;
return (...args) => {
const now = Date.now();
if (now - lastTime >= interval) {
lastTime = now;
fn(...args);
}
};
}
// Debounce: fires 300ms after user stops typing
input.addEventListener('input', debounce(handleSearch, 300));
// Throttle: fires at most once per 100ms during scroll
window.addEventListener('scroll', throttle(handleScroll, 100));
Key distinction: debounce waits for silence, throttle guarantees regular intervals. For scroll-based animations, throttle provides smooth updates. For search input, debounce avoids unnecessary API calls while typing.
Testing & Tooling (5)
The key principle is to isolate the function under test from the network by mocking the HTTP layer. There are several approaches depending on your testing strategy.
Mocking fetch directly with your test framework:
import { describe, it, expect, vi } from 'vitest';
import { getUser } from './api';
describe('getUser', () => {
it('returns parsed user data', async () => {
global.fetch = vi.fn().mockResolvedValue({
ok: true,
json: () => Promise.resolve({ id: 1, name: 'Alice' }),
});
const user = await getUser(1);
expect(user).toEqual({ id: 1, name: 'Alice' });
expect(fetch).toHaveBeenCalledWith('/api/users/1');
});
it('throws on non-ok response', async () => {
global.fetch = vi.fn().mockResolvedValue({ ok: false, status: 404 });
await expect(getUser(999)).rejects.toThrow();
});
});
Always test both success and failure paths. Mock at the boundary (HTTP layer), not internal implementation. Use msw (Mock Service Worker) when you want realistic network behavior in integration tests — it intercepts at the network level, works with any HTTP client, and avoids coupling tests to specific fetch call signatures.
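Frameworks restore spies for you (e.g. vi.restoreAllMocks); without one, the stub-and-restore idiom behind them can be sketched as:

```javascript
// Minimal stub-and-restore helper, the idiom test frameworks implement for you.
// Temporarily replaces obj[key] with stub, runs fn, and always restores.
function withStub(obj, key, stub, fn) {
  const original = obj[key];
  obj[key] = stub;
  try {
    return fn();
  } finally {
    obj[key] = original; // restored even if fn throws
  }
}

// Hypothetical API object for illustration
const api = { fetchUser: () => { throw new Error('network'); } };

const result = withStub(api, 'fetchUser', () => ({ id: 1 }), () => api.fetchUser());
console.log(result); // { id: 1 }
// After the call, api.fetchUser is the original (throwing) implementation again.
```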
Mocking replaces real dependencies with controlled substitutes that simulate specific behaviors. Mocks let you test a unit in isolation, control edge cases (errors, timeouts), and run tests fast without external services.
// Mock a database repository
const mockRepo = {
findById: vi.fn().mockResolvedValue({ id: 1, name: 'Alice' }),
save: vi.fn().mockResolvedValue(true),
};
const service = new UserService(mockRepo);
await service.rename(1, 'Bob');
expect(mockRepo.findById).toHaveBeenCalledWith(1);
expect(mockRepo.save).toHaveBeenCalledWith({ id: 1, name: 'Bob' });
Use mocks for unit tests: testing business logic, edge cases, error handling, and fast feedback loops. Use integration tests (real database, real HTTP) when you need to verify that components work together correctly — schema changes, query correctness, middleware chains.
The danger of over-mocking is testing your mocks instead of your code. A good rule: mock at boundaries (I/O, network, time), but let internal logic execute for real. If you find yourself mocking internal functions of the module under test, the test is too tightly coupled to implementation.
Debugging memory leaks follows a systematic process using Chrome DevTools Memory tab.
Step 1: Confirm the leak. Open the Performance Monitor (Cmd+Shift+P > "Performance Monitor") and watch the JS Heap size over time. If it grows steadily without dropping during GC cycles, there's a leak.
Step 2: Take heap snapshots. Take a snapshot, perform the suspected leaking action (e.g., open/close a modal 10 times), force GC, take another snapshot. Compare them using the "Comparison" view to see which objects accumulated.
Step 3: Use Allocation Timeline. Record an allocation timeline while reproducing the leak. Blue bars that persist represent memory that was allocated but never freed.
Step 4: Trace retainers. Click on a leaking object to see its retainer tree — the chain of references keeping it alive. Common culprits:
// Forgotten event listeners
window.addEventListener('resize', this.handleResize);
// Fix: remove in cleanup
// Closures capturing large scope
setInterval(() => { /* captures outer variables */ }, 1000);
// Fix: clear the interval
// Detached DOM trees
const cached = document.querySelector('.modal');
cached.remove(); // removed from the DOM, but still referenced by 'cached'
For detached DOM nodes specifically, use the DevTools "Detached Elements" panel or search for "Detached" in heap snapshots. The retainer path shows what's keeping the node alive.
Tree-shaking is a dead code elimination technique used by bundlers (Webpack, Rollup, esbuild) to remove unused exports from the final bundle. It relies on the static structure of ES module import/export statements, which can be analyzed at build time.
// utils.js
export function used() { return 'needed'; }
export function unused() { return 'dead code'; }
// app.js
import { used } from './utils.js';
console.log(used());
// unused() is tree-shaken out of the bundle
Tree-shaking only works with ESM because import/export are statically analyzable. CommonJS require() is dynamic and opaque to static analysis.
Things that prevent tree-shaking: side effects at module scope (code that runs on import), dynamic imports via computed strings, re-exporting everything (export * from), and CommonJS modules. The "sideEffects": false field in package.json tells the bundler that all files in the package are side-effect-free, enabling more aggressive elimination. You can also list specific files with side effects: "sideEffects": ["./src/polyfills.js", "*.css"].
Source maps are files that map minified/bundled code back to the original source code, enabling meaningful debugging in browser DevTools even when running optimized production builds.
A source map (.map file) contains the list of original source file names, the mappings between generated and original positions (encoded as Base64 VLQs), and optionally the original source content itself. The bundled file includes a comment linking to the map:
// app.min.js
var a=function(){console.log("hello")};
//# sourceMappingURL=app.min.js.map
In production, you have several strategies:
- Hidden source maps: Generate maps but don't include the sourceMappingURL comment. Upload maps to your error monitoring service (Sentry) for symbolicated stack traces, but don't expose them to users.
- Restricted source maps: Serve maps only to authenticated/internal users via server-side access control.
- No source maps in production: Only generate them in development (simplest but limits production debugging).
Source maps add to build time and storage but have zero runtime cost unless DevTools are open (the browser only fetches the map when needed). They support CSS and other languages too, not just JavaScript.
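As one concrete example, webpack's devtool option maps directly onto the "hidden" strategy described above (a config sketch; other bundlers expose equivalent flags):

```javascript
// webpack.config.js, sketching the hidden-source-map strategy
module.exports = {
  mode: 'production',
  // Emits bundle.js.map but omits the //# sourceMappingURL comment from the
  // bundle, so browsers never discover the map. Upload the map to your error
  // monitoring service instead for symbolicated production stack traces.
  devtool: 'hidden-source-map',
};
```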