JavaScript Interview Questions
Practice frequently asked JavaScript interview questions from beginner to advanced levels. Use the filters below to focus your preparation by difficulty.
A programming language is your way of talking to a computer. You write instructions in a specific syntax, and the computer follows them. Each language has its own strengths - Python is great for data work, JavaScript dominates the web, and C++ is the go-to for performance-heavy stuff like games. The key thing interviewers want to hear is that you understand why different languages exist for different jobs.
Front-end is everything the user sees and clicks on. When you open a website and see buttons, text, images, and animations - that's all front-end. It's built with HTML for structure, CSS for styling, and JavaScript for making things interactive. If it runs in the browser, it's front-end.
Back-end is the behind-the-scenes part users never see. It handles the server, the database, and the business logic that makes an app actually work. When you log in, your credentials get verified on the back-end. When you post a comment, the back-end saves it. It powers everything the front-end displays.
HTML (Hypertext Markup Language) gives web pages their structure. Think of it as the skeleton of every website. You use tags to tell the browser "this is a heading," "this is a paragraph," "this is an image." Without HTML, a browser wouldn't know how to organize content on the screen.
CSS (Cascading Style Sheets) controls how HTML elements look. Colors, fonts, spacing, layouts, animations - that's all CSS. HTML builds the structure, and CSS makes it look good. The "cascading" part means styles can override each other based on specificity, which is a common gotcha for beginners.
JavaScript is the language that makes websites come alive. Click a button and a dropdown appears? That's JavaScript. Submit a form without the page reloading? JavaScript again. It handles everything interactive on the web.
What makes it special is that with Node.js, you can also run JavaScript on the server. So you can build a full application, front-end and back-end, using just one language. That's a huge productivity win.
| JavaScript | Java |
|---|---|
| Primarily for front-end web development and browser-based apps | Used for enterprise server-side apps and Android development |
| Dynamically typed - no type declarations needed | Statically typed - types declared at compile time |
| Interpreted with JIT compilation in browsers | Compiled to bytecode, then runs on the JVM |
| Single-threaded, uses async patterns (event loop) | Natively supports multi-threading |
| Runs anywhere there's a browser or Node.js | Runs on any platform with a JVM installed |
| Powers front-end frameworks, Node.js servers, full-stack apps | Enterprise backends, Android apps, big data pipelines |
| Huge ecosystem: React, Vue, Angular, Express | Mature ecosystem: Spring Boot, Hibernate, Maven |
Yes, JavaScript is case-sensitive, and this trips up beginners constantly. myVariable and myvariable are two completely different identifiers. This applies to variable names, function names, and even built-in methods. Writing getElementById as getElementByID will throw a TypeError, because the misspelled name doesn't exist. Always double-check your casing.
let myVariable; // one variable
let myvariable; // a second, completely unrelated variable
Think of building a house: HTML is the walls and rooms (structure), CSS is the paint and furniture (styling), and JavaScript is the electricity and plumbing (behavior). You need all three working together. HTML alone gives you a plain document. Add CSS and it looks great. Add JavaScript and it becomes interactive.
Brendan Eich wrote JavaScript in May 1995, working at Netscape - under a tight deadline that gave him just 10 days to build the first version. It shipped initially as "LiveScript" and was renamed just before release, partly to capitalize on the buzz around Java at the time. By 1997 it had a formal specification called ECMAScript. Since then it has evolved yearly, growing from a browser scripting tool into a language that now powers servers (Node.js), mobile apps, and even machine learning workloads.
ECMAScript is the official specification that defines how JavaScript should work. Think of it as the rulebook. JavaScript is the most popular implementation of that rulebook. When someone says "ES6 features," they're referring to the ECMAScript 2015 specification. Other implementations exist too, like JScript (old IE), but JavaScript is the one that stuck.
JavaScript follows the ECMAScript specification, and new versions ship yearly. The big ones to know about are ES5 (2009) which gave us widespread browser support, and ES6 (2015) which was the biggest upgrade in the language's history. Here's the full list:
- ECMAScript 1 (1997) - the beginning
- ECMAScript 2 (1998)
- ECMAScript 3 (1999) - regex, try/catch added
- ECMAScript 4 (never released - too ambitious)
- ECMAScript 5 (2009) - JSON, strict mode, array methods
- ECMAScript 6 (2015), also known as ES2015 - the game changer
- ECMAScript 7 (2016), also known as ES2016
- ECMAScript 8 (2017), also known as ES2017
- ECMAScript 9 (2018), also known as ES2018
- ECMAScript 10 (2019), also known as ES2019
- ECMAScript 11 (2020), also known as ES2020
- ECMAScript 12 (2021), also known as ES2021
ES5 (2009) was when JavaScript got serious - it added strict mode, JSON support, and array methods like forEach. ES6 (2015) was the biggest overhaul ever, introducing arrow functions, let/const, classes, template literals, destructuring, and modules. Most modern JavaScript you write today is ES6+. If a job posting mentions "modern JavaScript," they mean ES6 and beyond.
Client-side JavaScript runs in the user's browser. It handles UI interactions, DOM manipulation, animations, and form validation. It's what makes a page feel responsive without a full reload.
Server-side JavaScript (typically Node.js) runs on the server. It handles API requests, database queries, authentication, and file operations. The key difference? Client-side code is visible to anyone (open DevTools and you can see it), while server-side code stays private. Never put secrets in client-side JavaScript.
- High-Level Language: You write readable code, not raw machine instructions.
- Garbage Collected: Memory is cleaned up automatically - no manual free() calls.
- Interpreted (JIT): Code gets compiled on-the-fly by the engine (V8, SpiderMonkey) for speed.
- Multi-Paradigm: You can write OOP, functional, or procedural code - your choice.
- Prototype-Based: Objects inherit from other objects directly, not from classes (classes are syntactic sugar).
- First-Class Functions: Functions are values - assign them to variables, pass them around, return them.
- Dynamically Typed: Variables can hold any type, and types are checked at runtime, not compile time.
- Single Threaded: One call stack, one thing happening at a time.
- Non-Blocking Event Loop: Async operations (network, timers) don't freeze the main thread.
- Platform Independent: Runs in any browser, on any OS with Node.js, and even on IoT devices.
JavaScript is dynamically typed. You don't declare types when creating variables - JavaScript figures out the type at runtime. A variable can hold a string one moment and a number the next. This gives you flexibility but can cause sneaky bugs. That's exactly why TypeScript (which adds static types on top of JavaScript) has become so popular.
In a single-threaded environment, there's only one call stack, so code runs one line at a time. JavaScript processes one operation, finishes it, then moves to the next. This makes things simpler to reason about (no race conditions or deadlocks), but it means a heavy computation can freeze the entire page. That's why long-running tasks should be handled asynchronously or offloaded to Web Workers.
The secret to JavaScript's speed despite being single-threaded is the event loop. When you hit something slow - a fetch call, a setTimeout - JavaScript registers a callback and immediately moves on to the next line. It does not wait. When the slow operation finishes, the callback is placed in a queue and runs once the main thread is free. This is what keeps the main thread available for other work instead of sitting idle. The flip side: a long-running synchronous loop will freeze everything, since there is only one thread.
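A minimal sketch of that non-blocking behavior - the timer callback is registered, then the engine moves straight on without waiting:

```javascript
const order = [];

// Registered now, runs later - even with a 0ms delay.
setTimeout(() => order.push("timer callback"), 0);

order.push("first synchronous line");
order.push("second synchronous line");

// All synchronous code finishes before the timer callback gets a turn.
console.log(order); // ["first synchronous line", "second synchronous line"]
```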
In JavaScript, naming variables has a few rules you need to follow, and breaking them will cause errors:
- 1. Must start with a letter, underscore (_), or dollar sign ($). Starting with a number is a syntax error.
- 2. Can only contain letters, digits, underscores, or dollar signs after the first character.
- 3. Case-sensitive: name and Name are different variables.
- 4. Can't use reserved words like let, class, or return as variable names.
- 5. Use descriptive names. userAge beats x every time. Your future self (and your teammates) will thank you.
Variables are named containers that hold data. You create them with var, let, or const. Use const by default (for values that won't be reassigned), let when you need to reassign, and avoid var in modern code since its function-scoping causes bugs. Once declared, you can store numbers, strings, arrays, objects, or anything else, then read, update, and pass those values around your program.
'Data' is the broad term for any information your program works with - user input, API responses, config settings, whatever. A 'value' is one specific piece of that data at a particular moment. For example, a user's profile is data. The string "John" stored in user.name is a value. Data is the concept, values are the concrete things your code actually manipulates.
You'll notice you never write "use strict" at the top of your React or Angular files. That's because ES6 modules automatically run in strict mode. Since both frameworks use import/export syntax everywhere, strict mode is already on by default. Adding it manually would just be redundant. This is a good interview detail - it shows you understand how modules work under the hood.
A data type tells JavaScript what kind of value you're working with. Is it a number? A piece of text? True or false? The data type determines what operations you can perform on it and how much memory it takes up. You can't do math on a string or call .toUpperCase() on a number. Getting the data type wrong is one of the most common sources of bugs in JavaScript.
JavaScript has seven primitive data types (the simple, immutable ones):
- 1. Boolean
- 2. Null
- 3. Undefined
- 4. Number
- 5. BigInt (added in ES2020)
- 6. String
- 7. Symbol
And two non-primitive (reference) types that can hold collections of data:
- 1. object
- 2. function
In JavaScript, a symbol is a primitive type introduced in ES6 that creates a guaranteed unique identifier. The main use case? Adding properties to objects without risking name collisions. Every Symbol() call creates a brand new, unique value - even Symbol("id") === Symbol("id") is false. You'll see them used in libraries and frameworks to add "hidden" properties to objects.
const mySym = Symbol("mySym"); // the optional description is just a debugging label
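A short sketch of the collision-proof-key use case (the property names are illustrative):

```javascript
// A Symbol used as a property key can never clash with a string key.
const id = Symbol("id");

const user = { name: "John", id: "string-id" };
user[id] = 12345; // lives alongside the string key "id" without conflict

console.log(user[id]);          // 12345
console.log(user.id);           // "string-id" - untouched
console.log(Object.keys(user)); // ["name", "id"] - symbol keys stay hidden
console.log(Symbol("id") === Symbol("id")); // false - always unique
```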
BigInt lets you work with integers beyond JavaScript's safe number limit (2^53 - 1). Regular numbers lose precision past that point, which can cause real bugs in financial or scientific apps. You create a BigInt by appending "n" to a number literal or calling the BigInt() constructor. One gotcha: you can't mix BigInt and regular numbers in the same expression without explicit conversion.
const bigNumber = BigInt(35445565654656);
const anotherBigNumber = 3454354543543543n;
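The mixing gotcha in action, as a minimal sketch:

```javascript
const big = 9007199254740993n; // one past Number.MAX_SAFE_INTEGER

// Mixing BigInt and Number in one expression throws:
let errorName = "";
try {
  const sum = big + 1; // a regular Number, not 1n
  console.log(sum);
} catch (err) {
  errorName = err.name;
}
console.log(errorName); // "TypeError"

// Convert explicitly instead:
console.log(big + 1n);        // 9007199254740994n
console.log(big + BigInt(1)); // 9007199254740994n
```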
| Primitive Data Types | Non-Primitive Data Types |
|---|---|
| Hold a single value directly in the variable | Hold a reference (pointer) to the actual data in memory |
| Types: number, string, boolean, null, undefined, symbol, bigint | Types: objects, arrays, functions, dates, regex |
| Immutable - you can't change the value itself, only reassign | Mutable - you can change properties and contents in place |
| Stored on the stack (fast access) | Stored on the heap (flexible size) |
| Compared by their actual value | Compared by reference - two identical objects are not equal unless they're the same object |
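The last row of the table - value vs. reference comparison - is easy to see in a few lines:

```javascript
// Primitives: compared by value.
console.log("hi" === "hi"); // true

// Objects: compared by reference - same contents, different objects.
const a = { x: 1 };
const b = { x: 1 };
console.log(a === b); // false

// Assignment copies the reference, not the data,
// so mutations are visible through both names.
const c = a;
c.x = 99;
console.log(a.x);     // 99
console.log(a === c); // true - same object in memory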
This is a classic JavaScript trick question. typeof NaN returns "number". Yes, "Not a Number" is technically a number. It exists because NaN is the result of failed numeric operations (like 0/0 or parseInt("hello")), and the IEEE 754 floating-point spec defines it as part of the number type. Another gotcha: NaN !== NaN, so use Number.isNaN() to check for it.
console.log(typeof NaN); // "number"
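The self-inequality gotcha and the safe check, as a runnable sketch:

```javascript
console.log(typeof NaN);          // "number"
console.log(NaN === NaN);         // false - NaN never equals anything, even itself
console.log(Number.isNaN(0 / 0)); // true  - the reliable check
console.log(Number.isNaN("abc")); // false - no coercion happens
console.log(isNaN("abc"));        // true  - the global isNaN coerces first (gotcha)
```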
Another famous JavaScript quirk: typeof null returns "object". This is actually a bug from the very first version of JavaScript in 1995, and it was never fixed because too much existing code depends on it. In reality, null is a primitive value that means "intentionally empty" or "no value." If you need to check for null specifically, use === null instead of typeof.
console.log(typeof null); // "object"
Infinity is a special numeric value in JavaScript that represents a number beyond the largest representable value. You'll see it when you divide by zero (10/0 gives Infinity) or when a calculation overflows. There's also -Infinity for the opposite direction. A practical tip: you can use Infinity as an initial value when searching for a minimum in an array, since any real number will be smaller.
console.log(10/0); //Infinity
High-level languages like JavaScript and Python hide hardware details and let you focus on solving problems quickly. But sometimes you need direct control over memory and CPU.
Low-level languages like C and Assembly shine in embedded systems (think microcontrollers in your car), operating system kernels, device drivers, and anywhere performance is critical - like game engines or real-time audio processing. When every microsecond counts, you want low-level control.
High-level languages (JavaScript, Python, Ruby) let you write code that reads almost like English. You don't worry about memory allocation or CPU registers. Trade-off? Slightly less performance.
Low-level languages (C, Assembly) give you direct control over hardware and memory. The code is harder to write and read, but you get maximum performance and minimal overhead. The quick test: if the language manages memory for you, it's high-level. If you're calling malloc yourself, it's low-level.
| Front-end | Back-end |
|---|---|
| What the user sees and interacts with directly | The server logic, databases, and APIs users never see |
| Technologies: HTML, CSS, JavaScript, React, Vue | Technologies: Node.js, Python (Django), Ruby (Rails), Go |
| Builds the UI - buttons, forms, layouts, animations | Handles business logic, authentication, data storage |
| Runs in the user's browser | Runs on a remote server or cloud |
| Focuses on design, speed, and responsive UX | Focuses on security, data integrity, and scalability |
Developer tools are how you debug, profile, and build efficiently. You'll use them every single day as a developer. Here are the ones that matter most:
- 1. Chrome DevTools:
  - Inspect and live-edit HTML/CSS in real time.
  - Set breakpoints and step through JavaScript.
  - Profile network requests and find performance bottlenecks.
- 2. Visual Studio Code:
  - IntelliSense gives you smart autocomplete as you type.
  - Built-in debugger with breakpoints and watch expressions.
  - Thousands of extensions for any framework or language.
- 3. Webpack:
  - Bundles your JS, CSS, and assets into optimized files for production.
Without standardization, every browser would implement JavaScript differently, and your code would break unpredictably across Chrome, Firefox, and Safari. That was actually a real problem in the early web days.
The ECMA International organization maintains the ECMAScript specification, which defines exactly how JavaScript should behave - its syntax, built-in objects, error handling, and more. This spec is why Array.map() works the same way in every modern browser. Browser vendors implement the spec, and the TC39 committee proposes new features through a staged process before they become official.
Each browser uses a different JavaScript engine (V8 in Chrome, SpiderMonkey in Firefox, JavaScriptCore in Safari), and they don't always implement features at the same pace. Older browsers might not support newer syntax at all. The real-world fixes include: feature detection (check before you use), polyfills (add missing features), transpilers like Babel (convert modern code to older syntax), and thorough cross-browser testing.
Dealing with JavaScript compatibility is a daily reality for web developers. Here's your toolkit:
Polyfills and transpilers solve two different problems when it comes to JavaScript compatibility :
- Polyfills: JavaScript snippets that re-implement a missing feature at runtime so older browsers can use it - for example, a Promise polyfill for browsers that predate ES6.
- Transpilers (e.g., Babel): build-time tools that convert modern syntax (arrow functions, classes, optional chaining) into equivalent older syntax every target browser already understands.
These are two ways to handle JavaScript compatibility:
- Feature Detection: check whether a specific feature exists before using it (e.g., if ('fetch' in window)). Robust, because it tests the actual capability rather than guessing.
- Browser Detection: sniff the user-agent string to guess which browser is running and branch on it. Fragile and generally discouraged - user agents can be spoofed and change over time.
Garbage collection is JavaScript's way of automatically freeing up memory you're no longer using. You create objects and variables, and when nothing references them anymore, the garbage collector cleans them up. The main algorithm is "mark-and-sweep" - it marks all reachable objects starting from the root, then sweeps away everything unmarked. You don't manage memory manually, but be careful with closures, global variables, and forgotten event listeners - they can keep references alive and cause memory leaks.
Interpreted languages like JavaScript get read and executed line-by-line at runtime. Compiled languages (like C or Go) get converted to machine code before execution, which makes them faster but adds a build step. Here's the twist: modern JavaScript engines like V8 use Just-In-Time (JIT) compilation. They interpret code first, then identify "hot" functions that run frequently and compile those into optimized machine code on the fly. So JavaScript is technically both interpreted and compiled - a hybrid approach that gives you flexibility with surprisingly good performance.
JavaScript doesn't force you into one coding style. You can use multiple programming paradigms depending on what fits your problem. Need classes and inheritance? Go object-oriented. Want pure functions and immutability? Use functional programming. Quick script that runs top-to-bottom? Procedural works fine. Most real-world JavaScript mixes these styles. React, for example, leans heavily into functional patterns with hooks, while Angular uses OOP-style classes.
In JavaScript, prototype-based inheritance means objects inherit directly from other objects through a prototype chain. There are no real classes under the hood (the class keyword is just syntactic sugar). When you access a property that doesn't exist on an object, JavaScript walks up the prototype chain to find it. In Java or C++, inheritance is class-based - you define a rigid class hierarchy at compile time. JavaScript's approach is more flexible since you can modify prototypes at runtime, but it can be confusing if you're coming from a classical OOP background.
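A minimal sketch of that prototype-chain lookup (the object names are illustrative):

```javascript
const animal = {
  describe() {
    return this.name + " makes a sound";
  },
};

// Create an object whose prototype is `animal`:
const dog = Object.create(animal);
dog.name = "Rex";

// `describe` isn't on `dog` itself - it's found by walking up the chain:
console.log(dog.describe()); // "Rex makes a sound"
console.log(Object.getPrototypeOf(dog) === animal); // true
console.log(dog.hasOwnProperty("describe"));        // false - inherited, not own
```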
First-class functions means functions in JavaScript are treated like any other value. You can store them in variables, pass them as arguments, and return them from other functions. This is a big deal - it's not something every language supports.
This enables powerful patterns: callbacks (passing functions to handle events), higher-order functions (functions that take or return other functions), and array methods like .map() and .filter(). It's the foundation of functional programming in JavaScript and makes your code more composable and reusable.
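All three abilities in one short sketch - stored, passed, and returned:

```javascript
// 1. Stored in a variable
const double = (n) => n * 2;

// 2. Passed as an argument (a higher-order function takes it)
const applyTwice = (fn, value) => fn(fn(value));
console.log(applyTwice(double, 5)); // 20

// 3. Returned from another function
const makeAdder = (amount) => (n) => n + amount;
const addTen = makeAdder(10);
console.log(addTen(7)); // 17

// The same idea powers the array methods mentioned above:
console.log([1, 2, 3].map(double)); // [2, 4, 6]
```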
With JavaScript's dynamic typing, you never declare types - a variable can hold a string, then a number, then an object. The engine figures out types at runtime. In statically typed languages like Java or TypeScript, you declare types upfront and the compiler catches mismatches before your code runs. JavaScript's approach means faster prototyping but more runtime surprises. Classic gotcha: "5" + 3 gives "53" (string concatenation), while "5" - 3 gives 2 (numeric subtraction). That's dynamic typing in action.
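Both behaviors in a runnable sketch:

```javascript
let value = "5";           // starts life as a string
console.log(typeof value); // "string"

value = 5;                 // now a number - the engine doesn't object
console.log(typeof value); // "number"

// The classic coercion gotcha:
console.log("5" + 3); // "53" - + concatenates when a string is involved
console.log("5" - 3); // 2    - "-" is numbers-only, so "5" is coerced
```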
JavaScript's single-threaded nature means it has one call stack and executes one piece of code at a time. If a function takes 5 seconds to compute, everything else waits - the UI freezes, clicks don't register, animations stall. That's why JavaScript relies on asynchronous patterns: the event loop, callbacks, Promises, and async/await let you offload slow operations (network requests, timers) so the main thread stays responsive. For truly CPU-heavy work, Web Workers give you separate threads that run in the background without blocking the page.
Picture a restaurant with one waiter (the call stack). When a customer orders something slow to prepare, the waiter doesn't just stand there - they take other orders. When the kitchen finishes, that dish goes into a queue. The waiter grabs it when free. That's your event loop - it constantly checks if the call stack is empty, then pulls the next callback from the task queue. This is how JavaScript handles async work like network requests and timers without freezing your page. Gotcha: microtasks (Promises) get priority over macrotasks (setTimeout), which trips people up in interviews.
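That microtask-vs-macrotask gotcha is easy to demonstrate in a few lines:

```javascript
const order = [];

setTimeout(() => order.push("macrotask (setTimeout)"), 0);
Promise.resolve().then(() => order.push("microtask (Promise)"));
order.push("synchronous");

// Once the call stack is empty, the microtask queue drains first:
setTimeout(() => {
  console.log(order);
  // ["synchronous", "microtask (Promise)", "macrotask (setTimeout)"]
}, 10);
```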
JavaScript's platform independence is one of its biggest selling points. Write your code once, and it runs in Chrome on Windows, Safari on a Mac, Node.js on a Linux server, or even a mobile app with React Native. You don't need to recompile or port anything. This is why one language can power your frontend, backend, CLI tools, and even desktop apps. For teams, it means fewer context switches and shared code between layers.
console.log() is your everyday debugging buddy - it prints general info to the console. But the console object has more targeted tools: console.error() shows messages in red, making errors jump out immediately. console.warn() uses yellow for things that aren't broken yet but could be. The practical difference matters in production logging too, since log aggregators can filter by severity level. Pro tip: console.table() is great for arrays and objects when you want a quick visual.
console.log() shines when you need to peek inside your code while it runs. Drop it before and after a suspicious function to track variable values, confirm which branch of an if-statement executed, or inspect the shape of an API response. It's especially handy with async code where the execution order might surprise you. That said, don't ship console.log calls to production - use a proper logger instead, and lean on browser DevTools breakpoints for complex debugging sessions.
document.write() injects content straight into the HTML document as it loads. Here's the big gotcha: if you call it after the page finishes loading, it wipes out everything on the page and replaces it with your content. That alone makes it dangerous for real apps. It also blocks HTML parsing, can cause race conditions with async scripts, and doesn't play well with modern rendering. You might see it in old tutorials or quick throwaway demos, but in production code, use DOM methods like createElement() or textContent instead.
Honestly, almost never in modern development. The only semi-legitimate use case is during initial page load for things like third-party ad scripts that need to inject content synchronously. Some legacy testing setups also use it for quick output. But for anything real, methods like innerHTML or appendChild() give you precise control over what changes and when, without the risk of accidentally nuking your entire page. If you see document.write() in a codebase, it's usually a sign the code needs updating.
The innerHTML property lets you read or replace the HTML content inside any element. It's like swapping out the guts of a DOM node in one shot. Quick and convenient for simple updates, but there are real traps: inserting unsanitized user input opens the door to XSS attacks, and replacing innerHTML destroys the existing child nodes along with their event listeners.
innerHTML is the quick-and-dirty approach: hand it an HTML string and the browser parses and renders it in one go. createElement() and appendChild() are the surgical approach: you build each node by hand, attach event listeners, and insert them precisely where you want. Use innerHTML when you're replacing a chunk of static content. Use createElement() when you need fine control, like building a dynamic list where each item has click handlers. The big difference? innerHTML wipes out existing event listeners on child elements, while appendChild() preserves them.
Every value in JavaScript has a hidden boolean personality. When you drop a value into an if statement, JavaScript secretly converts it to truthy or falsy.
The falsy list is short and worth memorizing: false, 0, "" (empty string), null, undefined, and NaN. Everything else is truthy - including empty arrays and empty objects, which surprises a lot of people. So if ([]) is true, even though the array has nothing in it.
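The whole falsy list, plus the surprising truthy cases, in one sketch:

```javascript
const samples = [false, 0, "", null, undefined, NaN, [], {}, "0"];

console.log(samples.map(Boolean));
// [false, false, false, false, false, false, true, true, true]
// Empty arrays, empty objects, and the string "0" are all truthy.
```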
Type coercion is JavaScript quietly converting values behind your back to make an operation work. When you write "5" + 3, JS turns 3 into a string and gives you "53". That's implicit coercion - the engine decides on its own. Explicit coercion is when you do it yourself with Number("5") or String(42). The key interview insight: implicit coercion is the root cause of most == vs === confusion, and why experienced devs almost always prefer strict equality.
Manual type conversion is when you explicitly tell JavaScript to change a value's type, rather than letting the engine guess. You use built-in functions like Number(), String(), Boolean(), parseInt(), or parseFloat(). This makes your intent clear to anyone reading the code. For example, Number("42") gives you 42 as a number, and String(true) gives you "true". Always prefer this over relying on implicit coercion - your future self will thank you when debugging.
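The conversion functions side by side, including the parseInt/Number difference worth knowing:

```javascript
console.log(Number("42"));          // 42
console.log(String(true));          // "true"
console.log(Boolean(""));           // false

// parseInt/parseFloat read as many leading digits as they can:
console.log(parseInt("42px", 10));  // 42
console.log(parseFloat("3.14abc")); // 3.14

// Number() is stricter - the whole string must be numeric:
console.log(Number("42px"));        // NaN
```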
Identifiers are simply the names you give things in your code - variables, functions, classes, parameters. Think of them as labels you stick on boxes so you can find them later. Every time you write let userName or function calculateTotal(), those names are identifiers. Choosing clear, descriptive identifiers is one of the simplest ways to make your code readable without needing comments.
JavaScript has a few hard rules for identifiers : they must start with a letter, underscore, or dollar sign (never a number), they're case-sensitive, and they can't be reserved keywords. Beyond the rules, conventions matter: use camelCase for variables and functions, PascalCase for classes, and UPPER_SNAKE_CASE for constants. Following these patterns makes your code instantly recognizable to other JavaScript developers.
Reserved keywords are words that JavaScript has already claimed for its own grammar - things like if, return, class, let, and const. You can't use them as variable or function names because the engine would get confused about whether you mean the keyword or your variable. A common trip-up: class is reserved even in contexts where you're not using classes, and some words like await are only reserved inside async functions.
Constants are declared with const, and JavaScript won't let you reassign them. You must give a constant a value right away - no declaring now and assigning later. This is your go-to for values that shouldn't change, like API URLs, configuration values, or mathematical constants. A good rule of thumb: start with const for everything, and only switch to let when you actually need to reassign.
Before ES6, we only had var, and it caused all sorts of scoping headaches. let was introduced to fix that by giving you block-level scope - a variable declared with let inside an if-block or for-loop stays inside that block.
let also behaves differently with hoisting. While var gets hoisted and initialized as undefined (so you can access it before declaration), let gets hoisted but stays in a "temporal dead zone" until execution reaches the declaration line. Access it too early and you get a ReferenceError instead of a silent undefined - which is actually better because it catches bugs early.
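The hoisting difference as a runnable sketch - var gives a silent undefined, let throws loudly:

```javascript
// var: hoisted AND initialized to undefined - no error, just a silent undefined.
console.log(hoistedVar); // undefined
var hoistedVar = "assigned";

// let: hoisted but stuck in the temporal dead zone until its declaration line.
let result = "";
try {
  console.log(hoistedLet); // accessed too early
} catch (err) {
  result = err.name;
}
let hoistedLet = "assigned";

console.log(result); // "ReferenceError"
```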
var is the original way to declare variables in JavaScript, and understanding its quirks helps you debug legacy code and ace interview questions.
var has function scope, not block scope. Declare a var inside an if-block and it leaks out into the surrounding function. It also hoists to the top of its scope and initializes as undefined, which means you can use a var before the line where you declared it without getting an error - you'll just get undefined. This silent behavior is a classic bug factory. The classic interview example: a for-loop with var and setTimeout where every iteration prints the same final value. Use let or const in modern code to avoid these traps.
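The classic loop interview example, runnable:

```javascript
// var: one shared, function-scoped i - every callback sees the final value.
const withVar = [];
for (var i = 0; i < 3; i++) {
  setTimeout(() => withVar.push(i), 0);
}

// let: a fresh binding per iteration - each callback captures its own j.
const withLet = [];
for (let j = 0; j < 3; j++) {
  setTimeout(() => withLet.push(j), 0);
}

setTimeout(() => {
  console.log(withVar); // [3, 3, 3]
  console.log(withLet); // [0, 1, 2]
}, 10);
```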
const tells JavaScript "this binding won't change." You must assign a value immediately, and you can't reassign it later. This is why most style guides recommend const as your default choice - it signals intent clearly.
Here's the biggest misconception about const: it does NOT make values immutable. It only prevents reassignment of the variable itself. So a const object can still have its properties changed, and a const array can be pushed to. You just can't point the variable at a completely different object or array. If you need true immutability, look into Object.freeze().
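Mutation vs. reassignment, plus Object.freeze(), in one sketch:

```javascript
const user = { name: "John" };

user.name = "Jane";     // fine: const blocks reassignment, not mutation
console.log(user.name); // "Jane"

// user = { name: "Bob" }; // TypeError: Assignment to constant variable.

// For (shallow) immutability, freeze the object:
Object.freeze(user);
try {
  user.name = "Bob"; // silently ignored in sloppy mode, throws in strict mode
} catch (err) {
  // strict mode lands here with a TypeError
}
console.log(user.name); // "Jane" - the frozen object rejected the write
```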
Alert boxes are the blunt instrument of user communication. They block everything - your script pauses, and the user can only click OK. They also can't be styled, and they look different in every browser, which is what makes them painful in real apps.
The confirm() box is like alert()'s older sibling - it shows a message but gives the user two buttons: "OK" and "Cancel." It returns true or false based on their choice. You'll see it used for "Are you sure you want to delete this?" type prompts. Unlike alert(), it actually captures a decision from the user. In production apps, custom modals are preferred because confirm() blocks execution and you can't customize the button text or styling.
window.alert() pops up a browser-native dialog with your message and an OK button. The catch? It completely freezes your script and the page until the user dismisses it. That's why it's mostly used for quick debugging during development or critical warnings. Overusing alerts trains users to click OK without reading, which defeats the purpose. For anything user-facing in production, use toast notifications or modals instead.
window.alert() wins on simplicity: zero setup, works everywhere, guaranteed to grab attention. But that's about where the advantages end.
The downsides are real: it blocks all JavaScript execution, you can't style it, you can't add custom buttons, and it looks different on every browser. Custom modals and toast notifications take more effort to build, but they integrate with your design, support rich content, don't freeze the page, and let users keep interacting. For anything beyond a quick dev-time debug message, custom UI wins every time. The tradeoff is development time vs. user experience.
The prompt() box is the only built-in dialog that collects text input from the user. It shows a message, a text field, and OK/Cancel buttons. Click OK, it returns the typed string. Click Cancel, it returns null. You can also pass a default value as the second argument. Quick tip: always check for null before using the return value, since users can cancel. Like alert() and confirm(), it blocks execution, so use HTML form inputs for real applications.
Operators are the verbs of your code. They do things to values: add them, compare them, assign them, check conditions. Without operators, your variables would just sit there doing nothing. They range from the familiar math symbols (+, -, *) to logical checks (&&, ||) and even structural ones like typeof and instanceof. Understanding operators is the foundation for writing any expression in JavaScript.
An operand is the value that an operator works on. In a + b, both a and b are operands and + is the operator. Think of it like a sentence: the operator is the verb, and operands are the nouns it acts on.
Some operators take one operand (unary, like !true), most take two (binary, like 5 + 3), and the ternary operator takes three. Knowing this terminology helps when reading documentation or error messages.
let a = 5; // Operand (variable)
let b = 3; // Operand (variable)
let result = a + b; // Operator (+) acts on 'a' and 'b' to produce a result
console.log(result); // Output will be 8 (sum of 'a' and 'b')
Arithmetic operators are your basic math toolkit: addition (+), subtraction (-), multiplication (*), division (/), modulus (%), and exponentiation (**). The one that trips people up is +, because JavaScript overloads it for string concatenation too. So 5 + "3" gives you "53", not 8. Also worth knowing: modulus (%) is handy for checking even/odd numbers or cycling through arrays.
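A quick sketch of those gotchas in action (the values are just illustrative):

```javascript
const sum = 5 + 3;       // 8 - both operands are numbers
const concat = 5 + "3";  // "53" - the number is coerced to a string

// % makes even/odd checks trivial
const isEven = (n) => n % 2 === 0;

const pow = 2 ** 10;     // exponentiation: 1024
```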
Comparison operators ask questions about values and always answer with true or false. You've got greater than (>), less than (<), greater-or-equal (>=), less-or-equal (<=), loose equality (==), strict equality (===), and their "not" versions. These are the backbone of every if statement, while loop, and for condition you'll ever write. The biggest interview topic here is == vs === - stick with === unless you have a specific reason not to.
== does a loose comparison that tries to convert types first, which leads to weird results like "0" == false being true. === is strict - it checks both value and type with no conversion, so "5" === 5 is false. In real projects, always default to ===. The only common exception is checking null == undefined (which is true with ==) as a shortcut for checking both at once.
console.log(5 == "5"); // true
console.log(5 === "5"); // false
console.log(null == undefined); // true
console.log(null === undefined); // false
Assignment operators put values into variables. The basic = just assigns, but the compound ones like +=, -=, *=, /= do math and assign in one step. So x += 5 is shorthand for x = x + 5. They exist for every arithmetic operator plus a few others like &&= and ||= (added in ES2021). Using them makes your code shorter, but don't sacrifice readability for cleverness - x = x + 5 is perfectly fine if it's clearer in context.
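Here's a short sketch of the compound and logical-assignment forms (variable names are just for illustration):

```javascript
let x = 10;
x += 5;           // shorthand for x = x + 5, so x is now 15
x *= 2;           // now 30

// Logical assignment (ES2021) only assigns in certain cases:
let retries = 0;
retries ??= 3;    // ??= assigns only if current value is null/undefined; 0 stays 0

let name;
name ||= "guest"; // ||= assigns when current value is falsy; undefined becomes "guest"
```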
= (Assignment Operator): This puts a value into a variable. x = 5; stores 5 in x. It's not asking a question - it's giving an order.
== (Equality Operator): This asks "are these loosely equal?" and lets JavaScript convert types to make it work. So 5 == '5' returns true because JS converts the string to a number first. This is where bugs love to hide.
=== (Strict Equality Operator): This asks "are these exactly the same value AND type?" No conversions, no surprises. 5 === '5' returns false because number and string are different types. This is the one you should use by default. A common interview test: mixing these up in conditions is one of the most frequent JavaScript bugs in code reviews.
Logical operators combine or flip boolean values. AND (&&) returns true only when both sides are true - great for "this AND that must be valid." OR (||) returns true if either side is true - perfect for fallbacks. NOT (!) flips true to false and vice versa. Here's the real-world trick: logical operators in JavaScript don't always return true/false. They return the actual value that determined the result. That's why name || "Anonymous" works as a default value pattern - if name is falsy, you get "Anonymous" back.
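A sketch of that value-returning behavior:

```javascript
const name = "";
const displayName = name || "Anonymous"; // "" is falsy, so || returns "Anonymous"

const result = 2 && "hi"; // 2 is truthy, so && returns the second operand: "hi"

const flipped = !0;       // ! is the exception - it always returns a real boolean
```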
The ternary operator (? :) is JavaScript's only operator that takes three operands. It's a compact if-else on a single line: condition ? valueIfTrue : valueIfFalse. Great for simple assignments like const status = age >= 18 ? "adult" : "minor". But resist the urge to nest them - a ternary inside a ternary inside a ternary is a code review nightmare. If the logic is complex, just use a regular if-else block.
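For example:

```javascript
const age = 20;
const status = age >= 18 ? "adult" : "minor";  // "adult"

const age2 = 15;
const status2 = age2 >= 18 ? "adult" : "minor"; // "minor"
// Anything more complex than one condition reads better as if-else
```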
The typeof operator returns a string telling you what type a value is - "number", "string", "boolean", "object", "function", "undefined", "symbol", or "bigint". It's your go-to for type checking before performing operations. The famous gotcha: typeof null returns "object", which is a bug from JavaScript's earliest days that was never fixed for backward compatibility. Also, typeof on an undeclared variable returns "undefined" instead of throwing an error, which can be useful for feature detection.
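A quick tour of what typeof reports (illustrative values):

```javascript
const checks = {
  num: typeof 42,              // "number"
  str: typeof "hi",            // "string"
  undef: typeof undefined,     // "undefined"
  nullBug: typeof null,        // "object" - the historical bug
  arr: typeof [],              // "object" - arrays are objects; use Array.isArray()
  fn: typeof Math.max,         // "function"
  missing: typeof notDeclared, // "undefined" - no ReferenceError is thrown
};
```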
Operator precedence is the pecking order that determines which operations run first when multiple operators appear in one expression. Just like in math where multiplication happens before addition, JavaScript has its own hierarchy: grouping () > member access . > NOT ! > arithmetic > comparison > logical AND > logical OR > assignment. When in doubt, use parentheses to make your intent explicit. Code that relies on obscure precedence rules is hard to read and a breeding ground for bugs.
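A couple of illustrative lines:

```javascript
const a = 2 + 3 * 4;   // 14 - * binds tighter than +
const b = (2 + 3) * 4; // 20 - parentheses make the intent explicit

// && binds tighter than ||, so this reads as true || (false && false)
const c = true || false && false; // true
```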
A control structure is how you tell your program to make decisions and repeat work instead of just running line by line from top to bottom. Without them, your code would be a straight line with no branches. The three main types are: sequential (code runs in order, the default), selection (if/else, switch - pick a path based on a condition), and iteration (for, while - repeat a block until a condition changes). Interviewers like to hear you name all three.
Imagine you need to send a "Happy Birthday" email to 10,000 users. You wouldn't write 10,000 lines of send-email code. A loop lets you write the logic once and repeat it as many times as needed. It keeps running a block of code as long as some condition holds true, then stops. Loops are everywhere: processing arrays, reading files line by line, polling for data, or retrying failed operations. The key is always making sure your condition eventually becomes false, or you've got an infinite loop on your hands.
The for loop is your go-to when you know exactly how many times you need to repeat something. It packs three things into one line: initialization, condition check, and update. You'll use it constantly for iterating over arrays by index, running a block a specific number of times, or counting through a range. Pro tip: declare your loop variable with let, not var, to avoid the classic closure-in-a-loop bug.
The for...of loop (ES6) gives you the values directly instead of making you deal with index counters. Instead of for (let i = 0; i < arr.length; i++), you just write for (const item of arr). Cleaner and less error-prone. It works on anything iterable: arrays, strings (character by character), Maps, Sets, and even NodeLists from the DOM. One catch: it does NOT work on plain objects. For objects, you need for...in or Object.entries().
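For example:

```javascript
const fruits = ["apple", "banana"];
const upper = [];
for (const fruit of fruits) {
  upper.push(fruit.toUpperCase()); // you get values directly, no index bookkeeping
}

// Strings iterate character by character
const chars = [];
for (const ch of "hi") chars.push(ch);
```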
The for...in loop walks through the enumerable property names (keys) of an object. It's designed for objects, not arrays. Write for (const key in myObject) and you get each property name as a string, then access values with myObject[key]. Heads up: for...in also iterates over inherited prototype properties, so always use hasOwnProperty() to filter, or better yet, use Object.keys() with forEach for cleaner object iteration.
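A sketch of the hasOwnProperty() filter in practice (the sample object is made up):

```javascript
const user = { name: "Ada", role: "admin" };
const pairs = [];
for (const key in user) {
  if (user.hasOwnProperty(key)) {   // skip anything inherited from the prototype
    pairs.push(key + "=" + user[key]);
  }
}

// Cleaner modern alternative: own enumerable keys only, no filtering needed
const keys = Object.keys(user);
```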
The while loop keeps running as long as its condition is true, checking before each iteration. It's perfect when you don't know upfront how many times you'll loop - like reading data until you hit the end of a stream, or retrying an operation until it succeeds. The critical rule: make sure something inside the loop changes the condition, or you'll freeze the browser with an infinite loop.
Reach for a for loop when you know the count upfront, like iterating through an array or running something exactly 10 times. All the loop logic (start, stop, step) lives in one neat line:
for (let i = 0; i < 10; i++) {
// Executes 10 times
}
Use a while loop when the number of iterations depends on a condition you can't predict, like waiting for user input, polling a server, or processing data until a sentinel value appears:
while (condition) {
// Executes as long as condition is true
}
The do...while loop is the while loop's cousin with one key twist: it runs the code block first, then checks the condition. This guarantees at least one execution even if the condition is false from the start. Think of a menu prompt that should display at least once before checking if the user wants to continue. In practice, do...while is less common than for or while, but it's the right tool when that "at least once" guarantee matters.
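A minimal sketch of the at-least-once guarantee (the empty `pending` array stands in for real work like user input or a queue):

```javascript
let runs = 0;
const pending = [];          // nothing to process
do {
  runs++;                    // still executes once before the condition is checked
} while (pending.length > 0);
// runs is 1 even though the condition was false from the start
```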
Nested loops are loops inside loops. The outer loop runs once, and the inner loop runs completely for each outer iteration. If the outer runs 10 times and the inner runs 10 times, that's 100 total iterations - and this is where performance problems sneak in. They're commonly used for working with 2D arrays, grids, or generating combinations. Be careful with nesting depth: three levels deep usually means you should refactor into separate functions.
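For example, building the coordinates of a 3x3 grid:

```javascript
const cells = [];
for (let row = 0; row < 3; row++) {      // outer loop: 3 passes
  for (let col = 0; col < 3; col++) {    // inner loop: 3 passes per outer pass
    cells.push(row + "," + col);
  }
}
// 3 x 3 = 9 total iterations
```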
The switch statement is a cleaner alternative to long if...else chains when you're comparing one value against many possible matches. It reads like a routing table: "if this value is A, do this; if it's B, do that". Switch uses strict equality (===) for comparisons, which catches some people off guard. It's most useful when you have 3+ discrete values to check against, like handling different action types in a reducer.
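A sketch of that reducer-style routing (the action names are hypothetical):

```javascript
function describe(action) {
  switch (action) {
    case "ADD":
      return "adding an item";
    case "REMOVE":
      return "removing an item";
    default:
      return "unknown action";
  }
}
// Comparison is strict: describe(1) would NOT match a hypothetical case "1"
```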
break is your emergency exit from a loop or switch block. The moment JavaScript hits it, execution jumps to the first line after the loop. It's perfect for stopping early once you've found what you're looking for - like searching an array and stopping at the first match.
continue is more selective - it skips the rest of the current iteration and jumps to the next one. Use it to skip items that don't meet your criteria without nesting everything inside an if-block. For example, skip invalid entries in an array while processing the rest. Both keywords only affect the innermost loop they're inside.
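Both keywords in one sketch (the sample data is made up):

```javascript
const nums = [1, -2, 3, -4, 50];
const kept = [];
let firstBig = null;
for (const n of nums) {
  if (n < 0) continue;  // skip negatives, move on to the next iteration
  if (n > 10) {
    firstBig = n;
    break;              // found what we wanted - exit the loop entirely
  }
  kept.push(n);
}
// kept is [1, 3]; firstBig is 50
```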
The default case is your fallback when none of the other case values match - think of it like the else in an if-else chain. You should almost always include one, even if it just logs an unexpected value or throws an error, so you know when something unexpected slips through.
Switch uses strict equality (===) for comparisons, which means it's case-sensitive - "apple" and "Apple" are different values. If you're matching user input or data from an external source, normalize the case first (e.g., .toLowerCase()) before passing it to the switch, otherwise you'll end up with unmatched cases.
Fall-through happens when a switch case has no break, so execution continues into the next case.
Always add break after each case unless fall-through is intentional, and add a comment when it is.
Unintended fall-throughs are a common source of bugs.
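Here's a sketch of intentional fall-through, with the comment that makes the intent clear (a simplified days-in-month lookup that ignores leap years):

```javascript
function daysIn(month) {
  switch (month) {
    case "apr":
    case "jun":
    case "sep":
    case "nov": // intentional fall-through: all four months share one result
      return 30;
    case "feb":
      return 28; // ignoring leap years for brevity
    default:
      return 31;
  }
}
```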
Forward compatibility means older code can handle data or content from a newer version without breaking. It's harder to guarantee than backward compatibility since you can't predict future formats - the old system has to be designed to gracefully ignore things it doesn't understand rather than crashing.
Backward compatibility means new versions of a system can still work with older data and code. JavaScript takes this extremely seriously - the spec almost never removes features because breaking existing websites is unacceptable. That's why you still see quirky old behaviors (like typeof null === "object") that can't be fixed without breaking the web.
ECMA-262 is the formal specification that defines how JavaScript works - the syntax, semantics, built-in objects, all of it. New language features go through the TC39 proposal process before landing in a new edition of ECMA-262. When you see "ES2023" or "ES6", those are shorthand for specific editions of this spec.
ISO/IEC 22275 is based on ECMA-262 - it's essentially the same JavaScript spec published as an ISO standard. This matters mainly for organizations or governments that require ISO-certified standards for procurement. In day-to-day development you won't encounter it, but it's worth knowing it exists.
Separation of concerns means keeping different responsibilities in different places - HTML structures content, CSS handles presentation, and JavaScript handles behavior. In component-based frameworks it means not mixing data-fetching logic with rendering logic. It makes code easier to maintain and test because each piece only needs to know about its own job.
Putting "use strict" at the top of a file or function enables strict mode, which catches common mistakes that would otherwise fail silently - like accidentally creating global variables by omitting var, or assigning to read-only properties. ES modules are always in strict mode by default, so in modern code you'll rarely write it explicitly, but knowing what it does is useful when debugging older codebases.
Just-In-Time (JIT) compilation means code is compiled to machine code at runtime, just before execution. This is how modern JavaScript engines get their speed: they start by interpreting code, identify hot code paths, then compile and optimize those paths on the fly based on actual runtime behavior - de-optimizing again if their assumptions turn out to be wrong.
A string is just text - a sequence of characters. In JavaScript, strings are primitive and immutable, meaning operations like replace() or toUpperCase() don't modify the original string, they return a new one. You can quote them with single quotes, double quotes, or backticks - in modern code, backticks (template literals) are usually the best choice since they support interpolation and multiline without extra syntax.
You mainly create strings with literals - just wrap your text in quotes. The new String() constructor creates a String object instead of a primitive, which almost always causes problems (it breaks === comparisons and is slower), so avoid it unless you have a very specific reason:
- String Literal (most common):
let str = "Hello";
- Using new String() Constructor (creates an object, rarely used):
let strObj = new String("Hello");
The cleanest options are String(num) or num.toString(). toString() also accepts a radix - for example, num.toString(16) gives you the hex representation. Avoid concatenating a number with an empty string (num + "") just to coerce it - it works, but it reads poorly and can cause confusing bugs:
- Using toString() method:
let num = 123;
let str = num.toString();
- Using String() function:
let num = 123;
let str = String(num);
The safest general option is Number(str) - it returns NaN for anything that can't convert cleanly, so you know when something went wrong. Use parseInt() when you want an integer and don't mind it ignoring trailing non-numeric characters like parseInt('42px') returning 42. Always pass the radix (10) to parseInt to avoid surprises:
- Number() function:
- parseInt() for integers:
- parseFloat() for floating-point numbers:
let str = "123";
let num = Number(str);
let str = "123";
let num = parseInt(str);
let str = "123.45";
let num = parseFloat(str);
Always use === for string comparisons. With ==, JavaScript tries to coerce types before comparing - so "123" == 123 is true, which is almost never what you want. With ===, both value and type must match. This is one of the most common sources of subtle bugs, so just make === your default.
"123" == 123 // true (string is converted to a number)
=== (strict equality) checks for both value and type equality without type conversion - use this one.
"123" === 123 //false
Use includes() for a simple true/false check - it's clear and readable. Fall back to indexOf() if you need the position or are targeting older environments. Both are case-sensitive, so normalize the case first if you're doing user-facing search:
- includes() method (returns true or false):
- indexOf() method (returns the index, or -1 if not found):
let str = "Hello, world!";
str.includes("world");// true
let str ="Hello, world!";
str.indexOf("world") !== -1; // true
All three extract a portion of a string, but slice() is the one you'll reach for most. It supports negative indices (counting from the end) and has consistent, predictable behavior:
- slice(start, end):
let str = "Hello, world!";
let part = str.slice(0, 5); // "Hello"
- substring(start, end):
let str = "Hello, world!";
let part = str.substring(0, 5); // "Hello"
- substr(start, length) (deprecated but still used):
let str = "Hello, world!";
let part = str.substr(0, 5); // "Hello"
In practice, just use slice() - it works like substring() but also accepts negative indices, which makes it more flexible. substring() is fine but older. substr() is deprecated - it takes a start index and a length rather than start and end, which is confusing, so drop it from your vocabulary:
slice(start, end) - supports negative indices, the go-to choice:
- Extracts from start to end (not inclusive).
- Accepts negative indices.
"Hello".slice(1, 4); // "ell"
"Hello".slice(-4, -1); // "ell"
substring(start, end):
- Extracts from start to end (not inclusive).
- Does not accept negative indices.
"Hello".substring(1, 4); // "ell"
substr(start, length) (deprecated):
- Extracts a substring starting at start with a specified length.
- Accepts negative start values.
"Hello".substr(1, 3); // "ell"
"Hello".substr(-4, 3); // "ell"
Template literals (backticks) are the modern way to work with strings in JavaScript. They let you embed expressions directly with ${} and span multiple lines without escape characters. In modern code, reach for backticks by default - they're strictly more capable than single or double quoted strings.
String Interpolation: Embed variables or expressions using ${}.
let name = "John";
let greeting = `Hello, ${name}!`; // "Hello, John!"
Multi-line strings: Write strings over multiple lines without needing escape characters.
let multiLine = `This is
a multi-line
string.`;
Differences from regular strings:
- Template literals: Use backticks (` `), support interpolation, and handle multi-line easily.
- Regular strings: Use single (' ') or double (" ") quotes and require manual concatenation for variables or expressions.
replace(): Replaces only the first occurrence of a substring or pattern.
let str = "apple apple";
str.replace("apple", "orange"); // "orange apple"
replaceAll(): Replaces all occurrences of a substring or pattern.
let str = "apple apple";
str.replaceAll("apple", "orange"); // "orange orange"
The big gotcha: replace() with a string pattern only replaces the first match. For all occurrences, use replaceAll() or pass a regex with the g flag to replace(). replaceAll() was added in ES2021 as the more readable option.
Use regex.test(str) when you just need a boolean - it's faster and cleaner. Use str.match(regex) when you need the actual matched text or capture groups. One gotcha: if you call test() in a loop with a regex that has the g flag, the lastIndex state can cause unexpected results - create a fresh regex each time or reset lastIndex manually.
test() method (used with a regular expression): Returns true if the string matches the pattern.
let regex = /hello/;
regex.test("hello world"); // true
match() method (returns matching substrings or null): Can be used to check if there's a match (returns an array or null).
let str = "hello world";
str.match(/hello/); // ["hello"]
indexOf() returns the position of the first match, or -1 if not found. If you only need a boolean, includes() is more readable. There's also search() which takes a regex and returns the position of the first match:
let str = "Hello, world!";
let index = str.indexOf("world"); // returns 7
The cleanest way in modern JS is str.match(/pattern/g) - the g flag returns an array of all matches. If you need both the matches and their positions, use a loop with indexOf() or matchAll() with a regex, which is cleaner:
let str = "Hello world, welcome to the world!";
let searchTerm = "world";
let indices = [];
let index = str.indexOf(searchTerm);
while (index !== -1) {
indices.push(index);
index = str.indexOf(searchTerm, index + 1); // Continue searching from the next position
}
console.log(indices); // [6, 23]
Or using regular expressions with match() and the g flag:
let str = "Hello world, welcome to the world!";
let matches = str.match(/world/g); // ["world", "world"]
//The g flag in the regex ensures it finds all matches.
JavaScript strings are internally encoded as UTF-16. Most characters you'll deal with are in the Basic Multilingual Plane (BMP) and take one 16-bit code unit, so str.length equals the number of characters. The catch is emojis and some special characters use two code units (a surrogate pair), so '😀'.length is 2, not 1. Use for...of to iterate by actual characters rather than raw code units.
1. Unicode: Unicode is a character encoding standard that aims to represent every character from every writing system. Each character is assigned a unique code point (e.g., 'A' = U+0041). JavaScript strings are based on the Unicode standard, meaning they can represent characters from virtually any language.
2. UTF-16: UTF-16 (16-bit Unicode Transformation Format) is the encoding JavaScript uses internally to represent strings. Each character is represented by 16 bits (2 bytes). For characters beyond the Basic Multilingual Plane (BMP), UTF-16 uses two 16-bit code units (known as a surrogate pair). Examples:
- A character like 'A' is represented as a single 16-bit code unit.
- Emojis or certain special characters (e.g., 😀) require two 16-bit units in UTF-16.
let str = '😀';
console.log(str.length); // 2 (because it uses a surrogate pair in UTF-16)
charCodeAt() gives you the UTF-16 code unit for a character, and String.fromCharCode() goes the other way. These work fine for basic ASCII and most Latin characters. If you're dealing with emojis or characters outside the BMP, use codePointAt() and String.fromCodePoint() instead - they handle the full Unicode range correctly.
1. charCodeAt : Use the charCodeAt() method to get the character code (Unicode) of a character at a specific position in a string.
let str = 'A';
let code = str.charCodeAt(0); // 65 (Unicode code for 'A')
2. String.fromCharCode(): Use the String.fromCharCode() method to convert a character code back to a string.
let code = 65;
let char = String.fromCharCode(code); // 'A'
These methods work for characters within the Basic Multilingual Plane (BMP), where each character is represented by a single 16-bit code unit. For characters outside the BMP (e.g., emojis), you'd use codePointAt() and String.fromCodePoint().
The for...of loop is the preferred way - it correctly handles multi-byte characters (like emojis) by iterating over code points rather than raw code units. A regular for loop or split('') can split surrogate pairs, giving you garbled results with emoji. Here are the main options:
1. for loop:
let str = "Hello";
for (let i = 0; i < str.length; i++) {
console.log(str[i]);
}
2. for...of loop (preferred for strings):
let str = "Hello";
for (let char of str) {
console.log(char);
}
3. split() method with forEach():
let str = "Hello";
str.split('').forEach(char => console.log(char));
4. charAt() method in a loop:
let str = "Hello";
for (let i = 0; i < str.length; i++) {
console.log(str.charAt(i));
}
charAt(i) returns the character at index i as a string, while charCodeAt(i) returns its numeric Unicode value. In most modern code you'll see bracket notation (str[i]) instead of charAt() - they're equivalent except for out-of-bounds: charAt() returns '' while str[i] returns undefined. Know that difference and you'll be fine with either.
1. charAt(index): Returns the character at the specified index in the string. Output is a string.
let str = "Hello";
str.charAt(1); // "e"
2. charCodeAt(index): Returns the Unicode character code (numeric value) of the character at the specified index. Output is a number.
let str = "Hello";
str.charCodeAt(1); // 101 (Unicode code for "e")
In short, charAt() gives you the character, while charCodeAt() gives you its numeric Unicode value.
The behavior differs depending on how you access it: charAt(outOfBounds) returns an empty string '', while bracket notation str[outOfBounds] returns undefined. Both handle it gracefully without throwing. In practice this distinction rarely matters, but if you're doing something like if (str.charAt(i) === '') to check for missing characters, that's the reason why:
1. Using charAt: Returns an empty string ("").
let str = "Hello";
console.log(str.charAt(10)); // ""
2. Using bracket notation ([]): Returns undefined.
let str = "Hello";
console.log(str[10]); // undefined
Both cases handle out-of-range indices gracefully without throwing an error - charAt returns '' and bracket notation returns undefined.
Yes, strings are immutable in JavaScript - you can't change a character in place. Assigning to str[0] silently does nothing. Every string method that looks like it modifies the string (like replace(), toUpperCase(), trim()) actually returns a new string. This is generally fine, but be aware that creating many intermediate strings in hot loops can hurt performance.
let str = "Hello";
str[0] = "h"; // This has no effect, strings are immutable
console.log(str); // "Hello"
To change a string, you need to create a new one - for example:
let newStr = str.replace("H", "h"); // "hello"
Template literals are the way to go for multi-line strings - just put line breaks directly in the backtick string. The old approach of using \n in a regular string still works but is harder to read. If you're building multi-line strings dynamically, pushing parts into an array and calling .join('\n') is a clean pattern.
1. Using Template Literals (best option): Template literals, enclosed in backticks (` `), allow you to create multi-line strings easily.
let multiLine = `This is
a multi-line
string.`;
2. Using Escape Character (\n): You can also use \n for new lines within regular strings.
let multiLine = "This is\na multi-line\nstring.";
The main thing to watch: avoid string concatenation with + inside a loop - each + creates a new string object, which can get expensive. Push parts into an array and call .join('') at the end instead. For everything else, modern engines optimize string operations well.
1. String Immutability: Since strings are immutable, any modification creates a new string, which can lead to memory overhead in loops or large data processing.
2. String Concatenation: Repeated string concatenation (using `+`) in loops can be inefficient due to the creation of many intermediate strings. Instead, consider using:
let str = ["Hello", "World"].join(" ");
3. Template Literals: While template literals improve readability, their performance is similar to regular string concatenation, but be mindful of excessive usage in performance-critical areas.
4. Avoiding Unnecessary String Operations: Frequent use of methods like replace(), slice(), or substring() on large strings can be costly due to the creation of new strings each time.
5. Memory Usage: If dealing with large datasets or strings (e.g., file contents), try to minimize unnecessary string operations to reduce memory consumption and processing time.
By keeping these in mind, you can avoid the most common string performance pitfalls in JavaScript.
The main string methods that accept a regex are match(), replace(), search(), and split(). The most useful are replace() with a regex (especially with capture groups for transforms) and match() with the g flag to get all matches. Remember: without the g flag, replace() and match() only operate on the first occurrence.
1. match(): Finds matches based on a regular expression.
let str = "Hello world!";
let result = str.match(/world/); // ["world"]
2. replace(): Replaces parts of a string that match a regular expression.
let str = "Hello world!";
let result = str.replace(/world/, "JavaScript"); // "Hello JavaScript!"
3. search(): Returns the index of the first match or -1 if not found.
let str = "Hello world!";
let index = str.search(/world/); // 6
4. split(): Splits a string based on a regular expression.
let str = "apple, banana, cherry";
let result = str.split(/,\s*/); // ["apple", "banana", "cherry"]
Regular Expression Flags:
- g: Global search (find all matches).
- i: Case-insensitive search.
- m: Multi-line search.
let str = "Hello hello";
let result = str.match(/hello/gi); // ["Hello", "hello"]
Regular expressions provide powerful pattern matching and string manipulation in JavaScript.
match() is called on a string and exec() is called on a regex - that's the main mental model. Without the g flag they behave similarly, both returning the first match with capture groups. With g: match() returns all matches as a flat array but drops capture groups, while exec() returns one match at a time with full capture group info and advances lastIndex. Use matchAll() (ES2020) if you need all matches with capture groups - it's the cleanest option now.
1. match():
- Used on strings.
- Returns an array of matches or null if no match is found.
- With the global (g) flag, it returns an array of all matches.
//Example without g flag:
let str = "Hello world";
let result = str.match(/world/); // ["world"]
//Example with `g` flag:
let str = "Hello world world";
let result = str.match(/world/g); // ["world", "world"]
2. exec():
- Used on regular expression objects.
- Returns an array with detailed match information, or null if no match is found.
- Only returns one match at a time, even with the global flag. It must be called repeatedly in a loop to get all matches.
let regex = /world/;
let result = regex.exec("Hello world"); // ["world"]
//With g flag and loop:
let regex = /world/g;
let str = "Hello world world";
let match;
while ((match = regex.exec(str)) !== null) {
console.log(match); // Logs each "world" match
}
Key Differences:
- match() is simpler for straightforward matching on strings - use it with g to get all matches as an array.
- exec() is more powerful and provides full capture group details, but requires looping to get all matches when using the g flag.
Use str === '' for a strict empty check, or str.length === 0 for the same effect. If you want to also treat whitespace-only strings as empty, use str.trim() === ''. One gotcha: if (!str) is also falsy for null and undefined, which may or may not be what you want - be explicit about what you're checking:
let str = "";
if (str === "") {
console.log("String is empty");
}
Or using the .length property:
let str = "";
if (str.length === 0) {
console.log("String is empty");
}
A tagged template literal lets you preprocess a template literal with a function. The tag function receives the string parts as an array and the interpolated values as separate arguments - letting you produce something other than a plain concatenated string. This is how libraries like styled-components, GraphQL's gql, and SQL query builders work. It's an advanced feature you probably won't write yourself often, but you'll encounter it in third-party code.
//Syntax:
tagFunction`Template literal string ${expression}`;
//Example:
function myTag(strings, ...values) {
console.log(strings); // Array of string parts
console.log(values); // Array of expression values
return strings[0] + values[0]; // Just for demonstration
}
let name = "John";
let result = myTag`Hello, ${name}!`;
console.log(result); // "Hello, John!"
myTag is the tag function that receives:
- strings: an array of string segments (static parts of the template).
- values: the values of the interpolated expressions.
You can process these values and return a new string or value. Common use cases:
- Custom string formatting (e.g., currency, localization).
- Security (e.g., escaping HTML or SQL injection prevention).
- More complex processing of template literals beyond simple interpolation.
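As a sketch of the escaping use case, here's a hypothetical safeHtml tag - the name and implementation are illustrative, not a real library API:

```javascript
// safeHtml HTML-escapes every interpolated value before joining the parts
function safeHtml(strings, ...values) {
  const escape = (v) =>
    String(v).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return strings.reduce(
    (out, part, i) => out + part + (i < values.length ? escape(values[i]) : ""),
    ""
  );
}

const userInput = "<script>alert(1)</script>";
const markup = safeHtml`<p>${userInput}</p>`;
// markup is "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>" - the input can't inject tags
```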
A function declaration uses the function keyword as a statement and is fully hoisted - you can call it before the line where it's defined. A function expression assigns a function to a variable and isn't hoisted, so it must be defined before you call it. In modern code, declarations are common for named top-level functions, while arrow function expressions are the norm for short utilities and callbacks.
// Function Declaration - hoisted
console.log(square(3)); // 9
function square(n) { return n * n; }
// Function Expression - NOT hoisted
// console.log(cube(3)); // TypeError
const cube = function(n) { return n * n * n; };
console.log(cube(3)); // 27
Arrow functions are a shorter syntax introduced in ES6, but the most important difference is how this works - they don't have their own this, they inherit it from the surrounding scope. That makes them great for callbacks inside class methods or event handlers. Don't use arrow functions as object methods or constructors - the this binding will surprise you.
- No own this: inherits this from the enclosing scope.
- No arguments object.
- Cannot be used as a constructor (no new).
- Supports implicit return when the body has no curly braces.
const add = (a, b) => a + b;
console.log(add(2, 3)); // 5
const obj = {
value: 10,
regular: function() { return this.value; }, // 10
arrow: () => this // refers to outer 'this', not obj
};
The arguments object is an array-like object available inside regular functions that contains all the values passed in, regardless of how many parameters were defined. It's array-like but not a real array - so you can't call .map() on it directly. In modern JS, use rest parameters (...args) instead - they give you an actual array and also work inside arrow functions.
function sum() {
let total = 0;
for (let i = 0; i < arguments.length; i++) {
total += arguments[i];
}
return total;
}
console.log(sum(1, 2, 3, 4)); // 10
In modern JavaScript, it is recommended to use rest parameters (...args) instead, which gives you a real array that you can immediately call .map(), .filter(), etc. on.
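The same sum function rewritten with rest parameters might look like this:

```javascript
// Same sum as above, but `nums` is a real array,
// so array methods like reduce() work on it directly.
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

console.log(sum(1, 2, 3, 4)); // 10

// Rest parameters also work in arrow functions, unlike `arguments`:
const sumArrow = (...nums) => nums.reduce((t, n) => t + n, 0);
console.log(sumArrow(5, 5)); // 10
```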
Default parameters let you define a fallback value used when a caller doesn't pass that argument (or passes undefined explicitly). One thing to watch: defaults are evaluated fresh each call - if you use an object as a default, you get a new instance each time, which is actually better behavior than Python's mutable default gotcha.
function greet(name = "Guest") {
return "Hello, " + name + "!";
}
console.log(greet("Alice")); // Hello, Alice!
console.log(greet()); // Hello, Guest!
Default values are evaluated at call time, not at function definition time. They can also reference other parameters defined before them in the parameter list.
Primitives (numbers, strings, booleans) are passed by value - the function gets a copy, so changes inside don't affect the original. Objects and arrays are passed by reference - the function gets a reference to the same object, so mutating it affects the caller's data. The gotcha: reassigning the parameter itself doesn't affect the original, but mutating a property does.
// Primitive - pass by value
function addTen(n) { n += 10; }
let x = 5;
addTen(x);
console.log(x); // 5 (unchanged)
// Object - pass by reference
function rename(person) { person.name = "Bob"; }
const user = { name: "Alice" };
rename(user);
console.log(user.name); // Bob (changed)
Functions in JavaScript are first-class values - you can store them in variables, pass them as arguments, return them from other functions, and put them in arrays or objects, just like any other value. This is what makes patterns like callbacks, higher-order functions, and closures possible. It's a fundamental feature that shapes how JavaScript code is written.
const greet = function(name) { return "Hello, " + name; };
console.log(greet("Alice")); // Hello, Alice
function run(fn) { return fn("Bob"); }
console.log(run(greet)); // Hello, Bob
A higher-order function either takes a function as an argument or returns a function. The built-in array methods map, filter, and reduce are higher-order functions you'll use constantly. Understanding them is key to writing clean, reusable JavaScript.
const numbers = [1, 2, 3, 4];
const doubled = numbers.map(n => n * 2);
console.log(doubled); // [2, 4, 6, 8]
function multiplier(factor) {
return n => n * factor;
}
const triple = multiplier(3);
console.log(triple(5)); // 15
A callback is just a function you pass to another function so it can call it later. Before Promises and async/await, callbacks were how all async work was done - setTimeout, event listeners, and Node's fs.readFile all use them. They're still everywhere, so understanding them is essential. The downside is deeply nested callbacks ("callback hell"), which is why Promises were introduced.
function fetchData(callback) {
setTimeout(() => {
callback("Data loaded");
}, 1000);
}
fetchData(result => console.log(result)); // Data loaded (after 1s)
A closure is a function bundled with its surrounding lexical environment - it remembers the variables from the scope where it was created, even after that scope has gone. This is how you get private state in JavaScript. Every function in JS is technically a closure, but the term is usually used when you return a function from another function and the inner function keeps access to the outer variables.
function makeCounter() {
let count = 0;
return function() {
count++;
return count;
};
}
const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
console.log(counter()); // 3
Hoisting means JavaScript moves declarations to the top of their scope before running. Function declarations are fully hoisted - you can call them before they appear in the file. var declarations are hoisted but set to undefined until the assignment runs, which causes subtle bugs. let and const are hoisted too, but in the Temporal Dead Zone until declared, so accessing them early throws a ReferenceError rather than silently returning undefined.
console.log(sayHello()); // "Hello!" - function is fully hoisted
function sayHello() { return "Hello!"; }
console.log(x); // undefined - var hoisted, value is not
var x = 10;
// console.log(y); // ReferenceError - let is in TDZ
let y = 20;
The Temporal Dead Zone (TDZ) is the window between entering a block scope and when a let or const variable is actually declared. During this time, the variable exists in memory but accessing it throws a ReferenceError. This is intentional - it prevents the confusing "undefined before declaration" behavior you get with var. The practical lesson: always declare let/const at the top of the block they're used in.
// TDZ starts here for 'name'
console.log(name); // ReferenceError: Cannot access 'name' before initialization
let name = "Alice"; // TDZ ends here
An IIFE is a function that runs immediately after it's defined. Before ES modules and block scoping with let/const, IIFEs were the standard way to create a private scope and avoid polluting the global namespace. You still see them in legacy codebases and some build tool output. In modern code, modules handle encapsulation so IIFEs are rarely needed.
(function() {
const message = "I run immediately!";
console.log(message);
})();
// Arrow function IIFE
(() => {
console.log("Arrow IIFE");
})();
Currying transforms a multi-argument function into a chain of single-argument functions, enabling partial application - pre-filling some arguments to get back a specialized function. It's a functional programming concept used heavily in libraries like Lodash and Ramda. You'll encounter it more in interviews and functional-style code than in typical app development.
// Normal function
function add(a, b) { return a + b; }
// Curried version
function curriedAdd(a) {
return function(b) {
return a + b;
};
}
const addFive = curriedAdd(5);
console.log(addFive(3)); // 8
console.log(addFive(10)); // 15
Memoization caches the result of a function call so if you call it again with the same arguments, you get the cached result instead of recomputing. It's a memory-for-speed trade-off - use it for pure functions with expensive computations and repeated identical inputs. React's useMemo and useCallback are built-in examples. It's most useful when the same inputs occur often; otherwise the cache overhead isn't worth it.
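A minimal memoize helper could be sketched like this (using JSON.stringify as the cache key is a simplifying assumption that works for primitive arguments; real libraries use more robust keying):

```javascript
// Minimal memoize: caches results keyed by the stringified arguments.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

let calls = 0;
const slowSquare = (n) => { calls++; return n * n; };
const fastSquare = memoize(slowSquare);

console.log(fastSquare(9)); // 81 (computed)
console.log(fastSquare(9)); // 81 (from cache)
console.log(calls); // 1 - the underlying function ran only once
```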
Recursion is when a function calls itself to solve a smaller version of the same problem. Always define a base case - a condition where the function returns without calling itself - otherwise you'll hit a stack overflow. It's natural for tree/graph traversal, nested data structures, and divide-and-conquer algorithms. For large inputs, be aware JavaScript has a finite call stack, so deep recursion can crash - consider iteration if depth is unbounded.
function factorial(n) {
if (n <= 1) return 1; // base case
return n * factorial(n - 1); // recursive call
}
console.log(factorial(5)); // 120
// 5 * 4 * 3 * 2 * 1 = 120
All three let you explicitly set the value of this when calling a function. call and apply invoke the function immediately - the only difference is call takes individual arguments while apply takes them as an array. bind returns a new function with this permanently set, without calling it - useful for event handlers and passing methods as callbacks. In modern code, arrow functions often replace the need for bind since they capture this lexically.
- call(): invokes the function immediately, arguments passed one by one.
- apply(): invokes the function immediately, arguments passed as an array.
- bind(): returns a new function with this bound, does not call it immediately.
function greet(city, country) {
return this.name + " from " + city + ", " + country;
}
const person = { name: "Alice" };
console.log(greet.call(person, "Paris", "France"));
console.log(greet.apply(person, ["Paris", "France"]));
const boundGreet = greet.bind(person, "Paris", "France");
console.log(boundGreet()); // called later
Debugging is finding and fixing the bugs in your code - the mismatch between what you think the code does and what it actually does. Start with console.log to inspect values, use the browser's debugger for anything more complex, and read error messages carefully since they usually point you directly to the problem. The best debugging skill is forming a hypothesis about what's wrong and systematically testing it.
function add(a, b) {
return a - b; // bug: should be +
}
console.log(add(3, 4)); // -1 (wrong, expected 7)
// After fix:
function add(a, b) {
return a + b;
}
console.log(add(3, 4)); // 7
Chrome DevTools is the browser's built-in developer toolkit, opened with F12 or right-click and Inspect. For JavaScript, the Console and Sources panels are your main tools. Console lets you run JS snippets and see errors and logs. Sources lets you set breakpoints and step through code line by line. The Network tab helps when debugging API calls or missing assets.
- Console: view errors, warnings, and log messages; run JavaScript interactively.
- Sources: view source files, add breakpoints, and step through code line by line.
- Network: inspect HTTP requests and responses.
A breakpoint pauses code execution at a specific line so you can inspect variable values, the call stack, and scope chain. Click a line number in the DevTools Sources panel to set one, or write debugger; in your code to do the same thing when DevTools is open. Once paused, you can step through code line by line, step into function calls, or run to the next breakpoint.
function calculateTotal(price, tax) {
debugger; // pauses here when DevTools is open
return price + tax;
}
calculateTotal(100, 18);
- console.log(): prints general information (white/default styling).
- console.warn(): prints a warning with a yellow background - use it for non-critical issues that might indicate a problem.
- console.error(): prints an error with a red background and a stack trace - use it for actual failures. The distinction matters when filtering Console output by severity.
- console.table() is also worth knowing - it displays arrays and objects in a readable grid format.
console.log("User loaded"); // general info
console.warn("Cache is almost full"); // warning
console.error("Failed to fetch data"); // error with stack trace
- SyntaxError: invalid code that can't be parsed - usually caught before runtime (e.g., missing bracket).
- ReferenceError: accessing a variable that doesn't exist in scope.
- TypeError: using a value in a way its type doesn't support (e.g., calling a non-function, reading a property off null).
- RangeError: a value is outside the allowed range (e.g., negative array length, too much recursion).
- URIError: malformed URI passed to functions like decodeURIComponent().
- InternalError: engine-level error, most commonly too much recursion (non-standard, Firefox-specific).
A ReferenceError means the variable doesn't exist at all - you never declared it, or it's out of scope. A TypeError means the variable exists but you're using it in a way its type doesn't support - like calling a number as a function, or reading a property off null. These are the two most common runtime errors - learn to recognize them and you'll fix bugs much faster.
// ReferenceError
console.log(foo); // ReferenceError: foo is not defined
// TypeError
let num = 42;
num(); // TypeError: num is not a function
let obj = null;
console.log(obj.name); // TypeError: Cannot read properties of null
This trips up everyone - JavaScript uses 64-bit IEEE 754 floating-point, and 0.1 in binary is a repeating fraction, just like 1/3 in decimal. So when you add them, you get 0.30000000000000004. Never compare floats for exact equality with == or === - round with toFixed() for display, or check whether the difference is smaller than Number.EPSILON for logic comparisons.
console.log(0.1 + 0.2); // 0.30000000000000004
// Fix: use toFixed() to round
console.log((0.1 + 0.2).toFixed(1)); // "0.3"
NaN means 'Not a Number' - you get it when a math operation fails or makes no sense, like 0/0 or parsing 'hello' as a number. The big gotcha is that NaN is never equal to itself (NaN === NaN is false), so always use Number.isNaN() to check. Avoid the global isNaN() - it coerces its argument first, so isNaN('hello') returns true even though 'hello' is just a string.
console.log(0 / 0); // NaN
console.log(Number("abc")); // NaN
console.log(Number.isNaN(NaN)); // true
console.log(Number.isNaN("abc")); // false (no coercion)
console.log(isNaN("abc")); // true (coerces first, less reliable)
parseInt() stops at the decimal point and gives you a whole number, while parseFloat() keeps it. Both are lenient - they'll parse '3.99px' without throwing an error, which is handy when dealing with CSS values. Just remember to always pass a radix (like 10) to parseInt() to avoid surprises with octal or hex interpretation.
console.log(parseInt("3.99px")); // 3 (integer only)
console.log(parseFloat("3.99px")); // 3.99 (keeps decimals)
console.log(parseInt("abc")); // NaN
console.log(parseFloat("abc")); // NaN
Without a radix, older JavaScript engines could interpret strings starting with '0' as octal, which is rarely what you want. Always pass 10 as the second argument when parsing user input or decimal strings - it makes the intent obvious and avoids cross-browser inconsistencies. When you actually need hex or binary parsing, pass 16 or 2 explicitly.
console.log(parseInt("10", 10)); // 10
console.log(parseInt("10", 2)); // 2
console.log(parseInt("0x10", 16)); // 16
BigInt is for when regular numbers lose precision - anything beyond Number.MAX_SAFE_INTEGER (about 9 quadrillion) can silently give wrong results. You'll hit this with database IDs from certain systems, cryptocurrency values, or bitwise operations on large numbers. The catch: you can't mix BigInt with regular numbers in arithmetic - you'll get a TypeError - so you need to be explicit about conversions.
const big = 9007199254740991n + 1n;
console.log(big); // 9007199254740992n
// Cannot mix with regular numbers
console.log(10n + 5); // TypeError
Infinity is what you get when a number overflows or you divide a positive number by zero. Unlike most languages, JavaScript doesn't throw on divide-by-zero - it silently returns Infinity, which can cause bugs that are hard to trace. Always use Number.isFinite() to validate numeric results if you're doing divisions or working with user-supplied math.
console.log(1 / 0); // Infinity
console.log(-1 / 0); // -Infinity
console.log(Infinity + 1); // Infinity
console.log(Number.isFinite(Infinity)); // false
console.log(Number.isFinite(100)); // true
Use the Intl.NumberFormat API - it handles all the locale differences like decimal separators, grouping, and currency symbols automatically. Don't roll your own formatting logic. For one-off cases number.toLocaleString('en-US', { style: 'currency', currency: 'USD' }) works fine, but if you're formatting many numbers in a hot path, create an Intl.NumberFormat instance once and reuse it - constructing it is the expensive part.
const num = 1234567.89;
// US format
console.log(new Intl.NumberFormat("en-US").format(num));
// 1,234,567.89
// Currency
console.log(new Intl.NumberFormat("en-US", {
style: "currency", currency: "USD"
}).format(num));
// $1,234,567.89
In the browser you can use inline <script> tags or link external .js files - external files are almost always the right choice for maintainability. Beyond the browser, JavaScript runs on servers with Node.js, in mobile apps via React Native, in desktop apps via Electron, and in serverless functions like AWS Lambda or Cloudflare Workers. The browser console is also great for quick experiments.
A value is the actual data - a number, string, boolean, object, etc. A variable is a named label you attach to a value so you can reference and reuse it. Think of a variable as a box with a name on it; the value is what's inside the box - and with let you can swap what's inside, while const locks the box shut.
Comments are notes for humans - the engine ignores them completely. Use // for single-line comments and /* ... */ for blocks. A good rule of thumb: comment the 'why', not the 'what' - the code already shows what it does, so explain the reasoning behind non-obvious decisions instead.
In non-strict (sloppy) mode, JavaScript silently swallows a lot of mistakes - assigning to undeclared variables creates accidental globals, writing to read-only properties fails silently, and octal literals can cause confusion. These 'helpful' behaviors are actually traps. Always use 'use strict' at the top of your files, or just use ES modules which are strict by default.
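A quick sketch of the difference - in strict mode, assigning to an undeclared variable throws instead of silently creating a global:

```javascript
function strictDemo() {
  "use strict";
  try {
    undeclaredVariable = 42; // no let/const/var declaration
    return "assigned silently";
  } catch (e) {
    return e instanceof ReferenceError ? "ReferenceError" : "other error";
  }
}

console.log(strictDemo()); // "ReferenceError"
```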
A statement is one complete instruction - declare a variable, call a function, run a loop, branch with an if. Think of a program as a series of statements the engine executes top-to-bottom. Statements differ from expressions in that they perform an action rather than produce a value.
Just put your instruction on one line - const x = 5; or console.log(x);. Semicolons are technically optional thanks to ASI (Automatic Semicolon Insertion), but it's safer to include them - ASI has well-known edge cases where omitting semicolons causes unexpected behavior, especially when a line starts with ( or [.
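The classic ASI gotcha - a semicolon gets inserted after a bare return, so the value on the next line is never returned:

```javascript
// ASI inserts a semicolon after the bare `return`, so the object
// literal below is never reached - the function returns undefined.
function broken() {
  return
  { value: 1 };
}

// Keeping the opening brace on the same line avoids the trap.
function fixed() {
  return {
    value: 1
  };
}

console.log(broken()); // undefined
console.log(fixed().value); // 1
```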
Wrap your expression in parentheses or use an open bracket - JavaScript won't insert a semicolon in the middle of an unfinished expression. This is common with chained method calls, long ternaries, or object/array literals. Avoid backslash line continuation - it's fragile and rarely used in modern code.
A code block is a set of statements wrapped in curly braces {} that are treated as a single unit. It's how JavaScript knows which statements belong to a function, loop, or if-branch. Importantly, let and const declared inside a block are scoped to that block - they don't leak out, which is a key reason to prefer them over var.
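Block scoping in action - let stays inside its braces, while var leaks out:

```javascript
{
  let inner = "visible only in this block";
  console.log(inner);
}
// console.log(inner); // ReferenceError: inner is not defined

if (true) {
  var leaky = "var ignores block scope";
}
console.log(leaky); // accessible here - var is function-scoped
```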
var is the old way to declare variables - function-scoped and hoisted, which causes confusing bugs. let and const (ES6+) are block-scoped and much more predictable. Use const by default, and reach for let only when you actually need to reassign. Avoid var in modern code.
The big differences are scope and hoisting. var is function-scoped and hoisted as undefined - you can use it before its declaration and it won't throw. let and const are block-scoped and live in a 'temporal dead zone' until their declaration - accessing them early throws a ReferenceError. Also, const can't be reassigned, though the object it points to can still be mutated.
Default to const - it signals to readers that this value won't be reassigned, and the engine can potentially optimize it. Switch to let when you need to reassign (like a loop counter or accumulator). Never use var in new code - its function-scope and hoisting behavior produces bugs that are subtle and annoying to debug.
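Note that const locks the binding, not the value it points to:

```javascript
const config = { retries: 3 };
config.retries = 5; // OK - mutating the object is allowed
console.log(config.retries); // 5
// config = {}; // TypeError: Assignment to constant variable.

let attempts = 0;
attempts += 1; // let allows reassignment
console.log(attempts); // 1
```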
These three are different levels of 'nothing'. undefined means declared but not yet assigned - it's the default. null is an explicit empty value you set intentionally to say 'this has no value right now.' An undeclared variable was never declared at all, and trying to read it throws a ReferenceError. Use null to intentionally clear a value; rely on undefined checks to detect missing properties.
let a;
let b = null;
console.log(a); // undefined
console.log(b); // null
console.log(userName); // ReferenceError
++ adds 1 and -- subtracts 1. The subtle part is prefix vs postfix: ++x increments first and returns the new value, while x++ returns the current value and then increments. This distinction only matters when the result is used in an expression - in a standalone statement like a for-loop counter, both behave the same.
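A quick demonstration of the prefix/postfix difference:

```javascript
let x = 5;
console.log(x++); // 5 - postfix returns the old value, then increments
console.log(x);   // 6

let y = 5;
console.log(++y); // 6 - prefix increments first, then returns the new value
console.log(y);   // 6
```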
The % operator gives you what's left over after division - 9 % 4 is 1. It's useful for checking if a number is even (n % 2 === 0), clamping array indices to wrap around, or any time you need cyclic behavior. Watch out with negative numbers - in JavaScript the sign of the result follows the dividend, so -7 % 3 is -1, not 2.
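For example - including a small helper (mod is a name chosen here for illustration) for the common case where you want a wrapped, always-positive result:

```javascript
console.log(9 % 4); // 1
console.log(8 % 2 === 0); // true - even check
console.log(-7 % 3); // -1, not 2 - the sign follows the dividend

// Helper for a true positive modulo (useful for wrap-around indices):
function mod(n, m) {
  return ((n % m) + m) % m;
}
console.log(mod(-7, 3)); // 2
```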
For debugging, console.log() is your primary tool - also check out console.table() for objects and console.error() for errors. For the UI, set element.textContent or element.innerHTML to update page content. Avoid document.write() in modern code - it overwrites the entire page if called after load. Reserve alert() for quick debugging only; never ship it to users.
Almost always use string literals - 'hello' or "hello". They're primitives, fast, and work exactly as you'd expect with ===. String objects (new String('hello')) are wrapper objects and behave oddly - two different new String('hello') instances won't be equal with === because they're different object references. There's virtually no reason to use new String() in real code.
.length tells you how many UTF-16 code units a string contains. For most everyday text it equals the number of characters, but emoji and some Unicode characters take two code units each, so '😀'.length is 2, not 1. It's read-only - trying to assign to it does nothing. For counting visible characters with emoji, use the spread operator: [...str].length.
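The emoji gotcha in practice:

```javascript
console.log("hello".length); // 5
console.log("😀".length); // 2 - one emoji, two UTF-16 code units
console.log([..."😀"].length); // 1 - spread iterates by code point
console.log([..."hi😀"].length); // 3
```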
Use split(separator) - pass a space to get words, a comma for CSV, or an empty string for individual characters. The spread operator [...str] is better for splitting into characters because it correctly handles emoji and multi-codepoint Unicode, where split('') would break them apart incorrectly. Both approaches return a new array without modifying the original string.
Template literals (backticks) replaced the old string concat dance. With `Hello, ${name}!` you can embed any expression directly instead of using +. They also preserve actual line breaks, making multi-line strings trivial. Prefer template literals whenever you're building strings with dynamic content - the code is cleaner and much easier to read.
padStart() pads from the left, padEnd() from the right - both until the string hits the target length. The classic use case for padStart is zero-padding numbers: '5'.padStart(3, '0') gives '005'. Useful for formatting IDs, timestamps, or aligned table columns. If the string is already at or longer than the target length, nothing happens.
Use includes() when you just need to know if something is there - it returns a clear true/false and reads like plain English. Use indexOf() when you also need the position. A common mistake is writing if (str.indexOf('x')) which is truthy even when the result is 0 (found at the start) - includes() avoids that trap entirely.
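The indexOf-at-position-zero trap looks like this:

```javascript
const str = "xylophone";

// The trap: indexOf returns 0 for a match at the start, which is falsy.
if (str.indexOf("x")) {
  console.log("this branch does NOT run - 0 is falsy");
}

// Correct with indexOf: compare against -1 explicitly.
console.log(str.indexOf("x") !== -1); // true

// Cleaner with includes():
console.log(str.includes("x")); // true
console.log(str.includes("z")); // false
```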
indexOf() scans left to right and returns the first match, lastIndexOf() scans right to left and returns the last. Both return -1 if nothing is found. A practical use: getting just the filename from a path - path.lastIndexOf('/') finds the final slash so you can slice from there. Both methods work on strings and arrays.
search() takes a regex and returns the index of the first match (or -1) - it's basically indexOf() but with regex support. match() returns the actual matched strings as an array, or null if nothing matched. Use match() with the g flag to collect all occurrences. Also consider matchAll() for a more ergonomic way to get all matches with capture groups.
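The three methods side by side:

```javascript
const text = "cat hat bat";

console.log(text.search(/at/)); // 1 - index of the first match
console.log(text.match(/at/g)); // ["at", "at", "at"] - all matches with g flag
console.log(text.match(/xyz/)); // null - no match

// matchAll returns an iterator of full match objects, including
// capture groups (requires the g flag):
const matches = [...text.matchAll(/(\w)at/g)];
console.log(matches.map(m => m[1])); // ["c", "h", "b"]
```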
Implicit coercion is JavaScript quietly converting types behind your back during operations - like '5' + 3 giving '53' instead of 8. It's one of the language's most notorious gotcha areas. The rules are complex: + with a string coerces to string, but - coerces to number. Use === instead of == to avoid comparison coercions, and convert types explicitly when you need control.
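A few of the classic coercion surprises:

```javascript
console.log("5" + 3);  // "53" - + with a string concatenates
console.log("5" - 3);  // 2   - minus coerces to number
console.log("5" * "2"); // 10
console.log(1 + true); // 2   - true coerces to 1

console.log(0 == "");  // true  - == coerces before comparing
console.log(0 === ""); // false - === compares type and value
```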
Explicit conversion means you're deliberately converting a value yourself - Number('42'), String(123), Boolean(0). This is always preferable to letting JavaScript coerce implicitly, because it makes your intent clear and avoids surprises. Number() is stricter than parseInt() - Number('42px') gives NaN, while parseInt('42px') gives 42.
There are three: alert() shows a message with an OK button (fire and forget), confirm() shows OK/Cancel and returns a boolean, and prompt() shows a text field and returns the input string (or null if cancelled). These are blocking calls - they freeze the entire page while open. Fine for quick debugging, but never use them in production UI; build a proper modal instead.
The falsy values in JavaScript are: false, 0, 0n (BigInt zero), empty string ('', "", ``), null, undefined, and NaN. Everything else is truthy - including empty arrays and empty objects. The biggest gotcha is that [] and {} are truthy - if you want to check for an empty array, use arr.length === 0, not just if (!arr).
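The full falsy list, plus the empty-collection gotcha:

```javascript
const falsyValues = [false, 0, 0n, "", null, undefined, NaN];
console.log(falsyValues.some(Boolean)); // false - all seven are falsy

// Empty arrays and objects are truthy:
console.log(Boolean([])); // true
console.log(Boolean({})); // true

// So check emptiness explicitly:
const arr = [];
console.log(arr.length === 0); // true
```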
The key difference is when the condition is checked. A while loop checks first - if the condition is false from the start, the body never runs. A do...while loop runs the body once before checking, guaranteeing at least one execution. Use do...while when you need to run at least one iteration - like reading user input until it's valid.
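The difference in one snippet:

```javascript
let attempts = 0;

// while: condition checked first - the body never runs here
while (false) {
  attempts++;
}
console.log(attempts); // 0

// do...while: the body runs once before the condition is checked
do {
  attempts++;
} while (false);
console.log(attempts); // 1
```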
Each case is a possible matching value for the switch expression - when the expression equals the case value (using strict equality), that block runs. Always end each case with break, or execution falls through to the next case, which is almost never what you want. Use a default case as a fallback for unmatched values, similar to an else.
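For example (using return instead of break here, which also exits the switch; the fruit categories are just for illustration):

```javascript
function describe(fruit) {
  switch (fruit) {
    case "apple":
    case "pear": // no break/return: intentional fall-through groups cases
      return "pome fruit";
    case "cherry":
      return "stone fruit";
    default:
      return "unknown";
  }
}

console.log(describe("apple"));  // pome fruit
console.log(describe("cherry")); // stone fruit
console.log(describe("kiwi"));   // unknown
```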
Scoping is the set of rules that determines where a variable is accessible. JavaScript has lexical scoping, meaning scope is determined by where code is written, not where it's called from. Inner scopes can access outer scope variables, but not vice versa - this is the basis of closures and is one of the most important concepts to internalize.
There are three scopes: global (accessible everywhere), function (accessible only within the function it's declared in), and block (accessible only within the {} block, and only for let/const). The addition of block scope in ES6 was a big deal - it lets you limit a variable's lifetime to exactly the loop or if-branch that needs it, rather than the entire function.
A function is a reusable, named block of code. The important thing in JavaScript is that functions are first-class - you can assign them to variables, pass them as arguments, and return them from other functions, just like any other value. This is the foundation for callbacks, closures, higher-order functions, and most of the patterns modern JavaScript is built on.
Functional programming centers on three ideas: pure functions (no side effects, same input always returns same output), immutability (don't mutate data, create new copies), and higher-order functions (functions that take or return other functions). In practice this means favoring map, filter, and reduce over loops and mutations. Pure functions are trivially testable and easy to reason about.
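A small functional-style pipeline (the orders data is made up for illustration):

```javascript
const orders = [
  { item: "book", price: 12, qty: 2 },
  { item: "pen", price: 2, qty: 10 },
  { item: "lamp", price: 40, qty: 1 }
];

// Pure, declarative pipeline: no loops, no mutation of `orders`.
const total = orders
  .filter(o => o.price >= 10)      // keep expensive items
  .map(o => o.price * o.qty)       // line totals
  .reduce((sum, n) => sum + n, 0); // sum them up

console.log(total); // 64
```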
Generators are functions that can pause mid-execution with yield and resume later when you ask for the next value. Defined with function*, they return an iterator you call .next() on. This is useful for lazy sequences (generating values only on demand) and implementing custom iteration. In day-to-day code you'll see them less than async/await, but understanding them helps with advanced patterns.
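A minimal generator producing a lazy, potentially infinite sequence:

```javascript
// Infinite lazy sequence of IDs - values are produced only on demand.
function* idGenerator() {
  let id = 1;
  while (true) {
    yield id++;
  }
}

const ids = idGenerator();
console.log(ids.next().value); // 1
console.log(ids.next().value); // 2
console.log(ids.next().value); // 3
```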
You have four options: function declaration (function foo() {}) - hoisted and callable anywhere in its scope; function expression (const foo = function() {}) - not hoisted; arrow function (const foo = () => {}) - shorter syntax and lexical this; and the Function constructor - basically never use this in real code since it evaluates a string. In modern code, declarations and arrow functions cover 95% of use cases.
A function expression is a function assigned to a variable rather than declared standalone - const double = function(x) { return x * 2; }. Because it's assigned like any other value, it's not hoisted, so you can only call it after the line it's defined on. You can give function expressions a name (useful for stack traces in debugging), but the name is only accessible inside the function itself.
An anonymous function has no name - common as inline callbacks where you don't need to reuse the function elsewhere. You see them constantly in array methods (arr.map(x => x * 2)), event listeners, and promise chains. The downside is that unnamed functions show up as 'anonymous' in stack traces, which can make debugging harder - so if a callback is complex or reused, give it a name.
The main practical difference is hoisting - declarations are fully hoisted so you can call them before they appear in the code, which can be convenient but also surprising. Expressions aren't hoisted, enforcing a clear 'define before use' order. Most style guides prefer function declarations for top-level reusable functions, and expressions (usually arrow functions) for inline or callback use.
The critical difference is this binding. Regular function expressions get their own this set by how they're called - which causes the classic 'lost context' bug in callbacks and event handlers. Arrow functions capture the this from the enclosing scope at definition time and never change it. Arrow functions also can't be used as constructors and don't have their own arguments object.
Parameters are the names in the function definition - they're placeholders. Arguments are the actual values you pass when calling the function. In function add(a, b), a and b are parameters; in add(2, 3), 2 and 3 are arguments. Easy to mix up verbally, but it matters in interviews and when reading docs about rest parameters vs rest arguments.
Callback hell is when you nest async callbacks three, four, or five levels deep because each operation depends on the previous one's result - you get a triangle of doom that's nearly impossible to read or debug. It was a real problem before Promises and async/await existed. The fix is to use async/await or chain Promises - both flatten the nesting into sequential-looking code that's much easier to follow.
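A sketch of the nesting problem (these getUser/getOrders/getTotal functions are made up, and synchronous here just to show the shape):

```javascript
// Simulated steps using callbacks (synchronous for clarity).
function getUser(id, cb) { cb({ id, name: "Alice" }); }
function getOrders(user, cb) { cb(["order1", "order2"]); }
function getTotal(orders, cb) { cb(orders.length * 10); }

// Callback hell: each step nests inside the previous one.
let result;
getUser(1, user => {
  getOrders(user, orders => {
    getTotal(orders, total => {
      result = total;
    });
  });
});
console.log(result); // 20

// With Promises or async/await, the same flow flattens into
// sequential-looking code:
// const user = await getUser(1);
// const orders = await getOrders(user);
// const total = await getTotal(orders);
```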
setTimeout(fn, delay) schedules a function to run once after delay milliseconds, returning a timer ID. Pass that ID to clearTimeout(id) to cancel it before it fires - this is important in React components to avoid calling setState after a component has unmounted. Note that the delay is a minimum, not a guarantee - if the main thread is busy, the callback fires later.
setInterval(fn, delay) calls your function repeatedly every delay milliseconds until you stop it. Always capture the returned ID and call clearInterval(id) when you're done - forgetting to clear it is a common memory leak. One gotcha: if your callback takes longer to run than the interval delay, calls can pile up. For that pattern, chain setTimeout calls recursively instead.
Debouncing delays execution until a burst of events stops - each new event resets the timer. The classic example is a search box: you don't want an API call on every keystroke, just when the user pauses typing. Implement it by clearing and resetting a setTimeout on each event. Libraries like Lodash have a battle-tested _.debounce() utility, so use that rather than rolling your own.
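A minimal debounce sketch of the clear-and-reset pattern described above (Lodash's _.debounce adds leading/trailing options and cancellation):

```javascript
function debounce(fn, delay) {
  let timerId;
  return function (...args) {
    clearTimeout(timerId); // each new call resets the timer
    timerId = setTimeout(() => fn.apply(this, args), delay);
  };
}

let calls = 0;
const search = debounce(() => { calls += 1; }, 200);

// Three rapid "keystrokes": nothing fires yet -
// the callback only runs after 200ms of silence.
search(); search(); search();
console.log(calls); // 0 (still pending)
```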
Throttling limits how often a function fires regardless of how many events come in - like allowing a call at most once every 200ms. Unlike debouncing (which waits for silence), throttling guarantees regular execution during continuous events. Use it for scroll or mousemove handlers where you want smooth, regular updates without hammering the browser. Lodash's _.throttle() is the pragmatic choice for production code.
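A minimal leading-edge throttle sketch (production code usually wants Lodash's _.throttle, which also handles trailing calls):

```javascript
function throttle(fn, limit) {
  let lastRun = 0;
  return function (...args) {
    const now = Date.now();
    if (now - lastRun >= limit) { // enough time passed since last run?
      lastRun = now;
      fn.apply(this, args);
    }
  };
}

let updates = 0;
const onScroll = throttle(() => { updates += 1; }, 200);

// A burst of scroll events: only the first one gets through.
onScroll(); onScroll(); onScroll();
console.log(updates); // 1
```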
DRY - Don't Repeat Yourself - means every piece of logic should live in one place. When you copy-paste code, you create multiple places to update when requirements change, and they inevitably drift out of sync. Extract shared logic into functions, modules, or components. That said, don't over-DRY prematurely - sometimes two similar-looking pieces of code have genuinely different concerns, and forcing them together creates more coupling than it's worth.
Design patterns are named, reusable solutions to common software problems - they're vocabulary for communicating architectural decisions. In JavaScript you'll encounter the Module pattern (encapsulating code with closures), Observer/Pub-Sub (event-driven communication), Factory (creating objects without exposing constructor logic), and Singleton (ensuring one instance). Knowing the names helps you recognize and discuss architectural choices clearly.
console.log() dumps a raw representation of any value - great for quick inspection. console.table() takes an array of objects and renders a proper table with column headers, making it dramatically easier to compare rows of structured data. When debugging a list of users or orders, console.table(data) is far more readable than scrolling through nested log output.
JavaScript has only one number type - all numbers, whether integers or decimals, are stored as 64-bit IEEE 754 floating-point values. This means integers up to 2^53 are represented exactly, but beyond that you lose precision. Special numeric values include NaN (invalid operation), Infinity (overflow or divide by zero), and -Infinity. For integers larger than 2^53, use BigInt.
Regular numbers cap out at Number.MAX_SAFE_INTEGER (2^53 - 1) before they lose integer precision. BigInt handles arbitrarily large integers exactly, which matters for things like database IDs from 64-bit systems or financial calculations. The big limitation: you can't mix them in arithmetic - 10n + 5 throws a TypeError - you have to convert explicitly. Also, BigInt doesn't support decimal fractions.
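A quick sketch of the precision cliff and the explicit-conversion rule:

```javascript
// Past 2^53, regular numbers silently lose integer precision.
console.log(2 ** 53 === 2 ** 53 + 1); // true - the +1 is lost

// BigInt (the n suffix) stays exact at any size.
console.log(2n ** 53n + 1n); // 9007199254740993n

// Mixing BigInt and number in arithmetic throws a TypeError -
// convert explicitly in whichever direction you need.
console.log(10n + BigInt(5)); // 15n
console.log(Number(10n) + 5); // 15
```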
Internally all JavaScript numbers are 64-bit IEEE 754 floats, but you can write numeric literals in different bases: hex with 0x prefix, octal with 0o, and binary with 0b. These are just different ways to write the same underlying number. Hex is common for colors and bit masks, binary for flags, and octal occasionally for file permissions. They're all just number type at runtime.
toFixed(n) rounds and formats to n decimal places as a string - useful for displaying currency, but remember it returns a string so don't do math on it afterward. toString(base) converts to a string in any base, so (255).toString(16) gives 'ff'. Number.parseInt() and Number.parseFloat() are the same as the global versions. Number.isNaN() is the safe NaN check, and valueOf() is rarely called directly - it's used internally by JavaScript during type coercion.
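A few of these methods in action (the `price` value is just an example):

```javascript
const price = 19.567;

// toFixed returns a *string*, rounded to n decimal places.
console.log(price.toFixed(2)); // "19.57"

// toString(base) converts to other bases.
console.log((255).toString(16)); // "ff"
console.log((255).toString(2));  // "11111111"

// Number.isNaN only flags the real NaN; the global isNaN coerces first.
console.log(Number.isNaN('oops')); // false - a string, not NaN
console.log(Number.isNaN(0 / 0));  // true
```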
The Date object represents a moment in time as milliseconds since the Unix epoch (Jan 1, 1970 UTC). Create one with new Date() for now, or pass a timestamp, date string, or explicit year/month/day values. One major gotcha: months are zero-indexed (January is 0), which trips up everyone. For anything non-trivial in production - parsing, formatting, timezones - use a library like date-fns, or the upcoming Temporal API once it's widely available, rather than fighting the built-in Date.
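The zero-indexed month gotcha in one sketch (the date itself is arbitrary):

```javascript
// Month is zero-indexed: 0 = January, 11 = December.
const launch = new Date(2024, 0, 15); // Jan 15, 2024 (local time)

console.log(launch.getFullYear()); // 2024
console.log(launch.getMonth());    // 0 - January, not month "0 of 1-12"
console.log(launch.getDate());     // 15

// Timestamps are milliseconds since the Unix epoch.
console.log(new Date(0).toISOString()); // "1970-01-01T00:00:00.000Z"
```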
toLocaleString(locale, options) is the quick way - works on both numbers and dates. For better performance when formatting many values, use Intl.NumberFormat or Intl.DateTimeFormat - create the formatter once and call .format() repeatedly, since constructing the formatter is the expensive part. These APIs handle all the locale quirks - decimal separators, currency symbols, date ordering - automatically.
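A sketch of the create-once, format-many pattern (exact output can vary slightly by runtime locale data):

```javascript
// Build the formatter once, reuse it - construction is the costly part.
const usd = new Intl.NumberFormat('en-US', {
  style: 'currency',
  currency: 'USD',
});
console.log(usd.format(1234.5)); // "$1,234.50"

// Same number, different locale: separators flip automatically.
const de = new Intl.NumberFormat('de-DE');
console.log(de.format(1234.5)); // "1.234,5"

const longDate = new Intl.DateTimeFormat('en-US', { dateStyle: 'long' });
console.log(longDate.format(new Date(2024, 0, 15))); // "January 15, 2024"
```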
The DOM (Document Object Model) is the browser's live, in-memory tree representation of your HTML. When the browser parses your HTML, it builds this tree, and JavaScript can then walk, query, and modify it - every element, text node, and attribute becomes an object you can interact with. Frameworks like React and Vue ultimately work by manipulating the DOM under the hood too.
Without the DOM, JavaScript would have no way to interact with the page - it would just be a scripting language with no connection to what users see. The DOM is the bridge between your JavaScript logic and the HTML structure, letting you react to user input, update content dynamically, and build SPAs. Every front-end framework you'll ever use is built on top of DOM APIs.
In practice you'll mostly use querySelector and querySelectorAll since they accept any CSS selector and are more flexible. getElementById is still the fastest for single lookups by id. The older getElementsBy* methods return live HTMLCollections that auto-update as the DOM changes, while querySelectorAll returns a static NodeList - a gotcha that trips people up when looping over elements while mutating the DOM.
A Node is the base type for everything in the DOM tree - elements, text nodes, comments, and even the document itself are all Nodes. An Element is a more specific Node that represents an actual HTML tag with attributes and child elements. HTMLCollection is a live, array-like object from older DOM methods that auto-updates as the DOM changes, which can cause unexpected behavior when iterating - most devs convert it to a real array with Array.from() before looping.
innerHTML parses and renders whatever HTML string you give it, which is powerful but dangerous - never use it with user-supplied content or you're opening yourself up to XSS attacks. textContent just treats everything as a plain string, so it's safe for user data. Rule of thumb: use textContent when you're setting text, innerHTML only when you genuinely need to inject HTML markup you control.
When you click a button inside a div, the click event fires on the button first, then bubbles up through the div, body, and document - that's event bubbling. It's the default behavior for almost all events and is actually really useful for event delegation, where you attach one listener on a parent instead of dozens on each child. You can stop it with event.stopPropagation(), but only do that when you truly need to - it makes debugging event flows much harder.
Event capturing is the opposite of bubbling - the event travels down from the document root to the target element before it fires on that element. In practice you rarely need it, but it's useful when you want to intercept an event before it reaches its target. To opt in, pass true (or { capture: true }) as the third argument to addEventListener - by default it's false, meaning you're in the bubbling phase.
Capturing goes top-down (document to target), bubbling goes bottom-up (target to document) - they're the two phases of the same event journey. Your addEventListener calls use bubbling by default, which is what you want 99% of the time. The real-world implication: event delegation relies on bubbling, and if you ever call stopPropagation() on a child element, the parent's bubbling listeners won't fire.
innerHTML is the content between an element's opening and closing tags - it's what's inside. Attributes are metadata on the opening tag itself, like id, class, href, or data-*. To read or write attributes use getAttribute/setAttribute, or the direct property shortcut (element.id, element.className). A common gotcha: some attributes map to differently-named properties (class becomes className in JS), so the direct property approach is usually cleaner when it's available.
Use element.style.propertyName to set inline styles directly - just remember CSS properties become camelCase in JS (background-color becomes backgroundColor). For toggling styles, element.classList.add/remove/toggle is usually a better approach since it keeps your styles in CSS where they belong. Avoid modifying element.style directly for complex styling - it couples your JS to visual concerns and is hard to override with stylesheets.
An event is the browser telling you something happened - a click, a keypress, a network response completing, the page loading. JavaScript is event-driven by design, so almost everything you do in the browser is in response to an event. You wire up responses using addEventListener, and the event object passed to your handler contains useful info like which element was clicked, which key was pressed, and mouse coordinates.
preventDefault() tells the browser "I'll handle this myself, don't do what you'd normally do." Classic use cases: stopping a form from submitting so you can validate first, preventing a link from navigating for client-side routing, or disabling a right-click context menu. It does not stop bubbling - that's stopPropagation(). Many developers confuse these two, so it's worth keeping them straight.
For mouse events you've got click, dblclick, mousedown, mouseup, mousemove, mouseover, and mouseout - plus the newer mouseenter/mouseleave which don't bubble, unlike mouseover/mouseout. For keyboards: keydown fires first, then keypress (deprecated - avoid it), then keyup. In practice, use keydown for most keyboard handling. Also worth knowing: the input and change events for form fields, and focus/blur for focus management.
addEventListener is the right way to attach event handlers - unlike the old onclick property approach, you can attach multiple handlers for the same event without overwriting each other. The handler receives an event object with all the context you need. Always use named functions when you need to remove the listener later - anonymous arrow functions can't be removed because removeEventListener needs the exact same function reference.
Call element.removeEventListener(eventType, handler) with the exact same function reference you used when adding it - that's the critical part. If you used an anonymous function or inline arrow function, you're out of luck, it can't be removed. This is why any listener you might need to clean up should be stored in a named variable. Forgetting to remove listeners is a common source of memory leaks, especially in SPA frameworks where components mount and unmount repeatedly.
Call document.createElement('div'), set whatever properties you need (id, className, textContent, dataset attributes), then insert it with appendChild or insertBefore. For more control, insertAdjacentElement and insertAdjacentHTML give you beforebegin/afterbegin/beforeend/afterend positions. Performance tip: if you're creating many elements at once, build them in a DocumentFragment first and append the fragment in one shot - it's much faster than hitting the DOM repeatedly.
The BOM is everything the browser exposes beyond the DOM - basically the browser itself as an API. window is the global object (all globals are technically properties of window). location lets you read or redirect the current URL. history gives you back/forward navigation and pushState for SPA routing. navigator is where you check things like userAgent, language, or geolocation. screen is rarely used in practice, but tells you about the display resolution.
A JS engine takes your JavaScript source code and runs it - parsing, compiling to bytecode, and optimizing hot paths with JIT compilation. V8 (Chrome/Node.js) is the most relevant one for most devs today, SpiderMonkey powers Firefox, and JavaScriptCore (also called Nitro) is in Safari. The engine is what makes Node.js possible - V8 was extracted from Chrome and embedded into a server runtime.
The call stack is JS's way of tracking where it is in execution - every time you call a function, a new frame gets pushed on the stack, and when it returns it gets popped off. Since JS is single-threaded, the stack can only process one thing at a time. If you see "Maximum call stack size exceeded", that's a stack overflow from infinite recursion. The stack is also what you see in your browser's debugger as the call trace when an error is thrown.
Every time JavaScript runs code, it does so inside an execution context - a wrapper that contains the local variables, the scope chain for looking up outer variables, and the value of this. There's one global execution context when your script starts, and a fresh one is created for each function call. Understanding execution contexts is key to understanding hoisting, closures, and why this behaves the way it does.
Every item on the call stack is an execution context - they're two sides of the same coin. When you call a function, a new execution context is created and pushed onto the stack, and the engine always executes whatever's at the top. When the function finishes, that context is popped and the engine resumes the one below it. This is exactly what you see in a stack trace when an error is thrown.
The heap is where all your objects, arrays, and functions live in memory - it's the large, unstructured pool that the engine allocates from at runtime. Unlike the stack (which is fast and ordered), heap allocation is more flexible but less predictable. The garbage collector periodically scans for objects that are no longer reachable and frees that memory, which is why holding onto references unnecessarily (like in closures or global variables) can cause memory leaks.
A compiler reads your entire source code upfront and translates it into machine code or bytecode before any of it runs - think C or Java. The compiled output runs fast since the heavy lifting is done upfront. The downside is you need a build step, and errors only surface at compile time rather than interactively as you run the code.
An interpreter reads and executes code line by line at runtime with no upfront compilation step - you just run the source directly. This makes it great for scripting and rapid development since you get immediate feedback. Traditional JavaScript was purely interpreted, which is why it was historically slower than compiled languages. Modern engines like V8 are much smarter now, using JIT compilation to get near-compiled performance.
A compiler does all the translation upfront before execution - faster runtime, but requires a build step. An interpreter does it on the fly, line by line - more flexible but traditionally slower. Modern JS engines like V8 use JIT (Just-In-Time) compilation, which is the best of both: it starts running immediately, then identifies hot code paths and compiles those to native machine code. This is why JS performance has improved so dramatically over the years.
Every execution context has three main components: the variable environment (where local variables and functions are stored), the scope chain (the chain of outer environments for variable lookup), and the this binding. The global execution context is created once when your script loads - in the browser, this maps to the window object. Function execution contexts are created fresh on every function call, which is how local variables stay isolated between calls.
The variable environment is basically the scope's memory - it holds all the variables and function declarations available in that execution context. It's set up during the creation phase before any code runs, which is why hoisting works: var declarations get initialized to undefined and function declarations get fully stored. This is also why you can call a function before its definition in the code, but can't access a let variable before its declaration line.
When JS can't find a variable in the current scope, it walks up the scope chain - checking each enclosing scope in order until it finds it or hits the global scope (and throws a ReferenceError if it's not there). The scope chain is set at the point a function is defined, not where it's called - that's lexical scoping. Closures are just functions that maintain a reference to their outer scope chain even after that outer function has returned.
this is one of the most confusing parts of JavaScript because its value depends entirely on how a function is called, not where it's defined. In an object method, this is the object. In a regular function, this is the global object (or undefined in strict mode). Arrow functions don't have their own this - they inherit it from the surrounding scope, which makes them great for callbacks inside methods. When in doubt, use bind() or an arrow function to lock in the this you want.
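A small sketch of setting `this` explicitly with call/apply/bind (the `introduce` function and names are illustrative):

```javascript
function introduce() {
  return `I'm ${this.name}`;
}

const alice = { name: 'Alice' };
const bob = { name: 'Bob' };

// call and apply set `this` for a single invocation.
console.log(introduce.call(alice)); // "I'm Alice"
console.log(introduce.apply(bob));  // "I'm Bob"

// bind returns a new function with `this` locked in permanently.
const introduceAlice = introduce.bind(alice);
console.log(introduceAlice()); // "I'm Alice"
```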
Before any code runs, the engine goes through a creation phase: it scans for all var declarations and initializes them to undefined, stores function declarations in full, establishes the scope chain, and sets up the this binding. Then in the execution phase it runs your code top to bottom, assigning real values. This two-pass behavior is exactly what causes hoisting - var and function declarations are processed early, before any assignment code runs.
In strict mode, this is undefined inside a plain function call instead of defaulting to the global object. This is actually the safer behavior - before strict mode, accidentally calling a method without an object context would silently mutate global state, which is a nasty bug to track down. Strict mode makes these mistakes throw errors immediately. Arrow functions are unaffected since they don't have their own this regardless of the mode.
Primitives (strings, numbers, booleans, null, undefined, Symbol, BigInt) are stored by value and are immutable - when you copy one you get a completely independent value. Objects (including arrays and functions) are stored by reference - the variable just holds a pointer to the data in the heap. This is why const with objects doesn't prevent mutation: you can change the object's contents, you just can't reassign the variable to point to a different object.
A "normal copy" is just assigning one variable to another with =. For primitives it works as expected - you get an independent value. For objects and arrays you're just copying the reference, so both variables point to the same data in memory. This trips developers up constantly: they think they're working with a copy but they're actually mutating the original. It's the root cause of a whole class of subtle bugs.
A shallow copy creates a new object with the same top-level properties, but any nested objects or arrays are still shared references to the original. Use spread ({ ...obj } or [...arr]) or Object.assign() to make one. It's fine for flat objects, but if you shallow-copy { user: { name: 'Alice' } } and then change the name, you've changed it in both copies. For deeply nested structures, you need a deep copy.
A deep copy clones everything recursively - nested objects, nested arrays, all the way down - so modifying the copy never affects the original. The classic trick is JSON.parse(JSON.stringify(obj)), but it drops functions, converts Dates to strings, and can't handle circular references. The newer structuredClone() built-in is a much better option for most cases. For edge cases with functions or special types, use a library like Lodash's _.cloneDeep().
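The shallow-vs-deep difference in a sketch (structuredClone requires a modern browser or Node 17+; the object shape is illustrative):

```javascript
const original = { name: 'Alice', address: { city: 'Oslo' } };

// Shallow copy: new top level, but nested objects are still shared.
const shallow = { ...original };
shallow.address.city = 'Bergen';
console.log(original.address.city); // "Bergen" - the original changed too!

// Deep copy with structuredClone: nothing is shared.
const deep = structuredClone(original);
deep.address.city = 'Tromso';
console.log(original.address.city); // still "Bergen" - original untouched
```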
Objects are non-primitive and live in heap memory - the large, dynamically managed pool the engine uses for reference types. Your variable doesn't hold the object itself, just a reference (think pointer) to where it lives in the heap. Primitives on the other hand are stored directly by value in the execution context. This reference vs. value distinction is the foundation for understanding copy behavior, equality checks, and how closures work.
instanceof walks up an object's prototype chain to check if a constructor's prototype appears anywhere in it - in other words, it tells you whether an object was created from a particular class or constructor. One gotcha: it doesn't work on primitives ("hello" instanceof String is false), and it can give unexpected results across iframes since they have separate global contexts. For primitive type checks, typeof is your friend.
console.log([] instanceof Array); // true
console.log(new Date() instanceof Date); // true
console.log("hello" instanceof String); // false
Destructuring is a clean ES6 syntax for pulling values out of arrays and objects without writing verbose dot-notation chains. Instead of const name = person.name; const age = person.age; you just write const { name, age } = person. You'll use this constantly - especially in function parameters, when consuming API responses, and with React hooks like useState which returns [value, setter].
The killer use case here is swapping two variables without a temp variable: [a, b] = [b, a]. Before destructuring you needed three lines (let temp = a; a = b; b = temp;) - now it's one. It's a popular interview question because it shows you know destructuring well. To actually reverse an entire array, use arr.reverse() (mutates the original) or [...arr].reverse() if you want to keep the original intact.
A JavaScript function can return only one value, but that value can be an array or object carrying multiple pieces of data. Return an array when order matters (like useState does: return [value, setter]), or return an object when you want named properties for clarity. At the call site, destructure immediately: const [x, y] = getCoords() or const { width, height } = getDimensions(). Returning an object is usually clearer since callers don't need to remember the order.
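Both return styles side by side (getDimensions and minMax are made-up example functions):

```javascript
// Object return: named properties, order doesn't matter at the call site.
function getDimensions() {
  return { width: 1920, height: 1080 };
}
const { width, height } = getDimensions();
console.log(width, height); // 1920 1080

// Array return: position-based, like React's useState.
function minMax(nums) {
  return [Math.min(...nums), Math.max(...nums)];
}
const [min, max] = minMax([3, 1, 4, 1, 5]);
console.log(min, max); // 1 5
```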
Array destructuring is position-based - [a, b] picks the first and second elements regardless of name. Object destructuring is name-based - { x, y } pulls properties with those exact names. You can skip array elements with commas (const [, second] = arr), set defaults, rename, and use rest (...). Once you get the hang of it, you'll reach for destructuring constantly when working with API responses and function parameters.
Mirror the nested structure in your pattern: const [a, [b, c]] = [1, [2, 3]]. The pattern shape has to match the data shape, which can get unreadable if you go more than 2-3 levels deep. For deeply nested data, it's usually better to destructure in multiple steps rather than cramming it all into one expression. You'll hit this pattern with API responses that return coordinates or matrix data.
Add = defaultValue after the variable name in your pattern: const [a = 0, b = 0] = arr or const { name = 'anonymous' } = user. The default only kicks in when the value is undefined - not null, not 0, not empty string. This is really handy when working with optional config objects or partially-defined API responses where some fields might be missing.
Object destructuring lets you pull specific properties out of an object in one clean line - much better than doing const name = person.name; const age = person.age; separately. You'll use this everywhere: in function parameters to pick out just the props you need, when consuming API responses, and when working with framework APIs. Only extract the properties you actually need - it makes the code self-documenting about what data you're working with.
The basics: const { name, age } = person extracts those properties into same-named variables. To rename, use a colon: const { name: fullName } = person. To set a default: const { age = 25 } = person. You can combine both: const { name: fullName = 'Guest' } = person. The colon syntax looks weird at first but you'll internalize it quickly - left side is the object's property name, right side is your new variable name.
Use the rest pattern (...) at the end of your destructuring to scoop up whatever's left: const [head, ...tail] = arr grabs the first element and puts the rest in an array. For objects: const { id, ...data } = response lets you separate the id while keeping everything else together. The rest element must always come last - putting it anywhere else is a syntax error. This is great for extracting a few specific properties while spreading the rest somewhere else.
Use a colon after the property name: const { userId: id } = response extracts the userId property but stores it in a variable called id. This is really useful when an API returns property names that conflict with existing variable names, or when the backend names aren't descriptive enough in your context. Remember: left side of the colon is the object's property name, right side is your new variable name.
Assign defaults inline: const { role = 'user', limit = 10 } = config. If the property is missing or undefined, the default kicks in - but not for null. This is the standard pattern for optional config objects: function init({ timeout = 3000, retries = 3 } = {}) {}. The = {} at the end means the function still works if called with no arguments at all.
Mirror the object's nesting in your pattern: const { address: { city, zip } } = person pulls out city and zip from person.address directly. Note that address itself won't be a variable here - only city and zip are. Nested destructuring can get hard to read if you go too deep; for complex structures, consider destructuring in multiple steps. Also, give the intermediate object a default - const { address: { city, zip } = {} } = person - because you'll get a TypeError if address is undefined.
Put destructuring directly in the parameter signature: function render({ title, body, author = 'Anonymous' }) {}. This is much cleaner than accessing props.title, props.body inside the function body - the parameter list itself documents what shape of object the function expects. Always add = {} as the parameter default so the function doesn't blow up if called without arguments. You'll see this pattern constantly in React components and Express middleware.
The spread operator (...) unpacks the elements of an iterable - array, string, or object - into individual elements. It's hugely versatile: spread an array into function arguments, merge arrays with [...arr1, ...arr2], or clone/merge objects with {...obj1, ...obj2}. Just remember it's a shallow copy - nested objects are still references.
For arrays use [...original], for objects use {...original} - both give you a new top-level container with the same values. The gotcha is that this is only one level deep - if your array contains objects or your object has nested objects, those inner values are still shared references. For a true deep clone, reach for structuredClone() or JSON.parse(JSON.stringify(obj)).
Just spread each array inside a new array literal: const combined = [...arr1, ...arr2, ...arr3]. You can mix in individual values too - like [0, ...arr1, ...arr2, 99]. It's cleaner than concat() and handles as many arrays as you need.
Wrap the string in brackets with spread: const arr = [...str], so [...'hello'] gives ['h', 'e', 'l', 'l', 'o']. One advantage over str.split('') is that spread correctly handles Unicode surrogate pairs - emoji and multi-byte characters stay as a single element rather than splitting into broken bytes.
Spread an array directly at the call site: myFunction(...args). So if args = [1, 2, 3], that's the same as myFunction(1, 2, 3) - no more .apply() gymnastics. A classic use case is Math.max(...numbers) since Math.max doesn't accept an array, just individual arguments.
The rest parameter (...) gathers any remaining arguments into a real array - function sum(...nums) {} means you can call it with any number of values. It must come last in the parameter list, and unlike the old arguments object, it gives you a proper array so all array methods work on it. Prefer rest over arguments in any modern code.
Same syntax (...), completely opposite jobs - context is everything. Spread expands: you're at a call site or inside a literal, pushing elements out. Rest collects: you're in a function signature or destructuring pattern, gathering elements in. Think of spread as 'unpack' and rest as 'pack'.
Just add ...args as your last (or only) parameter: function myFunc(first, ...rest) {}. Everything beyond first lands in rest as a real array - so you can immediately do rest.map(), rest.filter(), whatever you need. This replaces the old arguments hack and is much more readable.
Logical operators don't evaluate more than they need to. With &&, if the left side is falsy, the right side is never run - so user && user.getName() won't blow up if user is null. With ||, if the left side is truthy, the right side is skipped - making value || 'default' a common pattern for fallbacks. It's not just optimization, it's often used intentionally for conditional execution.
The bug with || is that it treats 0, '', and false as falsy, so count || 0 always returns 0 even when count is a valid 0. The ?? operator only triggers the fallback for null or undefined - so count ?? 0 correctly returns 0 when count is 0. Anytime you have a value that could legitimately be 0, false, or an empty string, use ?? instead of ||.
The ?? operator provides a fallback only when the left side is null or undefined - making it the right tool for default values when the actual value could be 0, false, or ''. Write config.timeout ?? 3000 and you'll get 3000 only if timeout was never set, not when it's intentionally set to 0. It's one of those features that fixes a whole class of subtle bugs once you start using it.
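The difference in a sketch (the `settings` object is hypothetical; note the intentional 0 and empty string):

```javascript
const settings = { retries: 0, label: '' };

// || treats 0 and '' as "missing" - wrong for these values.
console.log(settings.retries || 3);   // 3 - the intentional 0 is lost
console.log(settings.label || 'n/a'); // "n/a" - the intentional '' is lost

// ?? only falls back on null or undefined.
console.log(settings.retries ?? 3);   // 0
console.log(settings.label ?? 'n/a'); // ""
console.log(settings.timeout ?? 500); // 500 - genuinely unset, so fallback
```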
Optional chaining (?.) lets you safely access deeply nested properties without manually checking each level for null or undefined. Instead of writing if (user && user.address && user.address.city), you just write user?.address?.city. If any part of the chain is null or undefined, the whole expression returns undefined instead of throwing a TypeError. This was a game-changer for working with API responses.
An array is an ordered list of values accessed by a zero-based numeric index - arr[0] is the first element. Elements can be any type, including other arrays or objects, so you can model complex data. In JavaScript, arrays are actually objects under the hood, which explains why typeof [] returns 'object' - use Array.isArray() to properly check.
The most used property is length - it tells you the count of elements and you can even set it directly to truncate the array. Arrays are mutable, zero-indexed, and can hold any mix of types. They're technically objects with numeric keys, which means you can add non-numeric properties to them, though you almost never should.
Always use the literal syntax (const arr = [1, 2, 3]) - it's cleaner and avoids a nasty gotcha with the Array constructor. When you pass a single number to new Array(3), you get an empty array with length 3, not an array containing 3. That ambiguity catches people out; the literal syntax never has this problem.
Indexes are zero-based, so the first element is arr[0] and the last is arr[arr.length - 1] - or in modern JS, arr.at(-1) for negative indexing. You can read or write by index: arr[2] = 'new value'. Accessing an index that doesn't exist returns undefined rather than throwing, which can hide bugs if you're not careful.
sort() works on any array but converts elements to strings before comparing by default - which means [10, 9, 2] sorts as [10, 2, 9] because '10' comes before '2' lexicographically. For numbers, objects, or dates, always pass a comparator. Also worth noting: sort() mutates the original array in place, so clone it first with [...arr].sort(...) if you need to preserve the original.
Pass a comparator any time you're sorting anything other than plain strings - numbers, dates, objects, or locale-sensitive text. Use (a, b) => a - b for ascending numbers and (a, b) => b - a for descending. For objects, sort by a property: arr.sort((a, b) => a.name.localeCompare(b.name)). The comparator must return a negative number, zero, or positive number - that's the contract sort() relies on.
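Both gotchas in one sketch - string-based default sort and the clone-before-sort habit (the `users` data is illustrative):

```javascript
// Default sort compares as strings - wrong for numbers.
console.log([10, 9, 2].sort());                // [10, 2, 9]
console.log([10, 9, 2].sort((a, b) => a - b)); // [2, 9, 10]

// Sorting objects by a property; spread first so the original survives.
const users = [{ name: 'Carol' }, { name: 'Alice' }, { name: 'Bob' }];
const byName = [...users].sort((a, b) => a.name.localeCompare(b.name));
console.log(byName.map((u) => u.name)); // ["Alice", "Bob", "Carol"]
console.log(users[0].name);             // "Carol" - original untouched
```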
push and pop work at the end of the array - push adds, pop removes. unshift and shift work at the start - unshift adds, shift removes. Push/pop are O(1) and very cheap. Unshift/shift are O(n) because every element needs to be re-indexed, so avoid them in performance-sensitive loops on large arrays.
slice is non-destructive - it returns a new array with the selected elements and leaves the original untouched. splice mutates the original array: it can remove elements, insert new ones, or do both at once. The easy way to remember: slice is safe to use anywhere; splice is a scalpel that cuts the original. If you find yourself reaching for splice, check if slice + a rebuild would be cleaner.
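A minimal side-by-side sketch of the two:

```javascript
const original = ['a', 'b', 'c', 'd', 'e'];

// slice: non-destructive copy of a range (end index is exclusive)
const middle = original.slice(1, 4); // ['b', 'c', 'd']
console.log(original.length);        // 5 - untouched

// splice: mutates in place - remove 2 elements at index 1, insert 'X'
const removed = original.splice(1, 2, 'X');
console.log(removed);  // ['b', 'c'] - splice returns what it removed
console.log(original); // ['a', 'X', 'd', 'e']
```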
forEach is tied to arrays and array-like objects - it can't be broken out of early and doesn't work with await properly inside async callbacks. for...of works with any iterable (arrays, strings, Maps, Sets, generators) and supports break, continue, and await. In practice, if you need early exits or async iteration, use for...of; otherwise forEach is fine for simple side effects.
map transforms each element and returns a new array of the same length. filter tests each element and returns a new array with only the elements that pass. reduce walks through the array accumulating a single output value - a sum, an object, a string, whatever you need. In real code you'll use map and filter constantly; reduce is powerful but can be hard to read, so only reach for it when a simpler approach won't do.
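All three in one place, using a made-up list of orders:

```javascript
const orders = [
  { item: 'book', price: 12, shipped: true },
  { item: 'pen', price: 2, shipped: false },
  { item: 'lamp', price: 30, shipped: true },
];

const names = orders.map(o => o.item);          // same length, transformed
const shipped = orders.filter(o => o.shipped);  // only elements that pass
const total = orders.reduce((sum, o) => sum + o.price, 0); // single value

console.log(names);   // [ 'book', 'pen', 'lamp' ]
console.log(shipped.length); // 2
console.log(total);   // 44
```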
some() is like asking 'does any element match?' - it returns true as soon as it finds one match and stops. every() asks 'do all elements match?' - it returns false as soon as it finds one that doesn't. Both short-circuit, so they're efficient. Common use: some() to check if a user has any admin role, every() to validate that all form fields are filled.
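A sketch of those two common uses (the role and field values are illustrative):

```javascript
const roles = ['viewer', 'editor', 'admin'];
const fields = ['Alice', 'alice@example.com', '']; // last field left blank

const isAdmin = roles.some(r => r === 'admin');    // true - stops at 'admin'
const allFilled = fields.every(f => f.length > 0); // false - '' fails the test

console.log(isAdmin);   // true
console.log(allFilled); // false
```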
flat() just flattens nested arrays - flat() by one level, flat(Infinity) all the way down. flatMap() is map + flat(1) in a single pass, which is useful when your map callback returns arrays and you don't want the result to be an array of arrays. A typical use case is splitting sentences into words: sentences.flatMap(s => s.split(' ')). For anything deeper than one level, stick to flat(depth) separately.
An object is a collection of key-value pairs where keys are strings or Symbols and values can be anything - primitives, arrays, functions, or other objects. When the value is a function, we call it a method. In JavaScript, almost everything is an object under the hood, so understanding objects well is fundamental to understanding the entire language.
Use an object literal ({}) for one-off data structures - it's simple and clear. Use a constructor function or class with new when you need to create multiple instances that share behavior through the prototype. Note that literals aren't prototype-free - they still inherit from Object.prototype - they just don't carry any custom shared behavior. In modern code, prefer class syntax over raw constructor functions when you need instances.
Use dot notation (obj.name) for static, known property names - it reads cleanly. Use bracket notation (obj[key]) when the property name is dynamic, stored in a variable, has special characters, or starts with a number. A common real-world use is when processing API responses where you loop over dynamic keys: Object.keys(obj).forEach(key => obj[key]).
Object.keys() gives you an array of property names, Object.values() gives you the values, and Object.entries() gives you [key, value] pairs - all for own enumerable properties only. In practice, entries() is the most powerful because you can destructure it in loops: for (const [key, val] of Object.entries(obj)). It's also the bridge to convert an object into a Map: new Map(Object.entries(obj)).
Object.freeze() makes the object fully read-only at the top level - no adding, removing, or updating properties. Object.seal() locks the shape of the object (no adding or removing) but still lets you update existing property values. The key caveat for both: they're shallow - nested objects are not frozen or sealed, so you need to recursively apply them for deep immutability.
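A small sketch of the difference, including the shallowness caveat:

```javascript
const config = Object.freeze({ host: 'localhost', db: { name: 'app' } });
console.log(Object.isFrozen(config)); // true - top level is locked
// But freeze is shallow: nested objects stay mutable
config.db.name = 'prod';
console.log(config.db.name); // 'prod'

const sealed = Object.seal({ count: 0 });
sealed.count = 5; // allowed - updating existing properties is fine
console.log(sealed.count);            // 5
console.log(Object.isSealed(sealed)); // true
```

In non-strict mode, violating freeze/seal fails silently; in strict mode it throws a TypeError, which is another argument for always running in strict mode.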
A Set is a collection of unique values - it automatically removes duplicates, which makes it perfect for deduplicating arrays: [...new Set(arr)]. Unlike an array, a Set doesn't have indexed access, but iteration order is guaranteed to be insertion order. Sets compare values using the SameValueZero algorithm - essentially === except that NaN equals NaN - so two different objects that look identical are still considered distinct values.
WeakSet is like Set but only holds objects, and it holds them weakly - meaning if nothing else references an object, the garbage collector can clean it up even if it's in the WeakSet. You can't iterate over it or check its size, which limits its use cases. The main practical use is tracking whether you've already processed an object without creating a memory leak.
Since Set is iterable, you can loop over it directly: for (const value of mySet) { console.log(value); }. You can also call mySet.forEach(v => console.log(v)) or explicitly use mySet.values(). All approaches iterate in insertion order, which is a nice guarantee compared to iterating plain object keys.
Use spread - const arr = [...mySet] - or Array.from(mySet). Both work fine; spread is more concise. The really useful pattern here is deduplication: [...new Set(arr)] converts an array to a Set (dropping duplicates) then immediately back to an array.
A Map is a key-value store where keys can be any type - objects, functions, primitives, anything. That's the big difference from plain objects where keys are always strings or Symbols. Maps also maintain insertion order and have a .size property. Use a Map when your keys aren't strings, when you need to frequently add/remove entries, or when key-value pairs are the primary purpose of the data structure.
WeakMap is like Map but keys must be objects and are held weakly - when the key object gets garbage collected, the entry disappears automatically. This makes it great for associating private data with objects without causing memory leaks. A classic pattern is using a WeakMap to store private state for class instances: the data is automatically cleaned up when the instance is gone.
Since Map's set() method returns the Map itself, you can chain multiple set() calls: new Map().set('a', 1).set('b', 2).set('c', 3). It's a builder pattern for initializing a Map inline. That said, for larger Maps it's usually cleaner to just pass an array of pairs to the constructor: new Map([['a', 1], ['b', 2]]).
Map uses the SameValueZero algorithm for key equality, which means objects (including arrays) are compared by reference, not value. So map.set([1,2], 'x') and then map.get([1,2]) returns undefined because that's a different array instance. You'd need to hold a reference to the original array to retrieve the value. This trips people up constantly - if you need value-based lookup, stringify the key or rethink your data structure.
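A quick sketch of reference-based keys in action, plus the one SameValueZero quirk worth knowing:

```javascript
const map = new Map();

map.set([1, 2], 'x');
console.log(map.get([1, 2])); // undefined - a fresh array is a different reference

const key = [1, 2];
map.set(key, 'y');
console.log(map.get(key)); // 'y' - same reference works

// SameValueZero means NaN equals NaN, so NaN is a usable key
map.set(NaN, 'not a number');
console.log(map.get(NaN)); // 'not a number'
```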
Spread the Map to get an array of [key, value] pairs: const arr = [...myMap]. Use [...myMap.keys()] or [...myMap.values()] if you only need one side. This pairs well with the reverse operation - you can round-trip a Map to an array and back: new Map([...myMap]).
Pass an array of [key, value] pairs directly to the Map constructor: new Map([[key1, val1], [key2, val2]]). If your array isn't already in that shape, transform it first - for example, converting an array of objects: new Map(users.map(u => [u.id, u])). That last pattern is really handy for fast O(1) lookups by ID instead of repeatedly calling find().
Pass Object.entries(obj) into the Map constructor: new Map(Object.entries(obj)). Object.entries() gives you [key, value] pairs, which is exactly what Map expects. Going the other way - Map back to Object - use Object.fromEntries(map) which was introduced in ES2019.
With forEach, the callback receives value first, then key - the reverse of what you might expect coming from plain objects: map.forEach((value, key) => {}). With for...of, destructure the entries: for (const [key, value] of map) {}. Both respect insertion order. The for...of approach is usually nicer since it supports break and works naturally with async/await.
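Both styles side by side, with a made-up scores Map:

```javascript
const scores = new Map([['alice', 90], ['bob', 75]]);

// forEach: value first, then key - the reverse of Object-style habits
const lines = [];
scores.forEach((value, key) => lines.push(`${key}=${value}`));
console.log(lines); // [ 'alice=90', 'bob=75' ] - insertion order

// for...of with destructuring - supports break, continue, and await
const passed = [];
for (const [name, score] of scores) {
  if (score < 80) continue; // early control flow is only possible here
  passed.push(name);
}
console.log(passed); // [ 'alice' ]
```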
OOP in JavaScript is a way of structuring code around objects that bundle data and behavior together. Rather than scattered functions and variables, you model things as objects with properties and methods - a User object knows its own name and can log itself in. JavaScript does OOP differently from Java or C++ since it's prototype-based under the hood, but the ES6 class syntax makes it feel familiar.
The 6 OOP principles are: Encapsulation (bundling data and methods, hiding internals), Abstraction (exposing only essential features), Inheritance (child classes reuse parent behavior), Polymorphism (objects of different types used interchangeably), Composition (building from smaller objects), and Interface (defining interaction contracts).
A Class is a blueprint for objects. An Object is an instance with specific state. Encapsulation hides internal implementation details behind a public interface (e.g. a bank account exposes deposit/withdraw but hides balance logic). Abstraction focuses on what something does (e.g. a play button). Inheritance lets child classes reuse parent behavior. Polymorphism lets different object types be used through a common interface.
Inheritance in JavaScript works through the prototype chain. When you access a property, JavaScript looks up the chain until it finds it or reaches null. Use Object.create(parent) to inherit from a prototype, constructor functions with prototype assignment, or ES6 class with extends and super().
Prototypal inheritance is a mechanism where objects inherit properties and methods directly from other objects through the prototype chain. Each object has an internal [[Prototype]] link. If a property is not found on the object, JavaScript searches up the prototype chain until it finds it or reaches null.
Prototypal inheritance can be achieved using Object.create(parent) to set a prototype, using constructor functions with Child.prototype = Object.create(Parent.prototype), or using ES6 class with extends keyword. The class syntax is the most readable and is syntactic sugar over prototype-based inheritance.
A constructor function is a function used with the new keyword to create and initialize objects. It is named with an uppercase letter by convention. When called with new, a new object is created, this is bound to it, properties are assigned, and the object is returned automatically.
ES6 classes provide a cleaner, more familiar syntax for creating objects and implementing inheritance in JavaScript. They use the class keyword to define blueprints, constructor() for initialization, and the extends keyword with super() for inheritance. Internally, they still use prototypal inheritance.
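A minimal class-inheritance sketch showing constructor(), extends, super(), and method overriding:

```javascript
class Animal {
  constructor(name) {
    this.name = name;
  }
  speak() {
    return `${this.name} makes a sound`;
  }
}

class Dog extends Animal {
  constructor(name) {
    super(name); // must call super() before using `this` in a subclass
  }
  speak() {
    return `${this.name} barks`; // overrides the parent method (polymorphism)
  }
}

const rex = new Dog('Rex');
console.log(rex.speak());           // 'Rex barks'
console.log(rex instanceof Animal); // true - the prototype chain links Dog to Animal
```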
Object.create(proto) creates a new object with its [[Prototype]] set to the specified proto object. The new object inherits all properties and methods from its prototype. It is a foundational way to implement prototypal inheritance without using constructor functions or classes.
The prototypal chain is the series of linked [[Prototype]] references that JavaScript traverses when looking up a property or method. It starts at the object itself, then moves to its prototype, then to the prototype's prototype, and so on until Object.prototype (whose [[Prototype]] is null).
prototype is a property on constructor functions and classes that is used when new objects are created with new. __proto__ is the prototype of a specific object instance, though it is considered a legacy way to access it. The prototype chain is the full lookup path JavaScript follows from one prototype to the next when searching for a property or method.
Getters (get keyword) and setters (set keyword) in ES6 classes are special methods that control access to object properties. A getter retrieves a value and a setter updates it. They allow validation, computed values, and encapsulation. Access them like regular properties: obj.name.
If a property and its getter/setter share the same name, accessing the property calls the getter, which tries to access the same property, which calls the getter again, creating infinite recursion. This results in a 'Maximum call stack size exceeded' error. Use a backing property with a different name, often prefixed with an underscore.
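A sketch of the fix, here using a # private field as the backing store (an underscore-prefixed property works the same way):

```javascript
class User {
  #name = ''; // backing field with a different name avoids the recursion

  get name() {
    return this.#name;
  }
  set name(value) {
    if (!value) throw new Error('name is required'); // validation hook
    this.#name = value;
  }
}

const u = new User();
u.name = 'Alice';    // calls the setter
console.log(u.name); // 'Alice' - calls the getter
```

If the getter were written as `get name() { return this.name; }`, reading `u.name` would call the getter again and again until the stack overflows.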
A static method is defined with the static keyword inside a class and belongs to the class itself rather than to any instance. It is called directly on the class (MyClass.myMethod()) without needing to create an instance. Static methods are useful for utility functions related to the class.
With constructor functions: call Parent.call(this) and set Child.prototype = Object.create(Parent.prototype). With ES6 classes: use class Child extends Parent and call super() in the constructor. With Object.create: create child = Object.create(parent) and add properties to the child object.
Use closures inside constructor functions to create private variables and expose only getter/setter methods. In ES6 classes, use the # prefix for private fields (e.g. #name) to restrict access to within the class body. Convention-based 'protection' uses an underscore prefix to signal internal-only usage.
Use the module pattern with an IIFE to create a closure where private variables and functions are inaccessible from outside. In ES6 classes, use the # prefix syntax for truly private fields and methods that cannot be accessed outside the class body.
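The module pattern in miniature - an IIFE returns a public API while the state stays trapped in the closure:

```javascript
const counter = (function () {
  let count = 0; // not reachable from outside this closure
  return {
    increment() { return ++count; },
    current() { return count; },
  };
})();

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(counter.count);       // undefined - the variable is hidden
```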
Method chaining allows calling multiple methods in a single expression on the same object. Implement it by returning this at the end of each method in an ES6 class. For example: obj.add(5).subtract(3) works when both add() and subtract() return this.
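A minimal chainable class - the only trick is that every method ends with return this:

```javascript
class Calculator {
  constructor() {
    this.value = 0;
  }
  add(n) {
    this.value += n;
    return this; // returning `this` is what enables chaining
  }
  subtract(n) {
    this.value -= n;
    return this;
  }
}

const result = new Calculator().add(5).subtract(3).add(10).value;
console.log(result); // 12
```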
Synchronous code runs line by line in a single thread, blocking execution until each operation completes. Asynchronous code allows other operations to run while waiting for slow tasks like network requests or timers. Asynchronous patterns include callbacks, Promises, and async/await.
AJAX (Asynchronous JavaScript and XML) is a technique for making asynchronous HTTP requests from a browser to a server without reloading the entire page. It enables dynamic content updates. Modern implementations use JSON rather than XML, and the Fetch API or XMLHttpRequest to make requests.
An API (Application Programming Interface) defines how software components interact. SOAP uses XML with strict contracts. REST uses HTTP methods (GET, POST, PUT, DELETE) for CRUD on resources. A Request is a client message to the server. A Response is the server's reply. Request Body contains payload data. Query Params pass extra info in the URL. JSON and XML are common data formats.
Server-client architecture separates a web app into a server (data storage and logic) and a client (user interface). JavaScript on the client makes HTTP requests to the server, which processes them and returns data. This enables dynamic, data-driven web applications without full page reloads.
A Promise represents the eventual success or failure of an asynchronous operation. The fetch() function makes HTTP requests and returns a Promise that resolves to a Response object. Together, they enable clean asynchronous network requests using .then()/.catch() or async/await.
Use .then(result => {}) on a Promise to handle fulfillment and .catch(error => {}) for rejections. Alternatively, use async/await with a try/catch block for more readable code. Both approaches allow you to work with the resolved value once the asynchronous operation completes.
Use the .catch() method after .then() to handle rejected Promises: promise.then(...).catch(error => {}). With async/await, wrap the await in a try/catch block. Always handle rejections to avoid unhandled promise rejection warnings and to gracefully recover from errors.
Create a Promise using new Promise((resolve, reject) => {}). Call resolve(value) when the asynchronous operation succeeds and reject(error) when it fails. Chain .then() to handle the resolved value and .catch() to handle errors.
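A common real-world use is wrapping a callback-style API in a Promise. A sketch using setTimeout:

```javascript
function wait(ms) {
  return new Promise((resolve, reject) => {
    if (ms < 0) {
      reject(new Error('delay must be non-negative')); // failure path
      return;
    }
    setTimeout(() => resolve(`waited ${ms}ms`), ms);   // success path
  });
}

wait(10)
  .then(msg => console.log(msg)) // 'waited 10ms'
  .catch(err => console.error(err));

wait(-1).catch(err => console.log(err.message)); // 'delay must be non-negative'
```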
async/await is syntax that makes asynchronous code look synchronous. Mark a function with async to make it return a Promise, then use await before a Promise to pause execution until it resolves. Use try/catch to handle errors. It is built on top of Promises and makes async code easier to read.
An async function always returns a Promise. The explicit return value becomes the resolved value of that Promise. Access it using await (inside another async function) or .then(). For example: const result = await myAsyncFn() or myAsyncFn().then(result => {}).
Use Promise.all([p1, p2, p3]) to run multiple Promises concurrently. It returns a new Promise that resolves when all input Promises resolve, providing an array of results. If any Promise rejects, Promise.all() rejects immediately with that error.
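A sketch of the pattern - the delay helper stands in for real network calls, and the names are made up:

```javascript
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function loadDashboard() {
  // All three start immediately and run concurrently
  const [user, posts, stats] = await Promise.all([
    delay(30, 'user'),
    delay(10, 'posts'),
    delay(20, 'stats'),
  ]);
  // Results come back in input order, not in finish order
  return { user, posts, stats };
}

loadDashboard().then(data => console.log(data));
// { user: 'user', posts: 'posts', stats: 'stats' }
```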
Wrap potentially failing code in a try block. If an error is thrown, execution jumps to the catch block where you handle the error. The optional finally block always runs regardless of success or failure, making it ideal for cleanup tasks like closing connections.
Promise.race() resolves or rejects as soon as the first Promise settles. Promise.allSettled() waits for all Promises to settle and returns an array of outcome objects (with status, value or reason) regardless of success or failure. Promise.any() resolves with the first fulfilled Promise, or rejects with AggregateError if all fail.
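A small sketch contrasting allSettled and any with one success and one failure:

```javascript
const ok = Promise.resolve('ok');
const bad = Promise.reject(new Error('bad'));

// allSettled never rejects - you get one outcome object per input
Promise.allSettled([ok, bad]).then(results => {
  console.log(results[0]);                // { status: 'fulfilled', value: 'ok' }
  console.log(results[1].status);         // 'rejected'
  console.log(results[1].reason.message); // 'bad'
});

// any resolves with the first fulfilled Promise, ignoring rejections
Promise.any([bad, ok]).then(value => console.log(value)); // 'ok'
```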
Microtasks are high-priority tasks such as Promise.then(), catch(), finally(), and queueMicrotask(). Macrotasks are regular tasks such as setTimeout(), setInterval(), and many browser events. After the current call stack finishes, JavaScript runs all microtasks before it moves to the next macrotask.
setTimeout(() => console.log("timeout"), 0);
Promise.resolve().then(() => console.log("promise"));
// Output:
// promise
// timeout
Microtasks are drained immediately after the current synchronous code finishes and before the event loop continues to the next timer or browser task. This is why Promise callbacks usually run before setTimeout(..., 0). If your code keeps adding many microtasks, they can delay timers, rendering, and other queued work.
A module is an independent, reusable piece of code that encapsulates related functionality in its own file scope. ES6 modules use the import and export keywords to share functionality between files, enabling better code organization, avoiding global scope pollution, and explicit dependency management.
Export items using the export keyword: export const name = ... or export default myFunction. Import them with import: import { name } from './module.js' for named exports, or import myFunction from './module.js' for default exports. Use import * as module from './module.js' for all exports.
CommonJS uses require() and module.exports, and it became popular in Node.js. ES Modules use import and export, and they are the standard module system in modern JavaScript for both browsers and Node.js. ES Modules are easier for tools to analyze, which helps with optimizations like tree shaking.
Polyfilling is adding JavaScript code that emulates newer features in older browsers that don't support them natively. A polyfill detects if a feature is missing and provides a fallback implementation, ensuring cross-browser compatibility without requiring users to update their browsers.
Transpiling converts modern JavaScript code (e.g. ES6+) into an equivalent older version (e.g. ES5) that is compatible with older browsers. Tools like Babel handle this automatically. Transpiling converts syntax like arrow functions and classes, while polyfilling adds missing APIs.
Polyfilling adds new API functionality to older environments by implementing missing features in JavaScript code (e.g. adding Array.from to older browsers). Transpiling converts modern syntax to equivalent older syntax (e.g. arrow functions to regular functions). Polyfilling is about features; transpiling is about syntax.
The DOM (Document Object Model) is a programming interface that represents an HTML page as a tree of objects. When a browser loads a page, it builds this tree from the HTML. JavaScript can then select, read, and modify any node in the tree to update what the user sees without reloading the page.
innerHTML reads or writes the HTML markup inside an element - any tags in the string are parsed and rendered. textContent reads or writes plain text only; HTML tags are treated as literal text and not rendered. Use textContent when inserting user-provided data to avoid XSS security risks.
querySelector returns the first element that matches a CSS selector, or null if nothing matches. querySelectorAll returns a NodeList of all matching elements (which may be empty). Both accept any valid CSS selector, making them more flexible than getElementById or getElementsByClassName.
Use document.createElement() to create a new element in memory, set its content and attributes, then use appendChild() (or insertBefore(), prepend(), append()) to attach it to an existing element in the DOM. The element only becomes visible on the page after it is attached to the document tree.
The modern way is to call element.remove() directly on the element you want to remove. The older approach is parent.removeChild(child), where you call removeChild on the parent element and pass the child to remove. Both produce the same result; element.remove() is simpler and does not require a reference to the parent.
The BOM (Browser Object Model) is a set of objects provided by the browser that let JavaScript interact with the browser environment beyond the page content. The root object is window, which also gives access to history, navigator, screen, location, and Web Storage APIs. Unlike the DOM, the BOM has no official standard but is consistently supported across all modern browsers.
Both store key-value pairs in the browser with the same API (setItem, getItem, removeItem, clear). The difference is lifetime: localStorage data persists indefinitely until explicitly cleared, surviving tab and browser restarts. sessionStorage data is cleared automatically when the browser tab is closed. Both are scoped to the current origin (protocol + domain + port).
Cookies are sent to the server automatically with every HTTP request and support an expiry date; they are limited to about 4 KB. localStorage is client-side only (never sent to the server), has a larger capacity of around 5 MB, and has no built-in expiry. Use cookies for server-side authentication and session data; use localStorage for client-only preferences and caching.
window.location is an object that contains the current page URL broken into parts: href (full URL), hostname, pathname, search (query string), and hash. You can read these to inspect the URL or set location.href to navigate the browser to a new page. location.reload() reloads the current page.
window.navigator is a BOM object that contains information about the user's browser and device. Common properties include userAgent (browser and OS identification string), language (user's preferred language), onLine (whether the browser has network access), and cookieEnabled. It is often used for browser detection and feature checks.
Event bubbling means that when an event fires on an element, it propagates upward through the DOM tree to all its ancestor elements. For example, clicking a button inside a div will also trigger any click listeners on that div and on the document. You can stop this with event.stopPropagation() inside the handler.
Event capturing is the opposite of bubbling. The event travels from the document root down to the target element before the target handles it. You opt into capturing by passing true as the third argument to addEventListener. The full event lifecycle is: capturing (top-down), target, then bubbling (bottom-up).
Event delegation is a pattern where a single event listener is attached to a parent element to handle events for all its children, relying on event bubbling. It is useful because it reduces the number of listeners (better performance for large lists), and it automatically handles dynamically added child elements without needing to attach new listeners each time.
onclick is an element property that can only hold one handler at a time. Assigning a new function overwrites the previous one. addEventListener can attach multiple independent handlers for the same event without overwriting any of them. It also supports capturing phase listeners and can be removed with removeEventListener, making it the preferred approach.
Call event.stopPropagation() inside an event handler to prevent the event from bubbling up (or capturing down) to ancestor elements. To also prevent any other listeners on the same element from running, use event.stopImmediatePropagation(). To prevent the browser's default action (like following a link), use event.preventDefault(). These methods can be combined as needed.
event.stopPropagation() stops the event from moving to parent or child elements in the event flow. event.preventDefault() stops the browser's default action, like following a link or submitting a form. One controls event movement, while the other controls browser behavior, and sometimes both are used together.
An error in JavaScript is something that goes wrong while the code runs. Without proper handling, it stops the normal flow of your program. JavaScript has a built-in Error object that carries information about what went wrong, including a message and the type of error.
console.log(undeclaredVariable); // ReferenceError: undeclaredVariable is not defined
The try block contains code that might throw an error. If something goes wrong, JavaScript jumps to the catch block instead of crashing the whole program. The catch block receives the error object so you can handle it properly.
try {
let result = JSON.parse("invalid json");
} catch (error) {
console.log("Something went wrong:", error.message);
}
The finally block runs no matter what. Whether the code in try succeeds or throws an error, finally always runs. It is great for cleanup tasks like closing a connection or hiding a loading spinner.
try {
fetchData();
} catch (error) {
console.log("Error:", error.message);
} finally {
hideLoadingSpinner(); // runs whether fetchData succeeded or failed
}
JavaScript has several built-in error types:
- SyntaxError - code with invalid syntax that cannot be parsed (usually a typo)
- ReferenceError - trying to use a variable that was never declared
- TypeError - using a value in a way that does not match its type (e.g. calling something that is not a function)
- RangeError - a value is outside of an allowed range
- URIError - a malformed URI passed to functions like decodeURI()
- EvalError - related to the eval() function (you will rarely see this one)
null.toString(); // TypeError
undeclaredVar; // ReferenceError
new Array(-1); // RangeError
You can extend the built-in Error class to create your own error types. This is useful when you want to throw errors with a specific name that makes them easier to identify and catch.
class ValidationError extends Error {
constructor(message) {
super(message);
this.name = "ValidationError";
}
}
try {
throw new ValidationError("Email is required");
} catch (error) {
if (error instanceof ValidationError) {
console.log("Validation failed:", error.message);
}
}
return exits a function and sends a value back to the caller. throw also stops execution but it signals that something went wrong. The thrown value (usually an Error object) travels up the call stack until it hits a catch block. If it never does, it crashes the program.
function divide(a, b) {
if (b === 0) throw new Error("Cannot divide by zero"); // signals failure
return a / b; // normal exit
}
You use .catch() at the end of a Promise chain to catch rejected promises. You can also pass a second argument to .then(), but that only catches errors from that specific step. Using .catch() at the end is cleaner and catches errors from any step in the chain.
fetch("/api/data")
.then(res => res.json())
.then(data => console.log(data))
.catch(error => console.log("Request failed:", error.message));
With async/await you wrap the code in a regular try/catch block. If the awaited promise rejects, the error is caught just like a synchronous error. This makes async error handling much easier to read than chaining .catch().
async function loadUser() {
try {
const res = await fetch("/api/user");
const data = await res.json();
return data;
} catch (error) {
console.log("Failed to load user:", error.message);
}
}
A closure is when a function remembers the variables from the place where it was created, even after that outer function has finished running. In simple words, the inner function "closes over" the outer function's variables and keeps them alive.
function outer() {
let count = 0;
return function inner() {
count++;
return count;
};
}
const increment = outer();
console.log(increment()); // 1
console.log(increment()); // 2
// outer() is done but count is still alive inside increment
Closures are useful for two main things. They let you create private variables that nothing outside the function can access or modify. They also allow functions to hold onto some state between calls, like a counter that keeps incrementing or a value that gets computed only once and remembered.
A common use case is creating a counter or a function factory where each function gets its own private piece of state. Another very common use is the module pattern, where you expose only specific functions and keep everything else private.
function makeMultiplier(multiplier) {
return function (num) {
return num * multiplier;
};
}
const double = makeMultiplier(2);
const triple = makeMultiplier(3);
console.log(double(5)); // 10
console.log(triple(5)); // 15
When you use var inside a loop and create a function for each iteration, all those functions share the same variable. By the time any function runs, the loop is already done, so they all see the final value. Using let fixes this because let creates a new binding for each loop iteration.
// Problem with var
for (var i = 0; i < 3; i++) {
setTimeout(() => console.log(i), 100);
}
// prints: 3, 3, 3
// Fixed with let
for (let i = 0; i < 3; i++) {
setTimeout(() => console.log(i), 100);
}
// prints: 0, 1, 2
Closures keep the outer function's variables alive in memory as long as the inner function exists. This is intentional and useful, but it can cause memory issues if you are not careful. If you hold onto a closure that references a large object and you never let go of that closure, the large object stays in memory longer than needed. Once you no longer need the closure, set the reference to null so the garbage collector can clean it up.
JavaScript uses automatic garbage collection to free memory that is no longer reachable by your program. In simple terms, if nothing in your code can access a value anymore, the engine can remove it from memory. Modern JavaScript engines mainly use a mark-and-sweep style approach: reachable values stay, unreachable values are cleaned up.
Memory leaks happen when your code keeps references to data that is no longer needed, so the garbage collector cannot remove it. Common causes include forgotten timers, event listeners that are never removed, large objects captured by closures, and accidentally storing too much data in global variables or long-lived caches. Cleaning up references when you are done helps avoid these problems.
Technically, every function in JavaScript is a closure because every function has access to its own scope plus the scopes around it. That said, the term "closure" is normally used when a function is still using variables from an outer function that has already finished running. If a function only works with its own arguments and local variables, there is no meaningful closure in play, even though the scope chain still exists. The interesting part of closures is when that outer scope stays alive because of a function holding onto it.
this refers to the object that is currently running the code. Its value depends on how a function is called, not where the function is written. It can be the global object, an object instance, or something explicitly set using call, apply, or bind.
In the global scope (outside any function), this refers to the global object. In a browser that is window. (A subtlety worth knowing: at the top level of a Node.js CommonJS module, this is actually module.exports, not the global object.) Strict mode does not change top-level this, but inside a plain function call this becomes undefined.
console.log(this === window); // true (in browser, global scope)
A regular function has its own this, which is determined by how it is called. An arrow function does not have its own this at all. It inherits this from the surrounding code where it was written. This is why arrow functions are often used for callbacks inside class methods.
const obj = {
name: "Alice",
regularFn: function () {
console.log(this.name); // "Alice" - this is obj
},
arrowFn: () => {
console.log(this.name); // not "Alice" - arrow inherits the outer scope's this (window here)
},
};
obj.regularFn();
obj.arrowFn();
All three let you manually set what this should be inside a function. The difference is when and how they call the function.
- call() - calls the function right away, arguments passed one by one
- apply() - calls the function right away, arguments passed as an array
- bind() - does not call the function immediately, returns a new function with this permanently set
function greet(greeting) {
console.log(greeting + ", " + this.name);
}
const user = { name: "Alice" };
greet.call(user, "Hello"); // Hello, Alice
greet.apply(user, ["Hi"]); // Hi, Alice
const boundGreet = greet.bind(user);
boundGreet("Hey"); // Hey, Alice
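One more bind() detail worth mentioning in interviews: besides fixing this, it can also preset leading arguments, often called partial application. A self-contained sketch reusing the same shape as the example above:

```javascript
function greet(greeting, punctuation) {
  return greeting + ", " + this.name + punctuation;
}

const user = { name: "Alice" };

// `this` AND the first argument are both fixed in advance
const cheer = greet.bind(user, "Hooray");
console.log(cheer("!")); // "Hooray, Alice!"
```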
Inside a class method, this refers to the instance of that class. So if you create const user = new User(), then inside any method of User, this points to user. However, if you pass a method as a callback and call it without the object context, this can get lost.
class Counter {
constructor() {
this.count = 0;
}
increment() {
this.count++;
console.log(this.count);
}
}
const c = new Counter();
c.increment(); // 1 - this is c
When you pass a method as a callback, you are passing the function itself, not the object it belongs to. So when the callback runs, it is called as a plain function, and this becomes undefined (in strict mode) or the global object. Fix this with .bind() or by using an arrow function.
class Timer {
constructor() {
this.seconds = 0;
}
start() {
// "this" is lost here because setTimeout calls the function standalone
setTimeout(function () {
this.seconds++; // TypeError - this is undefined here (class bodies are strict mode)
}, 1000);
// Fix 1: arrow function inherits this
setTimeout(() => {
this.seconds++;
}, 1000);
// Fix 2: bind
setTimeout(function () {
this.seconds++;
}.bind(this), 1000);
}
}
In strict mode, when a regular function is called without any object context (just as a plain function call), this is undefined instead of the global object. This catches a lot of accidental bugs where you thought you were working with an object but you were actually modifying the global scope.
"use strict";
function show() {
console.log(this); // undefined, not window
}
show();
JSON stands for JavaScript Object Notation. It is a text format used to store and send data. Even though it looks like a JavaScript object, it is just a string with a specific structure. It is used everywhere for sending data between a server and a browser.
{
"name": "Alice",
"age": 25,
"isStudent": false
}
JSON.stringify() converts a JavaScript value (like an object or array) into a JSON string. This is used when you want to send data to a server or save it to localStorage.
const user = { name: "Alice", age: 25 };
const jsonString = JSON.stringify(user);
console.log(jsonString); // '{"name":"Alice","age":25}'
JSON.parse() does the opposite of JSON.stringify(). It takes a JSON string and converts it back into a JavaScript object. If the string is not valid JSON, it throws a SyntaxError.
const jsonString = '{"name":"Alice","age":25}';
const user = JSON.parse(jsonString);
console.log(user.name); // Alice
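Because of that SyntaxError, untrusted input is usually parsed behind a try/catch. A small sketch (the helper name tryParse is made up):

```javascript
function tryParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch (err) {
    return fallback; // invalid JSON - return the fallback instead of throwing
  }
}

console.log(tryParse('{"ok":true}')); // { ok: true }
console.log(tryParse("{not json}"));  // null
```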
JSON can only represent a limited set of data types. Things JSON does not support:
- Functions are dropped silently
- undefined values are dropped
- Symbol values are dropped
- Circular references cause an error
- Date objects are converted to strings, not back to Date when parsed
- Map, Set, RegExp do not serialize properly
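A quick round-trip shows several of these losses at once (the object shape is illustrative):

```javascript
const data = {
  when: new Date("2024-01-15T00:00:00Z"),
  tags: new Set(["a", "b"]),
  greet() { return "hi"; },
};

const round = JSON.parse(JSON.stringify(data));

console.log(typeof round.when); // "string" - the Date became an ISO string
console.log(round.tags);        // {} - the Set serialized as an empty object
console.log(round.greet);       // undefined - the function was dropped
```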
If a property value is a function, undefined, or a Symbol, it is silently removed from the result. If the top-level value itself is a function or undefined, stringify returns undefined (not a string).
const obj = {
name: "Alice",
greet: function () { return "hello"; },
score: undefined,
};
console.log(JSON.stringify(obj));
// '{"name":"Alice"}' - greet and score are gone
A circular reference is when an object refers back to itself. JSON.stringify() throws a TypeError if it encounters one. You can handle it by using the replacer function in stringify, or by using a library like flatted or json-stringify-safe for cases where you actually need to serialize circular structures.
const obj = { name: "Alice" };
obj.self = obj; // circular reference
JSON.stringify(obj); // TypeError: Converting circular structure to JSON
// Simple workaround using a WeakSet to track seen objects
function safeStringify(obj) {
const seen = new WeakSet();
return JSON.stringify(obj, (key, value) => {
if (typeof value === "object" && value !== null) {
if (seen.has(value)) return "[Circular]";
seen.add(value);
}
return value;
});
}
console.log(safeStringify(obj)); // {"name":"Alice","self":"[Circular]"}
The reviver is an optional second argument to JSON.parse(). It is a function that runs for each key-value pair in the parsed result and lets you transform the value before it is returned. A common use is converting date strings back into actual Date objects.
const json = '{"name":"Alice","joinedAt":"2024-01-15T00:00:00.000Z"}';
const user = JSON.parse(json, (key, value) => {
if (key === "joinedAt") return new Date(value);
return value;
});
console.log(user.joinedAt instanceof Date); // true
A regular expression (regex) is a pattern used to match, search, or replace text. You describe a pattern and JavaScript checks if a string fits that pattern. They are useful for things like validating email addresses, extracting numbers from text, or replacing all occurrences of a word.
There are two ways to create a regex. The literal syntax uses forward slashes and is the most common. The constructor syntax using new RegExp() is useful when you need to build the pattern dynamically from a variable.
// Literal syntax
const pattern = /hello/;
// Constructor syntax (useful for dynamic patterns)
const word = "hello";
const dynamicPattern = new RegExp(word);
Flags are added after the closing slash and change how the regex behaves.
- i - case insensitive, so /hello/i matches "Hello", "HELLO", etc.
- g - global, find all matches (not just the first one)
- m - multiline, makes ^ and $ match the start/end of each line
- s - allows . to match newline characters too
const str = "Hello World hello";
console.log(str.match(/hello/gi)); // ["Hello", "hello"]
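The m and s flags can be demonstrated the same way:

```javascript
// m: ^ and $ anchor to each line, not just the whole string
console.log("one\ntwo".match(/^\w+$/gm)); // ["one", "two"]

// s ("dotAll"): lets . match newline characters as well
console.log(/a.b/.test("a\nb"));  // false - "." normally skips \n
console.log(/a.b/s.test("a\nb")); // true
```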
test() is a method on the regex object. It takes a string and returns true or false. Use it when you just want to know if the pattern matches. match() is a method on the string. It returns the actual matched text (or null if nothing matched). Use it when you want the matched values.
const pattern = /\d+/; // matches one or more digits
pattern.test("abc123"); // true
"abc123".match(pattern); // ["123"] - returns the match
Capture groups let you extract specific parts of a match by wrapping a part of the pattern in parentheses. Each group has a number starting from 1. Named capture groups (using ?<name>) make the code easier to read.
// Numbered groups
const match = "2024-06-15".match(/(\d{4})-(\d{2})-(\d{2})/);
console.log(match[1]); // 2024 (year)
console.log(match[2]); // 06 (month)
console.log(match[3]); // 15 (day)
// Named groups (cleaner)
const named = "2024-06-15".match(/(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/);
console.log(named.groups.year); // 2024
Use the g flag with replace() or use replaceAll(). With replace() and a regex, you can also use $1, $2 etc. to reference capture groups in the replacement string.
const str = "foo bar foo";
console.log(str.replace(/foo/g, "baz")); // "baz bar baz"
// Using capture group in replacement
const date = "2024-06-15";
console.log(date.replace(/(\d{4})-(\d{2})-(\d{2})/, "$3/$2/$1")); // "15/06/2024"
A lookahead lets you match something only if it is followed (or not followed) by something else, without including that "something else" in the match. Positive lookahead uses (?=...) and negative lookahead uses (?!...).
// Positive lookahead: match digits only if followed by "px"
"12px 5em 8px".match(/\d+(?=px)/g); // ["12", "8"]
// Negative lookahead: match numbers NOT followed by "px"
// Note: a plain /\d+(?!px)/g would backtrack and also match the "1" in
// "12px", so the lookahead must rule out trailing digits before "px" too
"12px 5em 8px".match(/\d+(?!\d*px)/g); // ["5"]
An iterator is an object that lets you go through a sequence of values one at a time. It has a next() method that returns an object with two properties: value (the current item) and done (a boolean that is true when there are no more items). Arrays, strings, Maps, and Sets are all iterable in JavaScript.
const arr = [10, 20, 30];
const it = arr[Symbol.iterator]();
console.log(it.next()); // { value: 10, done: false }
console.log(it.next()); // { value: 20, done: false }
console.log(it.next()); // { value: 30, done: false }
console.log(it.next()); // { value: undefined, done: true }
A generator function is a special kind of function that can pause in the middle of running and be resumed later. You define one using function*. Calling a generator function does not execute it immediately. It returns a generator object instead. You then call .next() on that object to run the function up to the next yield statement, pause it there, and get back the yielded value.
function* greet() {
yield "Hello";
yield "World";
}
const gen = greet();
console.log(gen.next()); // { value: "Hello", done: false }
console.log(gen.next()); // { value: "World", done: false }
console.log(gen.next()); // { value: undefined, done: true }
yield pauses the generator and hands a value back to the caller. It's essentially a temporary return that can happen multiple times. The next time you call .next(), the function picks up from exactly where it stopped. You can also send a value back INTO the generator by passing it to .next(value), and that value becomes what the yield expression evaluates to inside the function.
function* add() {
const x = yield "Give me a number";
const y = yield "Give me another";
return x + y;
}
const gen = add();
gen.next(); // { value: "Give me a number", done: false }
gen.next(5); // { value: "Give me another", done: false } - 5 becomes x
gen.next(3); // { value: 8, done: true } - 3 becomes y
A regular function runs from start to finish in one shot and returns one value. A generator function can pause in the middle using yield, return multiple values over time, and resume from where it stopped. A regular function always starts fresh when you call it. A generator object keeps its state between .next() calls.
Each time you call .next(), the generator runs until it hits the next yield statement, then pauses. It returns an object with two keys: value (the yielded value) and done (a boolean: false while the generator still has more to yield, true once it has finished). If you pass a value into .next(value), that value becomes the result of the paused yield expression inside the function.
function* count() {
yield 1;
yield 2;
yield 3;
}
const gen = count();
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: undefined, done: true }
yield* delegates to another iterable or generator. Instead of yielding a single value, it yields all values from another generator (or array, string, etc.) one by one. It is useful for composing generators together.
function* inner() {
yield 2;
yield 3;
}
function* outer() {
yield 1;
yield* inner(); // delegate to inner
yield 4;
}
console.log([...outer()]); // [1, 2, 3, 4]
Lazy evaluation means you only compute a value when you actually need it. Generators are lazy by nature because they do not run until you call .next(). This is useful for infinite sequences or large datasets where you do not want to compute everything upfront.
// Infinite sequence - only generates values when asked
function* naturals() {
let n = 1;
while (true) {
yield n++;
}
}
const gen = naturals();
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2
console.log(gen.next().value); // 3
// This never crashes even though it's "infinite"
A Proxy wraps an object and lets you intercept and customize operations on that object, like reading a property, writing a value, or calling a function. You create it with new Proxy(target, handler) where the handler defines the functions (called traps) that intercept operations.
const obj = { name: "Alice" };
const proxy = new Proxy(obj, {
get(target, key) {
console.log("Getting:", key);
return target[key];
},
});
console.log(proxy.name); // logs "Getting: name" then "Alice"
Traps are the methods you define in the handler object. Each trap corresponds to a specific operation on the target object.
- get(target, key) - intercepts property reads
- set(target, key, value) - intercepts property writes
- has(target, key) - intercepts the in operator
- deleteProperty(target, key) - intercepts delete
- apply(target, thisArg, args) - intercepts function calls
- construct(target, args) - intercepts new
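A small sketch combining two of these traps (the object and its keys are illustrative):

```javascript
const hidden = new Proxy({ secret: 1, visible: 2 }, {
  has(target, key) {
    if (key === "secret") return false; // pretend the key does not exist
    return Reflect.has(target, key);
  },
  deleteProperty(target, key) {
    console.log("Deleting:", key);
    return Reflect.deleteProperty(target, key);
  },
});

console.log("secret" in hidden);  // false - hidden by the has trap
console.log("visible" in hidden); // true
delete hidden.visible;            // logs "Deleting: visible"
```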
The Reflect object is a built-in that provides static methods matching the same operations a Proxy can intercept. For example, Reflect.get(target, key) reads a property the same way bracket notation does, and Reflect.set(target, key, value) sets one. Inside a Proxy trap, you typically call the matching Reflect method to carry out the default behavior after running your custom logic.
const proxy = new Proxy({}, {
set(target, key, value) {
console.log("Setting", key, "=", value);
return Reflect.set(target, key, value); // do the default thing
}
});
proxy.name = "Alice"; // logs "Setting name = Alice"
You can use the set trap to check the value before it actually gets stored on the object. If the value fails validation, you throw an error. This keeps validation logic in one place.
const user = new Proxy({}, {
set(target, key, value) {
if (key === "age" && (typeof value !== "number" || value < 0)) {
throw new TypeError("age must be a positive number");
}
return Reflect.set(target, key, value);
}
});
user.age = 25; // works fine
user.age = -1; // throws TypeError
Object.defineProperty lets you define a getter/setter for one specific, named property. A Proxy intercepts ALL operations on the entire object, including properties that do not exist yet. This makes Proxy far more flexible for things like reactive data systems and validation. Vue 3 switched from Object.defineProperty (used in Vue 2) to Proxy for exactly this reason.
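The difference can be sketched side by side (the property names are illustrative):

```javascript
const log = [];

// defineProperty only observes the one property you name up front
const tracked = {};
Object.defineProperty(tracked, "a", {
  set(v) { log.push("a=" + v); },
});
tracked.a = 1; // observed
tracked.b = 2; // NOT observed - "b" was never defined

// A Proxy observes every property, including ones added later
const proxy = new Proxy({}, {
  set(target, key, value) {
    log.push(key + "=" + value);
    return Reflect.set(target, key, value);
  },
});
proxy.b = 2; // observed even though "b" never existed before

console.log(log); // ["a=1", "b=2"]
```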
Inside Proxy traps, using Reflect to perform the default operation is considered good practice because it correctly handles edge cases (like receiver context) that doing it manually might get wrong. Also, Reflect.ownKeys() is more complete than Object.keys() because it includes non-enumerable and Symbol keys in one call.
// Inside a get trap, prefer this:
get(target, key, receiver) {
return Reflect.get(target, key, receiver); // handles prototype correctly
}
// over this:
get(target, key) {
return target[key]; // misses the receiver context
}
A Web Worker is a way to run JavaScript in a background thread, separate from the main browser thread. JavaScript is normally single-threaded, which means heavy computations can freeze the UI. A Web Worker lets you offload that heavy work to a background thread so the page stays responsive.
When JavaScript runs a heavy task (like sorting a huge dataset, processing images, or complex calculations), it blocks everything else including UI updates and user interactions because it is single-threaded. Web Workers solve this by running those tasks in a separate thread. The page stays smooth while the worker quietly does its job in the background.
You create a worker by passing a script file URL to the Worker constructor. The script runs in its own environment. You communicate with it using messages.
// main.js
const worker = new Worker("worker.js");
worker.postMessage({ task: "compute", data: [1, 2, 3] });
worker.onmessage = function (event) {
console.log("Result from worker:", event.data);
};
// worker.js
self.onmessage = function (event) {
const result = event.data.data.reduce((sum, n) => sum + n, 0);
self.postMessage(result);
};
Workers and the main thread communicate through message passing. The main thread sends data to the worker using worker.postMessage(data) and listens for responses via worker.onmessage. The worker does the same using self.postMessage() and self.onmessage. The data is copied (not shared) between threads by default, using the structured clone algorithm.
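You can see the copy behavior directly with structuredClone(), which uses the same algorithm postMessage() does (available in modern browsers and Node 17+):

```javascript
const original = { list: [1, 2, 3] };
const copy = structuredClone(original); // same algorithm as postMessage

copy.list.push(4);
console.log(original.list.length); // 3 - the receiver gets a copy,
                                   // not a reference to the same array
```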
Web Workers run in an isolated environment with several restrictions:
- No access to the DOM, window, or document
- No access to localStorage or sessionStorage
- Communication with the main thread only happens through message passing
- The worker script must come from the same origin (or use CORS headers)
- Data sent in messages is copied, not shared by reference (unless you use Transferable Objects like ArrayBuffer)
A dedicated worker is tied to one specific script or tab. Only that tab can talk to it. A shared worker (new SharedWorker()) can be accessed by multiple tabs or windows from the same origin at the same time. Shared workers use a port object for communication instead of direct message passing.
Both run in a background thread, but they serve very different purposes. A Web Worker is for CPU-heavy tasks for a specific page while that page is open. A Service Worker acts like a programmable network proxy. It can intercept network requests, cache responses, and even work offline. Service Workers have a lifecycle independent of any page and are the foundation of Progressive Web Apps (PWAs).
XSS is a type of attack where someone injects malicious JavaScript into a web page that other users visit. A classic example: if a site shows user-submitted comments without sanitizing the input, an attacker posts a comment with a <script> tag inside. When another user loads the page, that script runs in their browser and can steal cookies, hijack sessions, or do other damage.
The main defenses against XSS are:
- Never put user input directly into HTML. Use textContent instead of innerHTML when displaying user-provided text
- Escape special HTML characters (<, >, ", &) before displaying them
- Use a Content Security Policy (CSP) header to restrict which scripts are allowed to run
- Use frontend libraries (like DOMPurify) to sanitize HTML if you truly need to render user-provided HTML
- Validate and sanitize all user input on the server side as well
// Vulnerable - never do this
element.innerHTML = userInput;
// Safe - use textContent
element.textContent = userInput;
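Escaping special characters can be sketched as a small helper (a simplified version; prefer textContent or a vetted sanitizer like DOMPurify in real code):

```javascript
function escapeHTML(str) {
  return str
    .replace(/&/g, "&amp;") // & first, or later entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

console.log(escapeHTML('<img src="x">'));
// &lt;img src=&quot;x&quot;&gt;
```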
CSRF (Cross-Site Request Forgery) is when a malicious site tricks a logged-in user into making a request to a site they are already authenticated with. For example, a hidden form on a bad site could silently submit to your bank's "transfer money" endpoint using your existing session cookie. The browser sends the cookie automatically. The key difference from XSS: XSS injects code that runs on the victim's page. CSRF makes the victim's browser fire a request to another site using their real credentials.
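Common CSRF defenses include anti-CSRF tokens embedded in forms and the SameSite cookie attribute, which tells the browser not to attach the cookie to cross-site requests. A sketch of such a header (the cookie name and value are illustrative):

```
Set-Cookie: session=abc123; SameSite=Lax; HttpOnly; Secure
```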
CSP is an HTTP response header that tells the browser which sources of scripts, styles, and other resources are trusted. It is one of the most effective defenses against XSS. For example, you can tell the browser to only run scripts from your own domain and block everything else, including any injected inline scripts.
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted-cdn.com
The same-origin policy is a browser security rule. It says that scripts from one origin (protocol + domain + port) cannot read data from a different origin. For example, a script on https://example.com cannot read the response of a fetch request made to https://other.com. The request might still be sent, but the browser blocks the response from being read. CORS (Cross-Origin Resource Sharing) is the mechanism that allows servers to explicitly say "I allow requests from this other origin."
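For example, a server can opt in to cross-origin reads with a response header like this (the origin shown is illustrative):

```
Access-Control-Allow-Origin: https://example.com
```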
Clickjacking is when a malicious page loads your site inside a transparent <iframe> and overlays it with a fake UI. The user thinks they are clicking a harmless button (like "Claim reward") but they are actually clicking something on your site underneath, like "Confirm payment" or "Delete account". You prevent this using the X-Frame-Options: DENY HTTP header or the frame-ancestors 'none' CSP directive, both of which tell the browser not to allow your page to be embedded in iframes.
X-Frame-Options: DENY
// or via CSP:
Content-Security-Policy: frame-ancestors 'none'