TypeScript Interview Questions

Master your next TypeScript interview with our comprehensive collection of questions and expert-crafted answers. Get prepared with real scenarios that top companies ask.


1. What is the significance of strict mode in TypeScript, and which compiler options do you consider most valuable?

strict matters because it turns TypeScript from "helpful hints" into real compile time safety. It catches bad assumptions early, especially around null, undefined, weak typing, and unsafe object access. In teams, it also creates consistent standards, so code reviews focus less on obvious bugs and more on design.

The options I value most are:

  • strictNullChecks forces you to handle missing values explicitly.
  • noImplicitAny prevents silent loss of type safety.
  • strictFunctionTypes catches unsafe callback and function assignments.
  • noUncheckedIndexedAccess makes array and map lookups safer.
  • exactOptionalPropertyTypes distinguishes "missing" from "present but undefined".
  • noImplicitOverride protects inheritance-heavy code from accidental mismatches.

If I had to pick only two, I would start with strictNullChecks and noImplicitAny, because they eliminate a huge class of real production bugs fast.
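
A minimal sketch of the kind of bug strictNullChecks catches, assuming strict mode is on (the `findName` helper here is hypothetical):

```typescript
// Under strictNullChecks, Array.prototype.find returns T | undefined,
// so the compiler forces an explicit check before property access.
const users = [{ id: "1", name: "Ada" }];

function findName(id: string): string {
  const user = users.find((u) => u.id === id);
  // Without this guard, `user.name` would be a compile error.
  if (user === undefined) {
    return "unknown";
  }
  return user.name;
}
```

Without the flag, the same code compiles silently and fails at runtime on a missing user.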

2. What are the main benefits of TypeScript over plain JavaScript in a large production codebase?

TypeScript mainly helps you scale code and teams without losing confidence.

  • Static types catch mistakes at compile time, so you find issues before they hit production.
  • Refactoring is much safer, because renames, signature changes, and moved code are validated across the codebase.
  • IDE support gets much better, autocomplete, go-to-definition, inline docs, and smarter navigation all improve developer speed.
  • Types act as living documentation, which makes onboarding easier and reduces tribal knowledge.
  • Shared contracts between frontend, backend, and APIs reduce integration bugs.
  • It encourages clearer architecture, because you model domains, inputs, and outputs explicitly.

In large systems, the biggest win is not just fewer bugs, it is maintainability. JavaScript works fine, but TypeScript gives teams guardrails that matter more as the app and headcount grow.

3. How would you explain TypeScript’s type system to a developer who has only worked with JavaScript?

I’d frame it as JavaScript with a compile-time safety net. Your code still runs as normal JavaScript, but TypeScript checks shapes, inputs, and outputs before runtime, so you catch mistakes earlier and get better editor help.

  • Types describe what values look like, like string, number, arrays, objects, and function signatures.
  • It’s mostly structural, not nominal, so if an object has the right properties, it matches the type.
  • Type inference means you often do not annotate much, TypeScript can figure types out from assignments and returns.
  • You can be gradual, start with a few annotations or tsconfig rules and tighten over time.
  • It is erased at build time, so types do not exist in the runtime JavaScript unless you add validation yourself.

I’d also mention unions, narrowing, generics, and any vs unknown as the next concepts to learn.
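
A tiny sketch of the two ideas above, inference and structural matching:

```typescript
// Inference: no annotation needed, `total` is number.
const prices = [1.5, 2.5];
const total = prices.reduce((sum, p) => sum + p, 0);

// Structural: any object with the right shape matches the type.
type Point = { x: number; y: number };
const p: Point = { x: 1, y: 2 };
```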

No strings attached, free trial, fully vetted.

Try your first call for free with every mentor you're meeting. Cancel anytime, no questions asked.

Nightfall illustration

4. How do you gradually increase type safety in a legacy codebase without blocking product delivery?

I’d treat it like risk reduction, not a big-bang rewrite. The goal is to raise safety where changes are already happening, while keeping teams shipping.

  • Turn on TypeScript in allowJs or permissive mode first, then tighten flags in stages like noImplicitAny, strictNullChecks, and finally strict.
  • Add boundaries first, APIs, database models, shared utilities, and external integrations give the biggest payoff.
  • Use unknown instead of any for unsafe inputs, then narrow with type guards or runtime validation tools like Zod.
  • Require touched files to get better types, the "boy scout rule" keeps progress steady without huge dedicated projects.
  • Track hotspots, bugs from nulls, bad payloads, or mismatched shapes should drive where you invest next.

In practice, I’d pair this with CI rules for new code, but avoid forcing old untouched areas to comply immediately.

5. What patterns do you use to model API responses, especially when the backend may return inconsistent or evolving shapes?

I usually model API responses in layers, not as one big trusted type. The transport shape is loose, then I validate and map it into a stricter domain type the app actually uses.

  • Start with unknown at the boundary, not any, then parse with Zod, io-ts, or custom type guards.
  • Use discriminated unions for success, error, partial, or loading states, like { status: "ok" | "error" }.
  • Keep backend DTOs separate from frontend domain models, then map fields and fill defaults in one place.
  • Mark unstable fields as optional, but avoid optional-everything, because it hides problems.
  • For evolving APIs, version response types or support unions like OldShape | NewShape during migration.
  • Prefer runtime validation plus telemetry, so bad payloads fail loudly and are observable.

In practice, I’ve used an adapter layer in the data client, which let the UI stay stable even while the backend changed field names and nesting.
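
A minimal sketch of that layering, using a hand-rolled check instead of Zod and hypothetical DTO field names:

```typescript
// Transport shape is loose; the domain type is strict.
type UserDto = { id?: unknown; display_name?: unknown };
type User = { id: string; name: string };

// Discriminated union for the outcome.
type ApiResult =
  | { status: "ok"; user: User }
  | { status: "error"; message: string };

// Adapter: validate the unknown payload, then map DTO -> domain in one place.
function parseUser(raw: unknown): ApiResult {
  if (typeof raw === "object" && raw !== null) {
    const dto = raw as UserDto;
    if (typeof dto.id === "string" && typeof dto.display_name === "string") {
      return { status: "ok", user: { id: dto.id, name: dto.display_name } };
    }
  }
  return { status: "error", message: "malformed payload" };
}
```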

6. What is the difference between type and interface, and when do you prefer one over the other?

Both describe shapes in TypeScript, but they shine in slightly different places.

  • interface is best for object contracts, especially public APIs and class implementations.
  • Interfaces can be merged, so multiple declarations with the same name combine. That is useful for extending library types.
  • type is more flexible, it can represent unions, intersections, primitives, tuples, mapped types, and conditional types.
  • type cannot be declaration-merged the way interfaces can.
  • Both can usually model object shapes, and both support extension patterns.

My rule of thumb: use interface for straightforward object shapes that may be extended or implemented, and use type when I need composition or advanced type features like A | B, keyof, or tuple transformations. In many codebases, consistency matters more than the technical difference for simple object shapes.
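
A short sketch of each tool in its sweet spot (names are illustrative):

```typescript
// interface: an object contract that classes can implement and others extend.
interface Entity {
  id: string;
}
interface Auditable extends Entity {
  updatedAt: number;
}

// type: unions and mapped types that an interface cannot express.
type Status = "draft" | "sent";
type ReadonlyAuditable = { readonly [K in keyof Auditable]: Auditable[K] };

class Invoice implements Auditable {
  status: Status = "draft";
  constructor(public id: string, public updatedAt: number) {}
}

const snapshot: ReadonlyAuditable = { id: "a", updatedAt: 1 };
```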

7. How does structural typing work in TypeScript, and how can it affect assignability between objects?

TypeScript uses structural typing, which means compatibility is based on an object's shape, not its explicit type name. If two types have the same required properties with compatible property types, one can be assigned to the other, even if they were declared separately.

  • type A = { name: string } and type B = { name: string; age: number }, B is assignable to A
  • That works because B has at least the structure A requires
  • The reverse is not assignable, because A is missing age
  • Extra properties usually matter most with fresh object literals, due to excess property checks
  • Example, assigning a widened variable like const b = { name: "x", age: 1 } to a { name: string } target is fine, but passing the fresh literal { name: "x", age: 1, extra: true } directly is flagged by excess property checks

This makes TS flexible, but it can also allow accidental compatibility when two unrelated types happen to share the same shape.
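
A small sketch of both sides of that behavior:

```typescript
type Named = { name: string };

function greet(n: Named): string {
  return `hi ${n.name}`;
}

// Widening through a variable: extra properties are allowed structurally.
const person = { name: "Ada", age: 41 };
const greeting = greet(person);

// Passing the fresh literal directly would trigger excess property checks:
// greet({ name: "Ada", age: 41 }); // error: 'age' does not exist in Named
```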

8. What are union types and intersection types, and can you describe a practical use case for each?

Union types let a value be one of several types, like string | number. They’re useful when data can legitimately come in different shapes, and you narrow it before using it. A practical example is an API response state: type Result = { data: User } | { error: string }. Then you check which property exists and handle success or failure safely.

Intersection types combine multiple types into one, like A & B, meaning the value must satisfy both. They’re great for composing behaviors or shared fields. A common use case is extending domain data with metadata, for example User & { lastLogin: Date } or combining BaseEntity & Auditable so an object has both core fields and audit fields without duplicating type definitions.
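
Both use cases in a minimal sketch (type names are illustrative):

```typescript
// Union: the value is one of several shapes, narrowed before use.
type Result = { data: { name: string } } | { error: string };

function render(r: Result): string {
  return "error" in r ? `failed: ${r.error}` : `hello ${r.data.name}`;
}

// Intersection: the value must satisfy every combined shape.
type BaseEntity = { id: string };
type Auditable = { lastLogin: Date };
type AuditedUser = BaseEntity & Auditable & { name: string };

const u: AuditedUser = { id: "1", name: "Ada", lastLogin: new Date(0) };
```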



9. How do generics improve reusability and type safety, and can you give an example of a generic abstraction you have built?

Generics let you write one abstraction that works across many types while preserving the relationship between inputs and outputs. That improves reusability because you avoid copy-pasting similar logic for User, Order, Product, and so on. It improves type safety because TypeScript can enforce, for example, that if you pass in User[], you get back User[], not any[], and that required properties exist when you constrain a type like T extends { id: string }.

One abstraction I built was a generic data table hook, useTable<T>. It handled sorting, filtering, pagination, and row selection for any entity type. I constrained T so consumers could optionally provide strongly typed column definitions like keyof T, and row IDs via getRowId: (row: T) => string. That gave teams one reusable hook across multiple screens, while keeping column access and callbacks fully type-safe.
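
A much smaller sketch of the same idea, a constrained generic where the input type flows through to the output (not the full hook):

```typescript
// One helper for many entity types: T is preserved end to end.
function sortBy<T, K extends keyof T>(rows: T[], key: K): T[] {
  return [...rows].sort((a, b) =>
    String(a[key]).localeCompare(String(b[key]))
  );
}

type User = { id: string; age: number };
const users: User[] = [
  { id: "b", age: 30 },
  { id: "a", age: 20 },
];

// Result is User[], not any[]; sortBy(users, "nope") would be a compile error.
const byId = sortBy(users, "id");
const firstId = byId[0]?.id;
```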

10. What is the purpose of keyof, typeof, and indexed access types, and how have you used them in real projects?

They’re core tools for turning runtime-shaped data into safe TypeScript APIs.

  • keyof gives you the union of property names of a type, like keyof User becoming "id" | "email".
  • typeof captures the type of a value, useful when you want types derived from real constants or config objects.
  • Indexed access types, like User["email"] or User[keyof User], let you pull out property value types from existing types.

In real projects, I’ve used them a lot for form builders, API clients, and config-driven UI. Example: a column config object for a data table. I’d use typeof columns to derive the config type, keyof Row to restrict column keys to valid fields, and Row[K] in generics so each renderer got the correct value type. That cuts duplication and catches mismatches at compile time.
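
A compressed sketch of that column-config pattern, with hypothetical names:

```typescript
const columns = {
  id: { label: "ID" },
  email: { label: "Email" },
} as const;

// typeof captures the type of a real value; keyof gives its key union.
type ColumnKey = keyof typeof columns; // "id" | "email"

function getLabel(key: ColumnKey): string {
  return columns[key].label;
}

// Indexed access: Row[K] is the value type for that specific key.
function getCell<Row, K extends keyof Row>(row: Row, key: K): Row[K] {
  return row[key];
}

type User = { id: number; email: string };
const email = getCell<User, "email">({ id: 1, email: "a@b.c" }, "email");
```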

11. What are conditional types, and can you explain a scenario where they helped model complex logic?

Conditional types let a type branch based on another type, kind of like a type-level if. The basic shape is T extends U ? X : Y. They’re great when a function or API changes shape depending on input, and you want TypeScript to enforce that relationship automatically.

A common example is an API client where response data depends on options. If includeMeta is true, return Data & Meta; otherwise just Data. I’d model that with something like Response<T extends boolean> = T extends true ? DataWithMeta : Data. I used this pattern for a query builder where selected fields determined the result shape. Instead of returning broad any-like objects, conditional types mapped the selected config to an exact payload type, which improved autocomplete and caught mismatches at compile time.
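
A minimal, type-level sketch of that includeMeta branching:

```typescript
type Data = { items: string[] };
type DataWithMeta = Data & { total: number };

// The response shape branches on a boolean type parameter.
type ApiResponse<IncludeMeta extends boolean> = IncludeMeta extends true
  ? DataWithMeta
  : Data;

// Resolved at the use site: these are checked assignments, not casts.
const withMeta: ApiResponse<true> = { items: ["a"], total: 1 };
const plain: ApiResponse<false> = { items: ["a"] };
// const bad: ApiResponse<false> = { items: ["a"], total: 1 }; // excess property error
```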

12. How do infer and distributive conditional types work, and what kinds of problems can they solve?

Conditional types let you say, “if T matches this shape, return one type, otherwise return another.” infer is the extraction tool inside that match. Example: type Elem<T> = T extends (infer U)[] ? U : T pulls out an array’s item type. You’ll also see it in function helpers like extracting return types, parameter tuples, or the resolved type of a Promise.

Distributive conditional types happen when the checked type is a naked type parameter, like T extends X ? A : B. If T is A | B, TypeScript applies the condition to each member separately and unions the results. That’s useful for filtering unions, transforming API response variants, unwrapping nested types, building utility types like Exclude, Extract, NonNullable, or creating pattern-matching style type logic for safer libraries.
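
Both mechanics in a short sketch:

```typescript
// infer pulls a type out of a matched position.
type Elem<T> = T extends (infer U)[] ? U : never;
type N = Elem<number[]>; // number

// Distribution: a naked type parameter is checked per union member,
// which is exactly how Exclude-style filtering works.
type MyExclude<T, U> = T extends U ? never : T;
type Kept = MyExclude<"a" | "b" | "c", "b">; // "a" | "c"

const kept: Kept = "a"; // "c" is also allowed, "b" is rejected
const n: N = 42;        // Elem resolved to number
```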

13. What is type narrowing, and what techniques does TypeScript use to narrow types at runtime branches?

Type narrowing is TypeScript refining a variable from a broad type like string | number to a more specific type based on control flow. It happens when your runtime checks give the compiler enough evidence that only part of the union is possible in that branch.

  • typeof checks, like typeof x === "string", narrow primitives.
  • instanceof narrows class or constructor-based types.
  • Equality checks, like x === null or comparing literal values, narrow unions.
  • Truthiness checks narrow away null, undefined, false, 0, or "", though this can be risky.
  • The in operator narrows by checking whether a property exists on an object.
  • Discriminated unions narrow via a shared literal field, like kind.
  • User-defined type guards, functions returning value is T, let you encode custom narrowing logic.
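
A few of those techniques in one sketch, including a discriminated union:

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  // Discriminant narrowing: each branch knows its exact member.
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
  }
}

function describe(x: string | number | null): string {
  if (x === null) return "nothing";                  // equality narrowing
  if (typeof x === "string") return x.toUpperCase(); // typeof narrowing
  return x.toFixed(1);                               // only number remains
}
```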

14. How do custom type guards work, and when have you written one to make unsafe data safer to use?

Custom type guards are functions that return a boolean, but with a special return type like value is User. That tells TypeScript, "if this returns true, narrow the type to User." They’re useful when you get unknown or messy API data and want runtime checks plus compile time safety.

  • Example: function isUser(x: unknown): x is User checks typeof x === 'object', non-null, and required fields like id and name.
  • After if (isUser(data)), TypeScript lets you access data.id safely.
  • I’ve used this around API responses and JSON.parse, where the input is basically untrusted.
  • In one project, I wrapped third party webhook payloads with guards before mapping them into domain models, which cut a lot of defensive null checks and prevented bad payloads from flowing deeper into the app.
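
A sketch of that isUser guard around untrusted JSON:

```typescript
type User = { id: string; name: string };

// The `x is User` return type tells the compiler what `true` means.
function isUser(x: unknown): x is User {
  return (
    typeof x === "object" &&
    x !== null &&
    typeof (x as Record<string, unknown>).id === "string" &&
    typeof (x as Record<string, unknown>).name === "string"
  );
}

const data: unknown = JSON.parse('{"id":"1","name":"Ada"}');
// Inside the guard, `data` is narrowed to User, so .name is safe.
const name = isUser(data) ? data.name : "unknown";
```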

15. What is the difference between unknown, any, and never, and how do you decide which is appropriate?

They represent very different levels of type safety.

  • any opts out of checking, you can do anything with it, and TypeScript will not help much. I use it only as a last resort, usually at messy boundaries or during migration.
  • unknown means "I do not know yet." You can assign anything to it, but you must narrow it before using it. This is the safe choice for API responses, catch errors, or untrusted input.
  • never means a value should never exist. It shows up in functions that always throw, infinite loops, or exhaustive switch checks.

My rule of thumb is: prefer unknown over any when data is uncertain, use never to model impossible states, and avoid any unless you intentionally want to bypass the type system.
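
A sketch of never for exhaustiveness and unknown for errors:

```typescript
type Status = "open" | "closed";

// never powers exhaustiveness: if a new Status member is added but not
// handled, the assignment to `never` below becomes a compile error.
function label(s: Status): string {
  switch (s) {
    case "open":
      return "Open";
    case "closed":
      return "Closed";
    default: {
      const unreachable: never = s;
      throw new Error(`unhandled status: ${unreachable}`);
    }
  }
}

// unknown forces narrowing before use, unlike any.
function describeError(e: unknown): string {
  return e instanceof Error ? e.message : String(e);
}
```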

16. How do mapped types work, and when would you reach for utilities like Partial, Required, Pick, Omit, or Record?

Mapped types let you transform an existing object type by iterating over its keys. The pattern is basically [K in keyof T]: ..., so you can say “for every property in T, make a new version with different rules,” like optional, readonly, or changed value types. They’re great when you want type changes to stay in sync with the source shape instead of duplicating interfaces.

  • Partial<T> makes every property optional, useful for patch/update inputs.
  • Required<T> does the opposite, handy when defaults have been applied.
  • Pick<T, K> selects a safe subset of fields, like DTOs or view models.
  • Omit<T, K> removes fields, often for hiding internal or sensitive data.
  • Record<K, V> builds an object type from a key union to one value type, like Record<'en' | 'fr', string>.

I reach for them anytime I want reuse, less duplication, and stricter refactors.
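
A sketch showing the raw pattern next to the built-in utilities:

```typescript
type User = { id: string; email: string; bio: string };

// A hand-rolled Partial, to show the [K in keyof T] pattern itself.
type MyPartial<T> = { [K in keyof T]?: T[K] };

// Patch input: every field optional, always in sync with User.
function applyPatch(user: User, patch: MyPartial<User>): User {
  return { ...user, ...patch };
}

// Omit and Record keep derived shapes tied to the source.
type PublicUser = Omit<User, "email">;
type Labels = Record<"en" | "fr", string>;

const labels: Labels = { en: "Save", fr: "Enregistrer" };
const publicUser: PublicUser = { id: "1", bio: "hi" };
const updated = applyPatch({ id: "1", email: "a@b.c", bio: "" }, { bio: "hi" });
```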

17. How do source maps and declaration files fit into a TypeScript build and debugging process?

They serve two different parts of the workflow, build output and developer experience.

  • Source maps, enabled with sourceMap, link emitted JavaScript back to .ts files, so breakpoints, stack traces, and stepping in DevTools or VS Code show TypeScript lines instead of compiled JS.
  • They are mainly for debugging, especially when bundling, transpiling, or running code in Node with inspector support.
  • Declaration files, .d.ts, are generated with declaration, and describe the public types of your code without shipping implementation.
  • They matter when publishing libraries, because consumers get autocomplete, type checking, and API contracts even if they only import the compiled JavaScript.
  • In a typical build, TypeScript emits .js for runtime, .map for debugging, and .d.ts for downstream type safety, depending on compiler options.

18. What challenges have you faced integrating TypeScript into an existing JavaScript codebase, and how did you approach migration?

The biggest challenges are usually type coverage, third party libraries, and team adoption. In a mature JavaScript codebase, you rarely get to "flip to TypeScript" all at once, so I treat it as a risk-managed migration.

  • I start with allowJs, checkJs, and noEmit, so we get type feedback without breaking builds.
  • I migrate by boundary first, APIs, shared utilities, and data models, because that gives the most leverage.
  • For messy areas, I use unknown instead of any, add runtime validation, and tighten types over time.
  • Missing library types are common, so I add @types packages or write small declaration files as a bridge.
  • Team friction matters too, so I add ESLint rules, CI checks, and a few patterns for props, async code, and error handling.

In one migration, we converted shared domain models first and caught several inconsistent API assumptions before they hit production.

19. In React specifically, how do you type component props, children, refs, hooks, and event handlers without overcomplicating things?

I keep React typing pragmatic: type the public API, let inference do the rest, and only get explicit when it improves safety or readability.

  • Props: use a type or interface like type ButtonProps = { variant: 'primary' | 'ghost'; onClick?: () => void }, then function Button(props: ButtonProps).
  • Children: only add children?: React.ReactNode if the component actually accepts children. Don’t default to React.FC.
  • Refs: for forwarded refs, use forwardRef<HTMLButtonElement, ButtonProps>(...). For local refs, const ref = useRef<HTMLInputElement | null>(null).
  • Hooks: let useState infer simple values, but specify unions like useState<string | null>(null). Type reducers more explicitly.
  • Events: use React event types, React.ChangeEvent<HTMLInputElement>, React.MouseEvent<HTMLButtonElement>.
  • Derived props: prefer built-ins like React.ComponentProps<'button'> or extend ButtonHTMLAttributes<HTMLButtonElement> for wrapper components.

20. How do you type asynchronous code involving Promise, async and await, and error handling in a way that remains safe and readable?

I keep async typing simple and explicit. The main rule is, type the resolved value, and let async produce the Promise for you.

  • Prefer async function getUser(): Promise<User> over returning any or implicit shapes.
  • Type awaited results at the source, for example const user = await fetchUser() where fetchUser returns Promise<User>.
  • For reusable APIs, use generics like Promise<T> or async function request<T>(): Promise<T>.
  • In catch, treat errors as unknown, then narrow with instanceof Error or custom guards.
  • For safer flows, return a result type like Promise<Result<User, ApiError>> when failures are expected.

Example approach: fetchJson<T> returns Promise<T>, callers get strong inference, and truly exceptional cases throw. That keeps business logic readable, while expected failures are modeled in the type system instead of hidden in exceptions.
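
A self-contained sketch of that approach; `fetchJson` is a hypothetical helper, and the body source is injected so nothing hits the network:

```typescript
type User = { id: number; name: string };

// Callers pin T, and the Promise type flows through everywhere.
async function fetchJson<T>(getBody: () => Promise<string>): Promise<T> {
  const body = await getBody();
  return JSON.parse(body) as T; // in real code, validate before casting
}

async function loadUserName(): Promise<string> {
  try {
    const user = await fetchJson<User>(async () => '{"id":1,"name":"Ada"}');
    return user.name; // `user` is User, not any
  } catch (e: unknown) {
    // catch variables are unknown; narrow before use.
    return e instanceof Error ? e.message : "unknown error";
  }
}
```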

21. Tell me about a time when TypeScript made development harder or slower. How did you balance safety with productivity?

I’d answer this with a quick STAR-style structure (situation, friction, tradeoff, result), showing that I used TypeScript pragmatically instead of dogmatically.

At a previous team, we integrated a third party payments API with inconsistent response shapes. TypeScript slowed us down at first because the generated types were incomplete, and we kept hitting edge cases the compiler could not model cleanly. Instead of forcing perfect typing everywhere, I drew a boundary. At the API layer, we used lightweight runtime validation and a few narrow unknown to typed conversions. Inside the app, we kept strict types so the rest of the codebase stayed safe. For urgent paths, I allowed temporary @ts-expect-error with a ticket linked for cleanup. That balance let us ship on time, while keeping type debt contained and visible.

22. How does TypeScript handle null and undefined, and why is strictNullChecks important?

TypeScript treats null and undefined as distinct types. Without strictNullChecks, they’re basically allowed everywhere, so a string can silently be null or undefined, which defeats a lot of the safety TypeScript is supposed to give you.

With strictNullChecks: true:

  • null and undefined must be explicitly included, like string | null.
  • The compiler forces you to handle missing values before using them.
  • APIs like Array.find() return T | undefined, which is more honest.
  • Narrowing works naturally, for example if (user) { ... } or if (value !== undefined).

It matters because most runtime bugs in JS come from accessing something that isn’t there, like user.name when user is undefined. strictNullChecks catches that at compile time instead of letting it blow up in production.

23. What are optional properties, optional chaining, and nullish coalescing, and how do they affect API design?

They solve slightly different problems around missing data, which is huge in API design.

  • Optional properties use ?, like name?: string, meaning the field may be absent.
  • Optional chaining uses obj?.profile?.email, so access stops safely if something is null or undefined.
  • Nullish coalescing uses value ?? fallback, which only falls back for null or undefined, not 0, false, or "".

For APIs, optional properties communicate what clients can omit or what responses may not include. That makes contracts clearer, but too many optionals can make models vague and force defensive code everywhere. Optional chaining helps consumers read nested response data safely, and ?? is better than || when defaults should preserve valid falsy values. In TypeScript, I usually pair this with strictNullChecks so the API surface reflects real absence explicitly.
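
All three in one small sketch:

```typescript
type Profile = { email?: string };          // optional property
type User = { profile?: Profile; retries?: number };

function summarize(user: User): { email: string; retries: number } {
  return {
    // Optional chaining stops safely at the first missing link.
    email: user.profile?.email ?? "no email",
    // ?? preserves valid falsy values: retries: 0 stays 0, unlike ||.
    retries: user.retries ?? 3,
  };
}
```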

24. How do noImplicitAny, noUncheckedIndexedAccess, exactOptionalPropertyTypes, and strictFunctionTypes influence code quality?

They tighten up places where TypeScript is usually a bit permissive, so you catch bugs earlier instead of letting them slip to runtime.

  • noImplicitAny forces you to be explicit when TS cannot infer a type, which prevents accidental weak typing from spreading.
  • noUncheckedIndexedAccess makes indexed access like obj[key] or arr[i] return T | undefined, so you handle missing keys and out of bounds cases.
  • exactOptionalPropertyTypes treats optional props as truly "may be absent", not the same as prop: T | undefined, which makes API contracts more accurate.
  • strictFunctionTypes checks function parameter variance more safely, especially for callbacks, so you do not pass handlers that accept narrower inputs than callers might provide.

Net effect, fewer hidden assumptions, stronger contracts, and better refactoring safety.
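
A tiny sketch of the noUncheckedIndexedAccess effect, assuming the flag is on:

```typescript
const scores: Record<string, number> = { ada: 10 };

// With noUncheckedIndexedAccess, scores[name] is number | undefined,
// so the fallback below is required rather than optional.
function getScore(name: string): number {
  return scores[name] ?? 0;
}
```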

25. What are literal types and template literal types, and how can they be used to model constrained strings?

Literal types let you say a value must be an exact string, number, or boolean, like 'GET', 200, or true. They’re great for fixed options, usually combined as unions, for example 'GET' | 'POST' | 'DELETE'.

Template literal types build new string types from other types, so you can model string patterns, not just fixed values.

  • Example: type Method = 'GET' | 'POST'
  • Then type RouteKey = `${Method}:/users` gives 'GET:/users' | 'POST:/users'
  • You can compose pieces, like type Lang = 'en' | 'fr' and type Key = `${Lang}_home_title`
  • This is useful for API routes, event names, CSS class conventions, i18n keys, and IDs
  • It gives autocomplete and catches invalid strings at compile time, like rejecting 'PUT:/users'

So they’re a clean way to model constrained string formats without runtime-only validation.
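
A runnable variant of the route-key bullet above:

```typescript
type Method = "GET" | "POST";
type RouteKey = `${Method}:${string}`; // e.g. "GET:/users"

// Only strings matching the pattern are accepted;
// route("PUT:/users") would be rejected at compile time.
function route(key: RouteKey): string {
  return key.split(":")[1] ?? "";
}

const path = route("GET:/users");
```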

26. How do function overloads work in TypeScript, and when are they better than union-typed parameters?

Function overloads let you describe multiple valid call signatures for one implementation. You write several overload signatures, then one implementation signature that is broad enough to handle them all. Callers see the specific signatures, not the implementation one. That gives better autocomplete and return type narrowing based on the arguments.

  • Use overloads when different argument shapes produce different return types, like string -> number and number -> string.
  • Use unions when the function logic and return type are basically the same, like string | string[] returning boolean.
  • Overloads are better for distinct call patterns, especially different arity, like fn(id) vs fn(first, last).
  • Unions are simpler and easier to maintain when behavior does not meaningfully change.
  • If you find yourself writing many overloads, consider generics or a discriminated union instead.
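
A minimal overload sketch where each argument type has its own return type:

```typescript
// Overload signatures: callers see these, not the implementation signature.
function convert(x: string): number;
function convert(x: number): string;
function convert(x: string | number): string | number {
  return typeof x === "string" ? x.length : String(x);
}

const n = convert("hello"); // typed as number
const s = convert(42);      // typed as string
```

With a plain union parameter, both calls would return string | number and every caller would have to narrow.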

27. What is the difference between extends in a generic constraint and extends in interface or class inheritance?

extends means two related but different things depending on context.

  • In a generic constraint, T extends Foo means "T must be assignable to Foo". It restricts what types callers can pass in.
  • In inheritance, interface A extends B or class A extends B means "A is a subtype of B". It creates a new type or class based on another.
  • Generic extends is checked at the point of use, inheritance defines the type itself.
  • With generics, extends does not copy members, it just requires compatibility.
  • With classes, extends also affects runtime behavior, like prototype chain and inherited methods. With interfaces, it is type-level only.

Example: function f<T extends { id: string }>(x: T) accepts any type with id. But class User extends Person means User actually inherits from Person.
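
The same contrast as a runnable sketch:

```typescript
// Generic constraint: T must be assignable to { id: string }; nothing is copied.
function getId<T extends { id: string }>(x: T): string {
  return x.id;
}

// Inheritance: Admin is a subtype of User and inherits members at runtime.
class User {
  constructor(public id: string) {}
  describe(): string {
    return `user ${this.id}`;
  }
}
class Admin extends User {}

const admin = new Admin("7");
const id = getId(admin);
```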

28. What is declaration merging, and what are some benefits and risks of relying on it?

Declaration merging is a TypeScript feature where multiple declarations with the same name are combined into one type or entity. Common cases are interface merging, namespace merging, and augmenting existing module types from libraries.

  • Benefit, it makes extension ergonomic, especially for third party libs, like adding fields to Express.Request.
  • Benefit, it supports plugin style architecture, where separate files can contribute to one shared contract.
  • Benefit, it can improve maintainability when modeling layered APIs without rewriting original types.
  • Risk, it can hide where properties came from, which hurts readability and makes types feel "magical."
  • Risk, conflicting merges can create hard to debug errors, especially across large codebases.
  • Risk, overusing global or module augmentation can couple unrelated modules and cause accidental breaking changes.

I use it deliberately for library augmentation, but avoid it for core app domain types unless the extension point is very clear.
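
The simplest case, interface merging within one scope:

```typescript
// Two declarations with the same name merge into one interface.
interface Config {
  host: string;
}
interface Config {
  port: number;
}

// The merged contract requires both fields.
const config: Config = { host: "localhost", port: 8080 };
```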

29. How do enums compare to union literal types, and when would you avoid enums altogether?

I usually prefer union literal types in TypeScript unless I specifically need runtime behavior.

  • enum exists at runtime, union literals like 'admin' | 'user' are type-only.
  • Unions are lighter, easier to compose, and work really well with narrowing and autocomplete.
  • String unions avoid enum quirks like reverse mappings, emitted JS, and import overhead.
  • const enum removes runtime cost, but can be fragile with build tools, transpilation, and library boundaries.
  • Enums are still useful when you want a named runtime object, especially for interoperability or iteration.

I avoid enums when modeling fixed sets of strings in app code, API values, Redux actions, or component props. In those cases, unions plus a const object like as const usually give cleaner types and simpler output.
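
The const-object alternative mentioned above, as a sketch:

```typescript
// A const object plus `as const` replaces an enum with plain JS output.
const Role = {
  Admin: "admin",
  User: "user",
} as const;

// The union type is derived from the runtime object, so they stay in sync.
type Role = (typeof Role)[keyof typeof Role]; // "admin" | "user"

function canDelete(role: Role): boolean {
  return role === Role.Admin;
}
```

You keep enum-like ergonomics (Role.Admin) while accepting plain strings at call sites.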

30. How does TypeScript support classes, access modifiers, abstract classes, and readonly properties, and how much do you rely on these features?

TypeScript adds OO features on top of JavaScript classes and mostly uses them at compile time. public is the default, private and protected restrict access in TypeScript, and readonly lets you assign a property once, usually in the declaration or constructor. abstract classes let you define shared behavior plus required methods, and they cannot be instantiated.

How much I rely on them depends on the codebase:

  • I use classes when modeling stateful domain objects or framework patterns, like Angular services.
  • I use readonly a lot, it is great for safer DTOs and config objects.
  • I use private and protected moderately, mostly to clarify intent, not as hard runtime security.
  • I use abstract classes sparingly, usually when I need shared implementation plus a contract.
  • In many TS apps, I still prefer plain objects, functions, and interfaces for simplicity.
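
A compact sketch combining abstract, readonly, and access modifiers (names are illustrative):

```typescript
// Abstract class: shared implementation plus a required contract.
abstract class Repository<T> {
  // readonly + protected: assigned once, visible only to subclasses.
  constructor(protected readonly items: T[]) {}

  abstract label(): string;

  count(): number {
    return this.items.length;
  }
}

class UserRepo extends Repository<{ id: string }> {
  label(): string {
    return "users";
  }
}

const repo = new UserRepo([{ id: "1" }]);
// new Repository([]) would be a compile error: abstract classes can't be instantiated.
```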

31. What are decorators in TypeScript, what are they commonly used for, and what concerns do they introduce?

Decorators are functions that attach metadata or wrap behavior around classes and class members like methods, fields, accessors, or parameters. In TypeScript, you enable them with compiler options, and they are heavily used in frameworks such as Angular and NestJS.

  • Common uses: dependency injection, routing, validation, serialization, ORM entity mapping, logging, caching.
  • They help keep business logic clean by moving cross-cutting concerns into reusable annotations like @Injectable() or @Controller().
  • Concerns: they can hide control flow, make code harder to trace, and introduce "magic" that new developers struggle with.
  • They often rely on metadata reflection, which can affect runtime performance, bundling, and testability.
  • Another concern is standards alignment, older experimental decorators differ from newer ECMAScript decorators, so compatibility and migration can be tricky.

32. How do you configure tsconfig for different environments such as Node.js, browser applications, monorepos, or libraries?

I usually treat tsconfig as layered configs: a shared base, then environment-specific extensions. That keeps strictness consistent while changing only runtime assumptions.

  • Shared tsconfig.base.json: strict, noUncheckedIndexedAccess, exactOptionalPropertyTypes, moduleResolution, path aliases, and common excludes.
  • Node.js app: use lib: ["ES2022"], add types: ["node"], pick module and moduleResolution based on ESM or CJS, often NodeNext for modern Node.
  • Browser app: include lib: ["ES2022", "DOM", "DOM.Iterable"], usually no node types, and set JSX options if using React.
  • Library: enable declaration, declarationMap, emitDeclarationOnly if bundling separately, and avoid overly specific libs so consumers are not constrained.
  • Monorepo: use project references with composite: true, one root solution config, package-level configs extending the base.
  • Testing/build tools: separate tsconfig.test.json or tsconfig.build.json so Jest, Vitest, or scripts do not pollute app typing.
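
A minimal sketch of that layered layout, with hypothetical file names:

```jsonc
// tsconfig.base.json — shared strictness, no environment assumptions
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "skipLibCheck": true
  }
}
```

```jsonc
// packages/api/tsconfig.json — Node-specific layer extending the base
{
  "extends": "../../tsconfig.base.json",
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "lib": ["ES2022"],
    "types": ["node"]
  }
}
```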

33. What is the purpose of module, target, lib, moduleResolution, and paths in tsconfig, and how do they affect builds?

These options control how TypeScript understands your environment and how it emits JavaScript, so they directly affect compatibility, bundling, and import resolution.

  • target sets the JS syntax TS emits, like ES2017 or ES2022, affecting output features, bundle size, and whether helpers/polyfills are needed.
  • module controls how imports/exports are emitted, like commonjs, esnext, or nodenext, which must match your runtime or bundler.
  • lib adds type definitions for built-in APIs, like DOM, ES2021, or WebWorker, affecting type checking only, not emitted code.
  • moduleResolution tells TS how to resolve imports, like node, bundler, or nodenext, impacting path lookup and package export handling.
  • paths creates import aliases, like @app/*, mainly for type resolution. It does not rewrite runtime imports unless your bundler or runtime also supports the alias.

Bad combinations cause broken builds, missing types, or runtime import failures.
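For the `paths` point specifically, a minimal sketch (alias name and folder layout are illustrative):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@app/*": ["src/*"]
    }
  }
}
```

With this, `import { api } from "@app/api"` type-checks against `src/api`, but the bundler or runtime must be configured with the same alias, or the import fails at runtime even though tsc is happy.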

34. How do modules and namespaces differ, and why are namespaces less common in modern TypeScript projects?

Modules are file based and use import and export. Namespaces are a TypeScript-specific construct, like namespace MyApp { ... }, that groups code under a single object; in script files that object lands on the global scope, and multiple files were traditionally stitched together with older patterns like triple-slash references.

Why namespaces are less common now:

  • ES modules are the JavaScript standard, so TypeScript aligns with how browsers, Node, and bundlers already work.
  • Modules have explicit dependencies, which makes code easier to navigate, test, tree-shake, and refactor.
  • Namespaces were more useful before module loaders and bundlers were common.
  • Modern tooling, like Vite, Webpack, Rollup, and native ESM, is built around modules, not namespaces.
  • Namespaces can still make sense for simple global scripts or when modeling an external global library, but that is a niche case now.
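The contrast can be shown in a few lines; `MathUtils` is a made-up example, not a real library.

```typescript
// Legacy namespace style: everything grouped under one object in a script file.
namespace MathUtils {
  export function add(a: number, b: number): number {
    return a + b;
  }
}

console.log(MathUtils.add(2, 3)); // 5

// Module style (preferred today): the same function would live in its own file
// and be consumed explicitly, e.g. `import { add } from "./math-utils"`,
// giving tooling a real dependency graph to tree-shake and refactor against.
```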

35. What are project references in TypeScript, and when would you use them to improve build performance or team workflows?

Project references let you split a TypeScript codebase into smaller composite projects that know about each other. Instead of one huge compile, TypeScript can build only the projects that changed, in dependency order, via tsc --build. That improves incremental build speed, editor performance, and boundaries between packages.

You’d use them when:

  • You have a monorepo or large app with clear modules like shared, api, web.
  • Teams own separate areas and want explicit contracts between projects.
  • Full type checking is getting slow, especially in CI or local rebuilds.
  • You want cached outputs like .d.ts and .tsbuildinfo for faster rebuilds.
  • You need independent builds but still want safe cross-project typing.

The tradeoff is a bit more config and stricter structure, but it pays off once the repo gets large.

36. How do you validate untrusted external data at runtime when TypeScript types only exist at compile time?

TypeScript only protects you at compile time, so for untrusted input, like API responses, form data, or message queues, I treat everything as unknown first and validate at the boundary.

  • Use a runtime schema library like zod, io-ts, or valibot.
  • Parse external data with the schema, then narrow from unknown to a trusted type.
  • Prefer safeParse style APIs when you want structured errors instead of exceptions.
  • Keep schemas close to the boundary layer, not scattered through business logic.
  • If needed, infer TS types from the schema so runtime validation and static typing stay in sync.

For example, I’d define a UserSchema, validate the raw JSON, reject or log bad payloads, and only pass the parsed result deeper into the app. That avoids the common mistake of doing as User, which silences TypeScript without making the data safe.
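Even without a schema library, the "treat it as unknown, then narrow" idea can be shown with a hand-rolled type guard; the `User` shape here is illustrative.

```typescript
interface User {
  id: number;
  email: string;
}

// Boundary check: proves to the compiler (and at runtime) that the value matches.
function isUser(value: unknown): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).id === "number" &&
    typeof (value as Record<string, unknown>).email === "string"
  );
}

const raw: unknown = JSON.parse('{"id": 1, "email": "a@b.c"}');
if (isUser(raw)) {
  console.log(raw.email); // safely narrowed to User here
} else {
  console.error("rejected bad payload");
}
```

This is what `as User` skips: the assertion changes the type without ever checking the data.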

37. Have you used libraries like Zod, io-ts, or Yup with TypeScript, and how did they help bridge runtime validation and static typing?

Yes. I have used Zod most, some Yup, and a bit of io-ts. They solve a real TypeScript gap: types disappear at runtime, so you still need validation for API payloads, forms, env vars, and third party data.

  • With Zod, I define a schema once, validate with safeParse, then infer the TS type via z.infer<typeof Schema>.
  • That keeps runtime rules and static types aligned, which reduces drift and duplicate model definitions.
  • In React forms, Yup was useful because of ecosystem support, especially with Formik, though type inference is weaker than Zod.
  • io-ts is powerful in FP-heavy codebases, especially with Either, but it is more verbose.
  • Biggest win was safer boundaries, for example validating backend responses before they touched app state, so downstream code could rely on narrowed types confidently.

38. How do you design discriminated unions, and why are they useful for modeling state machines or async request states?

I design discriminated unions around a single stable tag field, usually type or status, then make each variant hold only the data valid for that case. That gives you precise narrowing with switch or if, and it makes impossible states unrepresentable.

  • Example async state: {status: 'idle'}, {status: 'loading'}, {status: 'success'; data: T}, {status: 'error'; error: Error}
  • The discriminator should be required and unique per variant, not optional or reused
  • Shared fields can live in each variant, but avoid giant partially-optional objects
  • In reducers or state machines, exhaustive switch checks catch missing transitions at compile time
  • It improves DX, because once status === 'success', TypeScript knows data exists

For state machines, this maps naturally to finite states and legal transitions. Instead of checking random booleans like isLoading && data, you model exactly what can happen, which reduces bugs and makes refactors safer.
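The async-state example above can be sketched end to end; `describeState` is a made-up consumer showing the narrowing.

```typescript
// One required, unique discriminator per variant; each variant carries only its own data.
type AsyncState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: Error };

function describeState<T>(state: AsyncState<T>): string {
  switch (state.status) {
    case "idle":
      return "not started";
    case "loading":
      return "in flight";
    case "success":
      return `got ${JSON.stringify(state.data)}`; // data is known to exist only here
    case "error":
      return `failed: ${state.error.message}`;
  }
}

console.log(describeState({ status: "success", data: 42 })); // got 42
```

Note there is no `isLoading && data` juggling: a state is exactly one variant, so impossible combinations cannot be constructed.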

39. How does TypeScript interact with Babel, esbuild, swc, or ts-node, and what trade-offs exist between type checking and transpilation speed?

TypeScript does two separate jobs, type checking and transpiling. Tools like Babel, esbuild, and swc mostly handle the transpilation part, so they strip types fast but do not do full semantic type checking. In practice, teams often pair them with tsc --noEmit in CI or a separate watch process to keep type safety.

  • tsc: type checks and transpiles, most accurate, usually slowest.
  • Babel: transpiles TS syntax via @babel/preset-typescript, fast, no real type checking.
  • esbuild: very fast transpilation and bundling, skips type checking.
  • swc: similar to esbuild, very fast, good for large apps, no full type checking.
  • ts-node: runs TS directly in Node, convenient for scripts/dev, slower unless using --transpileOnly.

Typical trade-off is speed versus safety in the hot path. Fast transpilers improve local dev and builds, while tsc remains the source of truth for catching actual type errors.

40. What are .d.ts files, and when have you had to write or patch declaration files for third-party libraries?

.d.ts files are TypeScript declaration files. They describe the shape of a library (its functions, classes, and module exports) without containing runtime code. TypeScript uses them for type checking and IntelliSense, especially for plain JavaScript packages.

I’ve written or patched them in a few common cases:

  • A third-party JS library had no types, so I created a minimal declare module 'lib' file with the APIs we actually used.
  • A package had outdated typings, so I used module augmentation to add missing options or fix incorrect return types.
  • In one project, a charting library exposed plugin hooks that weren’t typed, so I added interfaces for the callback params to make integrations safe.
  • If it was a short-term fix, I kept a local declaration file. If it was broadly useful, I’d open a PR to DefinitelyTyped or the library itself.
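A minimal local declaration for an untyped package might look like this; the package name `untyped-charts` and its API are hypothetical.

```typescript
// Hypothetical file: types/untyped-charts.d.ts
// Ambient declaration covering only the APIs we actually use.
declare module "untyped-charts" {
  export interface ChartOptions {
    width: number;
    height: number;
  }
  export function render(el: HTMLElement, opts: ChartOptions): void;
}
```

Anything the team does not call stays undeclared, which keeps the maintenance surface small.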

41. How do you type a JavaScript library that has weak or missing TypeScript support?

I usually treat it as a risk-reduction exercise: start with the small surface area I actually use, then tighten types over time instead of trying to perfectly model the whole library on day one.

  • First, check for community types, bundled types, or generated API docs, @types/..., types field, or JSDoc.
  • If types are missing, add a local *.d.ts file with declare module 'lib' and only define the functions, options, and return shapes I use.
  • Start with safe loose types like unknown over any, then narrow with wrappers, type guards, and validated adapters.
  • For messy APIs, create a typed facade in my codebase so the rest of the app never touches the weakly typed library directly.
  • If the library is important, I may contribute typings upstream or publish internal types, backed by tests using real usage examples.
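The typed-facade idea can be sketched like this; `legacyLib` stands in for a weakly typed import, and all names are illustrative.

```typescript
// Simulates an untyped third-party import (in practice: `const legacyLib: any = require("legacy-lib")`).
const legacyLib: any = {
  fetchUser: (id: number) => ({ id, name: "Ada" }),
};

// The contract the rest of the app is allowed to see.
interface User {
  id: number;
  name: string;
}

// Typed facade: the only place that touches the weakly typed library.
function getUser(id: number): User {
  const raw: unknown = legacyLib.fetchUser(id);
  if (typeof raw === "object" && raw !== null && "id" in raw && "name" in raw) {
    return raw as User; // asserted only after a structural check
  }
  throw new Error("Unexpected shape from legacyLib");
}

console.log(getUser(1).name);
```

If the library's behavior changes, the breakage is contained to this one adapter instead of leaking through the codebase.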

42. How do you ensure exhaustive checking in switches or conditionals, and why is that valuable?

I usually model states with discriminated unions, then force the compiler to prove I handled every case. In a switch, I add a default branch that assigns the value to never, like const _exhaustive: never = value. If I later add a new union member and forget to handle it, TypeScript errors immediately.

It’s valuable because it turns missing business logic into a compile-time failure instead of a runtime bug. That matters a lot for reducers, API result states, and UI rendering branches. You can do the same with if/else chains too, by ending with a never assertion helper like assertNever(x). It also makes refactors safer, since adding a new variant shows you every place that needs updating.
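The assertNever pattern in full, using a made-up `Shape` union for illustration:

```typescript
type Shape =
  | { kind: "circle"; r: number }
  | { kind: "square"; side: number };

// Only compiles in a default branch if every variant above was handled.
function assertNever(x: never): never {
  throw new Error(`Unhandled variant: ${JSON.stringify(x)}`);
}

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.r * s.r;
    case "square":
      return s.side * s.side;
    default:
      // Adding a new kind to Shape makes this line a compile error.
      return assertNever(s);
  }
}

console.log(area({ kind: "square", side: 3 })); // 9
```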

43. How have you used TypeScript with React, Angular, Vue, or Node.js, and what framework-specific typing challenges have you encountered?

Mostly with React and Node.js, plus some Angular on enterprise apps. I use TypeScript to make component contracts, API boundaries, and shared domain models explicit, so refactors are safer and autocomplete is actually useful.

  • In React, the main challenge is typing props, hooks, and context without overengineering, especially generic components and forwardRef.
  • I have dealt with event typing too, like ChangeEvent<HTMLInputElement> versus broader synthetic events.
  • In Node.js, the tricky part is runtime versus compile-time safety, so I pair TS types with validation libraries for request bodies and env vars.
  • In Angular, RxJS-heavy code can get hard to type cleanly, especially complex observable chains and strongly typed forms.
  • In Vue, when I have used it, the challenge was mostly around props, emits, and keeping inferred types clean in composables.

44. Tell me about a time when TypeScript caught a bug before production. What happened, and what did it prevent?

I’d answer this with a quick STAR structure: situation, the bug, the TypeScript signal, and the business impact.

At a previous team, we were refactoring a checkout flow after an API change. The backend renamed a field from totalAmount to amountTotal, but one part of the frontend still expected the old shape. TypeScript flagged it immediately because our API response types were generated and the component props were strictly typed. Without that, the UI would have rendered undefined, and in one case would have sent the wrong value into a payment summary. We caught it in development, updated the mapping layer, and avoided broken totals, payment confusion, and a pretty painful production hotfix.

45. How do you review TypeScript code differently from JavaScript code during pull requests?

I review the usual things in both, readability, correctness, tests, and maintainability, but TypeScript adds a whole extra layer: type design. I am not just asking "does it work?", I am asking "does the type system accurately describe the intent, and will it prevent future mistakes?"

  • I check for any, unsafe casts, and non-null assertions, unless they are clearly justified.
  • I look at type modeling, whether unions, generics, and discriminated unions express the domain cleanly.
  • I verify inferred types are not too loose, like accidental string | undefined or broad object shapes.
  • I review public APIs carefully, because exported types become long-term contracts.
  • I pay attention to runtime gaps, since TypeScript types disappear at runtime, so validation may still be needed.

In JavaScript, more of my focus goes to tests and defensive code. In TypeScript, I expect the types to carry part of that safety.

46. Describe a situation where you had to refactor types across many files. How did you plan the change and avoid regressions?

I’d answer this with a quick STAR structure: situation, approach, outcome, then emphasize risk control.

At a previous team, we had inconsistent API response types duplicated across dozens of frontend files. I refactored them into shared discriminated unions and a few reusable utility types.

  • First, I mapped usage with TS references, grep, and editor find-all, so I knew the blast radius.
  • I introduced new shared types alongside old ones, then migrated feature by feature instead of doing a big-bang change.
  • I used TypeScript errors as a checklist, strict mode, and a temporary compatibility layer to keep PRs reviewable.
  • To avoid regressions, I leaned on unit tests, a few integration tests around critical flows, and snapshotting API shapes.
  • Outcome: fewer duplicate types, safer narrowing, and future schema changes became much easier to roll out.

47. How would you decide whether to create a sophisticated reusable type utility versus writing a simpler explicit type in one place?

I’d optimize for readability first, then reuse. A clever utility is only worth it if it removes real duplication and stays understandable to the team.

  • Use a reusable utility when the pattern appears in 3 or more places, or represents a real domain rule.
  • Keep it local and explicit if it’s one-off, easier to read inline, or likely to change soon.
  • Consider maintenance cost, advanced conditional and mapped types can confuse people and slow onboarding.
  • Check inference and error messages, if the utility makes TypeScript errors harder to understand, that’s a red flag.
  • Prefer proven built-ins like Pick, Omit, Partial before inventing custom abstractions.

My rule is, abstract stable patterns, not speculative ones. If I build a utility, I’d name it clearly, document the intent, and make sure it simplifies usage rather than showing off type gymnastics.
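The "prefer proven built-ins" point in miniature; the `Order` model is a made-up example.

```typescript
interface Order {
  id: string;
  total: number;
  customerEmail: string;
}

// A built-in utility covers this need; no custom conditional-type machinery required.
type OrderSummary = Pick<Order, "id" | "total">;

const summary: OrderSummary = { id: "o-1", total: 42 };
console.log(summary.total); // 42
```

Only when a pattern like this repeats across several domains with a real shared rule would a named custom utility earn its keep.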

48. What are some common TypeScript pitfalls you have seen on teams, such as overusing any, excessive assertions, or overly complex utility types?

A few come up over and over, usually when teams want speed and accidentally erase TypeScript’s value.

  • any spreading through the codebase, it kills autocomplete, refactoring safety, and trust in types.
  • Too many as assertions, especially as unknown as X, which often hides real modeling problems.
  • Giant utility types that nobody can explain, clever types are bad if maintenance drops.
  • Weak domain modeling, using broad string or object instead of unions, discriminated unions, or proper interfaces.
  • Ignoring strict settings, especially strictNullChecks and noImplicitAny, which catch tons of bugs early.
  • Runtime and compile-time confusion, assuming a TS type guarantees API data shape without validation.
  • Over-generic abstractions, where reusable helpers become harder to use than duplicated code.

My fix is usually, tighten compiler settings, prefer explicit models, use utility types sparingly, and validate external data with something like Zod.

49. What is your approach when a teammate suggests bypassing the type system with as any to move faster?

I’d treat it as a speed vs. risk conversation, not a hard no. as any can unblock something, but it also removes the compiler safety net, so I’d first ask what friction they’re hitting and whether there’s a narrower fix.

  • Prefer the smallest escape hatch, like refining types, adding a type guard, or using unknown plus validation.
  • If we truly need a temporary bypass, I’d make it explicit, localized, and documented with a comment and follow-up ticket.
  • I’d avoid letting as any spread through shared APIs, because that’s where it creates long-term pain.
  • In a crunch, I’d align on criteria, when it’s acceptable, who owns cleanup, and by when.

In practice, I’ve done this on deadline-driven integrations, ship with one isolated assertion, then replace it once the backend contract was clarified.

50. If you joined a team whose TypeScript codebase had inconsistent patterns and low type quality, what would you improve first and why?

I’d start with the highest leverage fixes, the ones that reduce bugs without freezing delivery. The goal is not to make everything “perfect TypeScript” on day one, it’s to create safer defaults and a path to consistency.

  • Turn on stricter compiler options first, especially strict, noImplicitAny, and strictNullChecks.
  • Standardize linting and formatting, so the team stops creating new inconsistencies.
  • Identify risky areas, API boundaries, shared utilities, form handling, and add strong types there first.
  • Replace vague types like any, broad object, and unsafe assertions with explicit domain models.
  • Add lightweight team conventions, naming, union patterns, error handling, and when to use type vs interface.

Why first? Compiler rules and conventions give immediate feedback, prevent more debt, and make incremental cleanup realistic.

51. How do you stay current with changes in TypeScript releases, and can you mention a recent feature or improvement you found useful?

I keep it pretty lightweight but consistent. I follow the official TypeScript release notes and the team’s blog, and I usually skim each release for anything that affects typing ergonomics, config changes, or editor tooling. I also watch a few library maintainers on GitHub and X, because real-world adoption tells me which features are actually useful versus just interesting on paper. If something looks relevant, I try it in a small sandbox before using it in production.

One recent improvement I liked was NoInfer, added in TypeScript 5.4 to help control generic inference. It’s useful when a function has multiple inputs and you want one input to drive the inferred type, instead of TypeScript widening based on the other argument. That makes APIs safer and error messages clearer, especially in utility functions and reusable hooks.
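A small sketch of the NoInfer idea (requires TypeScript 5.4+; the `pickDefault` helper is made up for illustration):

```typescript
// Only `values` drives the inference of T; `fallback` is checked against it, not widened into it.
function pickDefault<T extends string>(values: T[], fallback: NoInfer<T>): T {
  return values.includes(fallback) ? fallback : values[0]!;
}

const color = pickDefault(["red", "blue"], "blue"); // T is "red" | "blue"
console.log(color); // blue

// Without NoInfer, the next call would widen T to include "green" and compile.
// With it, TypeScript reports: '"green"' is not assignable to '"red" | "blue"'.
// pickDefault(["red", "blue"], "green");
```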

52. Have you ever had to mentor less experienced developers in TypeScript? What concepts were hardest for them, and how did you teach them?

Yes. I usually mentor by pairing, reviewing real PRs, and teaching concepts in the codebase instead of in isolation. That helps people connect TypeScript rules to actual bugs and design decisions.

  • The hardest concept was often generics, especially when to make code reusable versus over-abstracting. I taught that with small utility examples, then applied it to API response types.
  • Another big one was narrowing, unions, and unknown versus any. I used runtime validation examples to show how types become safer after checks.
  • People also struggled with mapped types and utility types like Partial or Pick. I introduced those only after they understood object shapes well.
  • My approach was explain, pair, let them try, then give targeted feedback in PRs so the lesson stuck.

Get Interview Coaching from Typescript Experts

Knowing the questions is just the start. Work with experienced professionals who can help you perfect your answers, improve your presentation, and boost your confidence.

Complete your Typescript interview preparation

Comprehensive support to help you succeed at every stage of your interview journey

Still not convinced? Don't just take our word for it

We've already delivered 1-on-1 mentorship to thousands of students, professionals, managers and executives. Even better, they've left an average rating of 4.9 out of 5 for our mentors.

Find Typescript Interview Coaches